How do you know if your students are learning? There are two usual ways of tracking them:
1. Check how well they have learnt the specific stuff you have taught them.
2. Decide if they have moved up a generic hierarchy (such as levels) devised to describe ‘progress’ in a particular subject.
I am writing this quick blog because option two is so standard that Ofsted inspections depend on it, but my school uses option one, and I thought people might be interested in how differently things can be done.

The demands of comparative accountability require state schools to use progress measures. The second option also arises from a distaste within the education establishment for the idea that education is about learning a body of knowledge. The idea of actually comparing schools by checking how many students in a year group can recite or explain Hooke’s Law, or a myriad of other facts, seems almost absurd (although it is what GCSEs do). However, what is more absurd, to an outsider such as myself, is the alternative. How can I really check my year 9 history students have made ‘progress’ over time in some generic sense that doesn’t actually hinge on whether they have learnt the latest stuff they have been taught? That apparent ‘progress’ will evaporate if students make less effort on the next topic (or my teaching is poor). Yet despite this, progress in history is expected to be linear and measurable. I do think there is such a thing as ‘being good at history’, but it is too contingent on grasping content to be independently measurable.

The old national curriculum levels tried to suggest there was general progress that could be made in history, or science, or geography that was generic and not very content specific. All these ideas are pretty problematic; if you are interested in reading about them, see here, here and here. Using the idea that a child is making ‘progress’ rather than simply learning more stuff can work better in subjects where the content is more hierarchical, such as reading at primary level, maths or languages, although even then it can lead to short-termism in approaches, and it can be problematic because models of progress are often inevitably flawed.
Levels are now pretty widely criticised, but the point of my blog is to argue that there is no point choosing another model based on charting overall progress in a subject over time. That whole idea, which started with the national curriculum, is flawed.
In my school we are able to check if students are getting better at learning what they have been taught (option one). We do not track progress. Our tracking starts with benchmark tests on pupil entry to the school, and then involves me giving my impressionistic grade of a student’s standard of work and effort about every half term. If they have learnt and understood the material really well, relative to the cohort, they get an A.
[Pause to allow people to recoil in horror…]
…I’ll continue. If I think they have learnt it fairly well, they get a B. If their knowledge and understanding is fairly incomplete, they get a C, and so on. So if a student starts to get fewer As and more Bs on their half-termly report cards, this shows up in the data analysis. Actually, it is rare for the report card to come as a surprise. I do wonder if, in practice, this approach is really what is happening anyway in many schools when they assign levels (see a brilliant description here): “The students who have learnt the stuff really well must be a level 6…”. I don’t think my school’s system is especially wonderful; it is impressionistic. However, at least it is based on judging whether a student has learnt, rather than on a vague and problematic notion of ‘progress’.
In my school most interest is placed on the effort grades (1–6) also provided. A string of 4s for effort can lead to a lot of hassle for a student, so they try to avoid that. What is fascinating to me is the way my school’s system focuses on whether the student is making the effort to learn, rather than on whether the teacher is successfully teaching. I think this is very healthy from a student’s perspective. It is not that this approach leaves department heads and management unaware of weak teaching: we might know from a myriad of indicators who is struggling to teach effectively in our schools. Statistical evidence in my school can come from comparing results in subject tests and end-of-year internal exams, as well as external exams. Anyway, I’d say the difficulty isn’t identifying a teacher who is really struggling; it is effectively helping them to improve.
I am in favour of accountability, but the uncomfortable fact is that the only even moderately effective way to compare schools fairly is through the setting of external tests (such as SATs, GCSEs and A levels), which can compare how much stuff students in different schools have learnt and understood. I don’t say this because I love SATs or think they were that great. However, the idea that you can chart a student’s progress in the *majority* of secondary subjects, let alone assume the stats you produce provide a common currency, is a mirage.