I find the way in which the Quizlet app has taken off fascinating. Millions (or billions?) have been pumped into ed tech, but Quizlet did not take off because education technology companies marketed it to schools. Pupils and teachers had to ‘discover’ Quizlet. They appreciated its usefulness for that most basic purpose of education – learning. The growth of Quizlet was ‘bottom up’, while schools continue to have technological solutions looking for problems thrust upon them from above. What an indictment of the ed tech industry.
There has been a recent growth of interest in methods of ensuring students retain, long term, the content they have been taught. This is partly due to the influence of research in cognitive psychology, but also to some influential education bloggers such as Joe Kirby and to the changing educational climate caused by the shift away from modular examinations. Wouldn’t it be wonderful if innovation in technology focused on finding simple solutions to actual problems (like Quizlet) instead of chasing Sugata Mitra’s unicorn of revolutionising learning?
In the meantime we must look for useful ways to ensure students learn key information without the help of the ed tech industry. I was very impressed by the ideas Steve Mastin shared at the Historical Association conference yesterday but I realised I had never blogged about my own approach and its pros and cons compared with others I have come across.
I developed a system of regular testing for our history and politics department about four years ago. I didn’t know about the research from cognitive psychology back then and instead used what I had learnt from using Direct Instruction programmes with my primary-aged children.
Key features of this approach to regular factual testing at GCSE and A level:
- Approximately once a fortnight a class is given a learning homework, usually at the end of a topic or sub-topic.
- All children are given a guidance sheet that lists exactly what areas will come up in the test and need to be learnt. Often textbook page references are provided so key material can be easily located.
- The items chosen for the test reflect the test writer’s judgement of which key facts would provide a minimum framework of knowledge for that topic (n.b. the students are familiar with the format and know how much material will be sufficient for an ‘explain’ question). The way knowledge has been presented in notes or the textbook can make it easier or harder for students to find the relevant material to learn. In the example above, the textbook very conveniently summarises all they need to know.
- The test normally takes about 10-15 minutes of a lesson. The test is always out of 20 and the pass mark is high, always 14/20. Any students who fail the test have to resit it in their own time. We give rewards for full marks in the test. The test writer must try to ensure that the test represents a reasonable amount to ask all students to learn for homework, or the system won’t work.
- There is no time limit for the test. I just take them in when all are finished.
I haven’t developed ‘knowledge organisers’, even though I can see their advantages, because I don’t want to limit test items to the amount of material that can be fitted onto one sheet of paper. Additionally, I’ve always felt a bit nervous about sending the message that the material selected for testing is somehow comprehensive. I’ve found my approach has some advantages and disadvantages.
Advantages of this approach to testing:
- It is regular enough that tests never have to cover too much material and become daunting.
- I can set a test that I can reasonably expect all students in the class to pass if they do their homework.
- The regularity allows a familiar routine to develop. The students adjust to the routine quickly and they quite like it.
- The guidance sheet works better than simply telling students which facts to learn, because they must go back to their notes or textbook and find the information. This provides a form of review and requires some active thought about the topic.
- The guidance sheet works when it is clear enough to ensure all students can find the information but some thought is still necessary to locate the key points.
- Test questions often ask students to use information in the way they will need to use it in extended writing. For example, I won’t just ask questions like “When did Hitler come to power?” I will also ask questions like “Give two reasons why Hitler ordered the Night of the Long Knives”.
- Always making the test out of 20 allows students to try to beat their last total. The predictability of the pass mark also leads to its acceptance.
- Initially we get lots of retakers, but the numbers quickly dwindle as students realise that failing to do their homework has inevitable consequences.
- The insistence on retaking any failed tests means all students really do end up having to learn a framework of key knowledge.
- I’ve found that ensuring all students learn a minimum framework of knowledge before moving on has made it easier to teach each subsequent topic. There is a lovely sense of steadily accumulating knowledge and understanding. I also seem to be getting through the course material faster despite the time taken for testing.
Disadvantages of my approach to testing:
- It can only work in a school with a culture of setting regular homework that is generally completed.
- Teachers have to mark the tests themselves because the responses are not simple factual answers. I think this is a price worth paying for a wider range of useful test items, but I can see that it becomes more challenging depending on workload.
- There is no neat and simple knowledge organiser listing key facts.
- We’re fallible. Sometimes guidance isn’t as clear as intended, so you need to ensure test materials really are refined for the following year and that problems that arise are not simply forgotten.
- If you’re not strict about your marking, your class will gradually learn less and less for each point on the guidance sheet.
- This system does not have a built in mechanism for reviewing old test material in a systematic way.
We have not really found that lower-ability students (within an ability range of A*-D) struggle. I know that other schools using similar testing with wider ability ranges have not encountered significant problems either. Sometimes students tell us that they find it hard to learn the material. A few do struggle to develop the self-discipline necessary to settle down to some learning, but we haven’t had a student who is incapable when they devote a reasonable amount of time. Given that those complaining are usually just making an excuse for failing to do their homework, I generally respond that if they can’t learn the material for one tiny test, how on earth are they proposing to learn a whole GCSE? I check that anyone who fails a test is revising efficiently, but after a few retakes it usually transpires that they don’t, after all, have significant difficulties learning the material. Many students who are weak on paper like the tests.
We also set regular tests of chronology. At least once a week my class will put events printed onto cards into chronological order and every now and then I give them a test like the one below after a homework or two to learn the events. I don’t have to mark these myself – which is rather an advantage!
I very much liked Steve Mastin’s approach of giving periodic multiple choice tests which review old material. Good multiple choice questions can be really useful but are very hard to write. Which brings me back to my first point. Come on, education technology industry! How about dropping the development of impractical, time-consuming and gimmicky apps? We need those with funding and expertise to work in conjunction with curriculum subject experts to develop genuinely useful, subject-specific forms of assessment. It must be possible to develop products that can really help us assess and track success in learning the key information children need to know in each subject.