One approach to regular, low-stakes, short factual tests.

I find the way in which the Quizlet app has taken off fascinating. Millions (or billions?) have been pumped into ed tech, but Quizlet did not take off because education technology companies marketed it to schools. Pupils and teachers had to ‘discover’ Quizlet. They appreciated its usefulness for that most basic purpose of education – learning. The growth of Quizlet was ‘bottom up’, while schools continue to have technological solutions looking for problems thrust upon them from above. What an indictment of the ed tech industry.

There has been a recent growth of interest in methods of ensuring students retain, long term, the content they have been taught. This is in part due to the influence of research in cognitive psychology, but also due to some influential education bloggers such as Joe Kirby and the changing educational climate caused by a shift away from modular examinations. Wouldn’t it be wonderful if innovation in technology focused on finding simple solutions to actual problems (like Quizlet) instead of chasing Sugata Mitra’s unicorn of revolutionising learning?

In the meantime we must look for useful ways to ensure students learn key information without the help of the ed tech industry. I was very impressed by the ideas Steve Mastin shared at the Historical Association conference yesterday but I realised I had never blogged about my own approach and its pros and cons compared with others I have come across.

I developed a system of regular testing for our history and politics department about four years ago. I didn’t know about the research from cognitive psychology back then and instead used what I had learnt from using Direct Instruction programmes with my primary-aged children.

Key features of this approach to regular factual testing at GCSE and A level:

  • Approximately once a fortnight a class is given a learning homework, probably at the end of a topic or sub-topic.
  • All children are given a guidance sheet that lists exactly what areas will come up in the test and need to be learnt. Often textbook page references are provided so key material can be easily located.

[Image: example test]

  • The items chosen for the test reflect the test writer’s judgement of the key facts that could provide a minimum framework of knowledge for that topic (N.B. the students are familiar with the format and know how much material will be sufficient for an ‘explain’ question). The way knowledge has been presented in notes or the textbook can make it easier or harder for the students to find relevant material to learn. In the example above the textbook very conveniently summarises all they need to know.
  • The test normally takes about 10–15 minutes of a lesson. The test is always out of 20 and the pass mark is high, always 14/20. Any students who fail the test have to resit it in their own time. We give rewards for full marks in the test. The test writer must ensure that the test represents a reasonable amount to ask all students to learn for homework or the system won’t work.
  • There is no time limit for the test. I just take them in when all are finished.

I haven’t developed ‘knowledge organisers’, even though I can see their advantages, because I don’t want to limit test items to the amount of material that can be fitted onto one sheet of paper. Additionally, I’ve always felt a bit nervous about sending the message that there is something comprehensive about the material selected for testing. I’ve found my approach has some advantages and disadvantages.

Advantages of this approach to testing:

  • It is regular enough that tests never have to cover too much material and become daunting.
  • I can set a test that I can reasonably expect all students in the class to pass if they do their homework.
  • The regularity allows a familiar routine to develop. The students adjust to the routine quickly and they quite like it.
  • The guidance sheet works better than simply telling students which facts to learn. This is because they must go back to their notes or textbook and find the information, which provides a form of review and requires some active thought about the topic.
  • The guidance sheet works when it is clear enough to ensure all students can find the information but some thought is still necessary to locate the key points.
  • Test questions often ask students to use information in the way they will need to use it in extended writing. For example, I won’t just ask questions like “When did Hitler come to power?” I will also ask questions like “Give two reasons why Hitler ordered the Night of the Long Knives.”
  • Always making the test out of 20 allows students to try to beat their last total. The predictability of the pass mark also leads to acceptance of it.
  • Initially we get lots of retakers, but the numbers very quickly dwindle as the students realise that failing to do their homework has inevitable consequences.
  • The insistence on retaking any failed tests means all students really do end up having to learn a framework of key knowledge.
  • I’ve found that ensuring all students learn a minimum framework of knowledge before moving on has made it easier to teach each subsequent topic. There is a lovely sense of steadily accumulating knowledge and understanding. I also seem to be getting through the course material faster despite the time taken for testing.

Disadvantages of my approach to testing:

  • It can only work in a school with a culture of setting regular homework that is generally completed.
  • Teachers have to mark the tests because the responses are not simple factual answers. I think this is a price worth paying for a wider range of useful test items but I can see that this becomes more challenging depending on workload.
  • There is no neat and simple knowledge organiser listing key facts.
  • We’re fallible. Sometimes guidance isn’t as clear as intended, and you need to ensure test materials really are refined for the next year and that any problems that arise are not just forgotten.
  • If you’re not strict about your marking, your class will gradually learn less and less for each point on the guidance sheet.
  • This system does not have a built in mechanism for reviewing old test material in a systematic way.

We have not really found that lower-ability students (within an ability range of A*–D) struggle. I know that other schools using similar testing with wider ability ranges have not encountered significant problems either. Sometimes students tell us that they find it hard to learn the material. A few do struggle to develop the self-discipline necessary to settle down to some learning, but we haven’t had a student who is incapable when they devote a reasonable amount of time. Given that those complaining are usually just making an excuse for failing to do their homework, I generally respond that if they can’t learn the material for one tiny test, how on earth are they proposing to learn a whole GCSE? I check that anyone who fails a test is revising efficiently, but after a few retakes it transpires that they don’t, after all, have significant difficulties learning the material. Many students who are weak on paper like the tests.

We also set regular tests of chronology. At least once a week my class will put events printed onto cards into chronological order and every now and then I give them a test like the one below after a homework or two to learn the events. I don’t have to mark these myself – which is rather an advantage!

[Image: chronology test photo]


I very much liked Steve Mastin’s approach of giving periodic multiple choice tests that review old material. Good multiple choice questions can be really useful but are very hard to write. Which brings me back to my first point. Come on, education technology industry! How about dropping the development of impractical, time-consuming and gimmicky apps? We need those with funding and expertise to work in conjunction with curriculum subject experts to develop genuinely useful and subject-specific forms of assessment. It must be possible to develop products that can really help us assess and track success in learning the key information children need to know in each subject.

The pseudo-expert

A week or so after our first child’s birth we met our health visitor, Penny. She was possibly in her early sixties and had worked with babies all her life. She was rather forthright in her advice but with the wisdom of 40 years behind her I was always open to her suggestions. Our baby refused to sleep in her first weeks. This meant I was getting one or two hours sleep a night myself and Penny’s reassuring advice kept me going. I can never forget one afternoon when our daughter was about 15 days old and Penny walked into our living room, taking in the situation almost immediately. “Now Heather,” she said, “I’m just going to pop baby in her Moses basket on her front. Don’t worry that she is on her front as you can keep an eye on her and if I roll up this cot blanket (deftly twisted in seconds) and put it under her tummy the pressure will make baby feel more comfortable…” Our daughter fell asleep immediately and Penny left soon after but SIX WHOLE HOURS later our baby was STILL sleeping soundly. She knew the specific risk to our baby from sleeping on her front was negligible and that it might just pull the parents back from the brink. I’m grateful to her for using her professional judgement that day.

Penny’s practical but sometimes controversial wisdom contrasted with the general quality of advice available at the weekly baby clinic. Mums who were unable to think of an excuse to queue for Penny were told that ‘each baby is different’ and ‘mum and baby need to find their own way’. The other health visitors did dispense some forms of advice. If your baby wasn’t sleeping you could “try cutting out food types. Some mums swear it’s broccoli that does it” or “you could try homeopathy.” The other health visitors had no time for Penny’s old-fashioned belief that mothers could be told how to care for their babies. Instead of sharing acquired wisdom they uncritically passed on to mothers the latest diktats from on high (that seemed to originate from the pressure groups that held most sway over government) and a garbled mish-mash of pseudo-science.

A Twitter conversation today brought back those memories. The early years teacher I was in discussion with bemoaned the lack of proper training for early years practitioners. Fair enough, but what was striking was the examples she gave of the consequent poor practice. Apparently without proper training teachers wouldn’t understand about ‘developmental readiness’, ‘retained reflexes’ or the mental health problems caused by a ‘too much too soon’ curriculum. The problem is that these examples of expertise to be gained from ‘proper’ training are actually just unproven theory or pseudo-science. The wisdom of the lady in her fifties who has worked for donkey’s years at the little local day nursery is suspect if she is not ‘properly trained’. But trained in what? The modern reluctance to tell others how they should conduct themselves has created a vacuum that must be filled with pseudo-expertise masquerading as wisdom.

How often do teachers feel that they can’t point to their successful track record to prove their worth and instead must advocate shiny ‘initiatives’ based on the latest pastoral or pedagogical fads dressed up as science? The expert is far from always right but I value their wisdom. I also value the considered use of scientific research in education. Too often though these are sidelined and replaced with something far worse.