The curator of memories (and other metaphors)

Teaching is a complex job. As an experienced teacher walks out of a classroom, their mind subconsciously assesses the 3D mental map the lesson has just created.

A bad lesson may lead to a morose consideration of a stubbornly undulating terrain. The high points of the teacher's mental topography are the children or groups of children this professional just knows have 'got it'. The lows are failures of understanding, so painfully apparent to the professional from all the lesson's subtle (and less subtle) feedback cues. To tend to the understanding of all 30 children (to mix my metaphors) can often feel like spinning multiple plates.

But I realised a while ago that teachers shouldn't only create 'understanding' – the transient appreciation of the content learned just now. That newly learned content needs to be remembered because 'if nothing has been stored in long term memory, nothing has been learned'. In the last few years I think my teaching has improved because I have become not just a creator of understanding but an active, conscious curator of those newly formed understandings, freshly and precariously held in the memories of my students.

One of the most useful ways to strengthen memory is through short, low stakes factual tests. I set fixed, regular tests and work with other teachers, helping them introduce testing. Regular pre-planned testing is also a way we can automate our teaching. This can be no bad thing, as automation saves time and relieves that plate spinning stress.

However, following any practice unthinkingly, whether regular testing or an Ofsted outstanding lesson formula, is dangerous. Rather than exercising our professional judgement we follow the magic recipe (sorry, another metaphor). There are superb off the peg teaching courses out there, perhaps akin to a magic recipe any teacher could follow and get results. Nonetheless we teachers just can’t switch off. To be successful we must always consciously work at creating and then curating knowledge.

Below I outline some of the methods I use to ‘curate knowledge’.

I think very carefully about the content of each test I write and try to choose the items that will be most useful – pieces of knowledge most likely to trigger whole webs of interconnections in my students' minds. This means the lines of explanation I used in class reappear in the phrasing of the test items, to re-trigger the same web of knowledge in my students' minds. Here is an example of a test I've used with my year 7 class. I wanted them to be in a position to write confidently about the causes of the Reformation in England in three weeks' time. This meant teaching a whole range of ideas but then curating them, keeping them alive in the minds of my pupils so they could all be used together at essay time. Therefore I recycled test items. This test recapped old learning on Wolsey and Erasmus and reviewed fresh learning on Luther's teachings.

Test 4 – up to Luther:

Name the very corrupt churchman who ran the government of England for Henry VIII until 1530.

Name a famous critic of corruption in the Catholic Church.

What was an indulgence?

Why did Pope Leo X sell indulgences?

Name the monk who sold indulgences in Germany.

How did Luther decide that people get to heaven?

What different belief did Catholics have about how you get to heaven?

Luther said that many beliefs of the Catholic Church were wrong because they weren’t in the bible. Give 5 examples of Catholic beliefs which Luther criticised.

1.
2.
3.
4.
5.

To keep memories alive in the minds of all your 30 students takes more than the weekly test though.

That goal of 'active curation' meant I thought hard about what else I could use to warm up my pupils' memories. In this quiz I took another tack. I hoped that a reminder of the colourful descriptions I had given of key historical characters would trigger those rich interconnected memories I sought. (nb apologies for the wonky formatting in WordPress!)

Join the character to the correct description:

Edward IV (of York)

Henry VII (of Lancaster)

Edward V and brother Richard (died 1483)

Richard III

Johannes Gutenberg

Empson and Dudley

Elizabeth of York

The ‘pretenders’ Lambert Simnel and Perkin Warbeck

Became king in 1485, defeating Richard III at the Battle of Bosworth

They led rebellions against Henry VII by suggesting they were Edward IV’s relatives

Married Henry VII. Daughter of Edward IV and mother of Henry VIII.

Became king in 1483. Brother of Edward IV. Probably killed his nephews.

Died 1483, leaving his 12 year old son Edward to inherit the throne and his brother in charge.

The ‘princes in the tower’. Sons of Edward IV, probably killed by their uncle Richard III.

Established the first printing press in Germany in 1450

Very unpopular officials of Henry VII who made nobles and other people pay the miserly Henry VII lots of money to help him stay powerful. They were executed when Henry VII died.

Pope Alexander VI

Pope Leo X

Prince Arthur

Catherine of Aragon

Tetzel

Erasmus

Martin Luther

Cardinal Wolsey

Eldest son of Henry VII. Died in 1502 aged 15 after marrying Catherine of Aragon.

Sold indulgences around Germany in 1517. A great salesman.

A very clever Catholic who wrote books criticising corruption in the church.

A pope who was famous for ‘debauchery’. He held all night parties and had affairs.

First wife of Henry VIII. A Spanish princess who had a daughter called Mary.

A monk who began to argue in 1517 that the teachings of the Catholic Church were wrong and you get to heaven by ‘faith alone’.

A pope who organised for indulgences to be sold to pay for St Peter’s church in Rome.

Corrupt churchman who ran England for Henry VIII until he failed to get Henry’s divorce in 1530. From poor background but very arrogant. Built Hampton Court Palace.

To return to the plate spinning metaphor: I deliberately chose items for this quiz that I knew would give another spin to the memory plates in particular children's minds. Look at the bottom description of Cardinal Wolsey. Some of my class had been taken by a description of his arrogance. I made sure to include that point in my description of him here, but that word 'corrupt' is also in there on purpose, as I hoped to reawaken notions of the word learned previously. I remembered a number in the class nodding vigorously at the mention of Hampton Court Palace, so I gave the memory plates another shove by adding that too. I phrased these descriptions to latch onto previously taught memory hooks of the sort I've outlined.

I was aware that the chronology of events was still an issue, so the class worked on putting sets of 5 events in order, first for a homework and then repeatedly over a series of lessons (see below). I've put Gutenberg in the first set to emphasise a chronological point: I wasn't sure many in the class had really grasped that printing presses were well established by the time of Luther. Many of the other points echo the learning for the basic knowledge tests, but the same details are now in the context of testing chronology. While the class thought about chronological order I was simultaneously taking the opportunity to get those memory plates spinning again.

Set 1:
  • Henry VII marries Edward IV's daughter, Elizabeth of York. This unites the rival noble 'houses' of Lancaster and York.
  • Edward IV dies
  • Henry VII wins the Battle of Bosworth
  • Richard III becomes king
  • Gutenberg sets up his first printing press

Set 2:
  • Thetford Priory is closed
  • Thomas Cromwell is executed
  • The Dissolution of the Monasteries begins
  • Henry VIII dies
  • Henry VIII gets the Act of Supremacy passed by Parliament. This makes him head of the Church of England instead of the Pope.

Set 3:
  • Martin Luther nails his 95 Theses to the door of the church in the German town of Wittenberg (probably)
  • Henry VIII marries Anne Boleyn (who is pregnant with their daughter Elizabeth)
  • Henry VIII decides he wants to divorce Catherine of Aragon
  • Henry VIII becomes king
  • Pope Clement (prisoner of Holy Roman Emperor Charles V) refuses to grant Henry VIII a divorce

Set 4:
  • Pope Leo X commissions Tetzel to sell indulgences around Germany to pay for rebuilding St Peter's Church
  • Henry VIII becomes king
  • Pope Clement refuses to give Henry a divorce
  • Pope Clement becomes a prisoner of Holy Roman Emperor Charles V whose army have captured Rome
  • Henry VIII gets the Act of Supremacy passed by Parliament. This makes him head of the Church of England instead of the Pope.

Once I am happy my class have some confidence with these bite sized chronologies they can begin to practise putting longer strings of events into order in a card sort format. I'll keep adding to the card sort below for the rest of the year. That means whenever it is a starter activity all that old knowledge is reawakened. Note that I can't resist using the marriage to Anne Boleyn event card to give a sneaky fresh spin to the Elizabeth I memory plate…

Gutenberg invents the printing press in Germany
Edward IV dies leaving his young son, Edward V to be king.
Richard III makes himself king, probably murdering Edward V.
Henry Tudor beats Richard III at the Battle of Bosworth and becomes Henry VII.
Henry VIII becomes king
Henry VIII marries Catherine of Aragon
Pope Leo X commissions Tetzel to sell indulgences around Germany to pay for the restoration of St Peter’s Basilica in Rome.
Luther publicises his 95 theses criticising Catholic beliefs
Pope Clement becomes prisoner of Emperor Charles V. He refuses Cardinal Wolsey's request for a divorce for Henry VIII.
Henry marries Anne Boleyn, pregnant with Elizabeth.
Parliament passes the Act of Supremacy in 1534 making Henry VIII head of the Church of England instead of the Pope.
Dissolution of the Monasteries begins

The quiz below got a number of outings. It will come out again to prepare the ground for Puritanism and Archbishop Laud. I knew how useful developed notions of these terms would be later and so I curated that knowledge as best I could ready for future use and development.

Quiz! Which ideas are Catholic (C) and which are Luther's Protestant ideas (P)?

1. Priests are allowed to marry and are encouraged to live like ordinary people.
2. The head of the church is the Pope, who lives in the Vatican City, Rome.
3. The bible SHOULD be translated from Latin into ordinary language.
4. Church services (called Mass) should be in Latin, as should the bible.
5. Nuns or monks should live religious lives in monasteries or abbeys.
6. Churches are plain so as not to distract people from thinking about God for themselves.
7. What is written in the bible should replace traditional practices.
8. Churches are colourful and decorated with lots of gold and painting.
9. You get out of purgatory by doing good works.
10. People should pray to the Virgin Mary, pray to saints and keep relics.
11. You get to heaven through faith alone – what you believe – not what you do.

Meanwhile I kept going with the standard regular testing, which is set as a learning homework. There are enormous benefits to building habitual working practices. You might think I had no time to teach the actual material with all that supplementary recap, but I only averaged one recap session within each lesson. You also move faster when your class carry in their heads so much useful and relevant foundational knowledge.

Memory curation starts with careful planning of the knowledge you want children to remember. It involves presenting that knowledge in ways that make it memorable, consciously creating memory hooks as you teach. Tending memories means planning new tasks that utilise old learning wherever possible. It means an ongoing awareness of the likely memories, as well as the understanding, of each of your 30 class members. What memory plates are spinning in their minds, and what actions might be necessary to keep all those different plates spinning?

Knowledge organisers: fit for purpose?

Definition of a knowledge organiser: Summary of what a student needs to know that must be fitted onto an A4 sheet of paper.

Desk bins: Stuff I Don't Need to Know…

If you google the term ‘knowledge organisers’ you’ll find a mass of examples. They are on sale on the TES resource site – some sheets of A4 print costing up to £7.50. It seems knowledge organisers have taken off. Teachers up and down the country are beavering away to summarise what needs to be known in their subject area.

It is good news that teachers are starting to think more about curriculum. More discussion of what is being taught, how it should be sequenced and how it can be remembered is long overdue. However, I think there is a significant weakness with some of these documents. I looked at lots of knowledge organisers to prepare for training our curriculum leaders, and probably the single biggest weakness I saw was a confusion over purpose.

 

I think there are three very valid purposes for knowledge organisers:

  A. Curriculum mapping – for the TEACHER

Identifying powerful knowledge, planning to build schemas, identifying transferable knowledge and mapping progression in knowledge.

  B. For reference – for the PUPIL

In place of a textbook or a form of summary notes for pupils to reference.

  C. A list of revision items – for the PUPIL (and possibly the parents)

What the teacher has decided ALL pupils need to know as a minimum at the end of the topic.

 

All three purposes can be valid but when I look at the mass of organisers online I suspect there has often been a lack of clarity about the purpose the knowledge organiser is to serve.

Classic confusions of purpose:

  1. Confusing a curriculum mapping document with a reference document:

A teacher sits down and teases out what knowledge seems crucial for a topic. As they engage in this careful thinking they create a dense document full of references that summarises their ideas. So far so good… but a document that summarises a teacher's thinking is unlikely to be in the best format for a child to use. The child, given this document, sees what looks like a mass of information in tiny text, crammed onto one sheet of A4. They have no real notion of which bits to learn, how to prioritise all that detail or how to apply it. This knowledge is self-evident to the teacher but not to the child.

  2. Confusing a knowledge organiser with a textbook:

Teachers who have written textbooks tell me that there is a painstaking editorial process to ensure quality. Despite this there is a cottage industry of teachers writing series of knowledge organisers which amount to their own textbooks. Sometimes this is unavoidable. Some textbooks are poor and some topics aren’t covered in the textbooks available. Perhaps sometimes the desperate and continual begging of teachers that their school should prioritise the purchase of textbooks falls on deaf ears and teachers have no choice but to spend every evening creating their own textbooks photocopied on A4 paper…

…but perhaps we all sometimes need to remind ourselves that there is no virtue in reinventing the wheel.

  3. Confusing a textbook with summary notes:

The information included on an A4 sheet of paper necessarily lacks the explanatory context contained in a textbook or detailed notes. If such summaries are used in place of a textbook or detailed notes the student will lack the explanation they need to make sense of the detail.

  4. Confusing a reference document or notes with a list of revision items for a test:

If we want all pupils to acquire mastery of some basics we can list these basic facts we have identified as threshold knowledge in a knowledge organiser. We can then check that the whole class know these facts using a test. The test requires the act of recall which also strengthens the memory of these details in our pupils’ minds.

Often, however, pupils are given reference documents to learn. In this situation the details will be too extensive to be learnt for one test. It is not possible to expect the whole class to know everything listed, and so the teacher cannot ensure that all pupils have mastered some identified 'threshold' facts. Weaker students will be very poor at recognising which details are the most important to focus on learning, at realising what is likely to come up in a test, and at anticipating the format in which it will be asked. Many will also find that a longer reference document contains an overwhelming amount of detail and give up. The chance to build self-efficacy and thus self-esteem has been lost.

 

If you are developing knowledge organisers to facilitate factual testing then your focus is on Purpose C – creating a list of revision items. Below is a list of criteria I think are worth considering:

  1. Purpose (to facilitate mastery testing of a list of revision items)
  • Exclude knowledge present for the benefit of the teacher.
  • Exclude explanatory detail which should be in notes or a textbook.
  2. Amount
  • A short topic's worth (e.g. two weeks' teaching at GCSE)
  • An amount that all in the class can learn
  • Be careful of expectations that are too low; if necessary, ramp up demand once the habit is in place.
  3. Threshold or most 'powerful' knowledge
  • Which knowledge is necessary for the topic?
  • Which knowledge is 'collectively sufficient' for the topic?
  • Which knowledge will allow future learning of subsequent topics?
  • Which knowledge will best prompt retrieval of chunks of explanatory detail?
  • CUT any extraneous detail (even if it looks pretty)
  • Include relevant definitions, brief lists of factors/reasons, arguments, quotes, diagrams, summaries etc.
  • Check accuracy (especially when adapting internet finds)
  4. Necessary prior knowledge
  • Does knowledge included in the organiser presume a grasp of other material unlikely yet to be mastered?
  5. Concise wording
  • Is knowledge phrased in the way you wish it to be learned?

Happy knowledge organising!

 

That’s the easy bit

 

A while ago I noticed that my daughter seemed to be talking lots about the geography she had learnt. I was pleased about that but any history teacher will appreciate my chagrin that, by comparison, she barely mentioned her history. Oddly when I asked her about her history lessons she was quite enthusiastic. She was having lots of fun in class but when pushed she mentioned the games she was playing not the history she was learning.

When I trained to teach 22 years ago I thought that fun activities were the top priority and I always planned creative and somewhat quirky activities for my students. Each series of lessons would culminate in a highly motivating activity to build deeper understanding. Why set some boring questions on 19thC factory conditions when your students can write a TV script by an investigative journalist uncovering the horrors? Why write an essay on the significance of Boulton and Watt when you can set a role play in which students take the part of each entrepreneur and debate with each other? I would roll out an ongoing feast of fun for my students. My first job was (I felt unfortunately) in an independent school where teaching was generally quite traditional. If I am honest, I felt my KS3 teaching was superior to my colleagues' due to my clever activities and was also more motivating for the students.

Despite being convinced at that time of the superiority of my focus, over the years I have gradually drifted away from lesson planning that focuses on imaginative tasks. You might think that this is because I have grown lazy but I'm unconvinced. It isn't so hard to think up an imaginative task. Having taught for donkey's years I find that five minutes of ruminative pen nibbling is usually enough to come up with something, and there are plenty of sites out there full of ideas if not. To misquote the White Queen, I feel like I could probably think of six imaginative tasks before breakfast. So why has my focus shifted?

In part I don’t do so many creative activities because they take lots of lesson time to complete. I’ve been teaching for the last fourteen years in a 13+ school and there isn’t time to fit in as many of these activities when teaching GCSE and A level. This should give pause for thought. At GCSE I instinctively didn’t seem to think these creative activities were the best use of valuable learning time – interesting.

In fact I'd say that the more focused I became on the quality of the historical knowledge and understanding of my students the less I used these activities. There was one clear turning point for me: the moment I realised that while such activities could be good consolidation tasks they were generally quite a poor means of building knowledge and understanding. I realised I had been buoyed along by the third of students in class that produced clever or amusing or insightful responses to my tasks and had rather glazed over the more lacklustre responses of the majority. Those students who already had a decent grasp of the subject matter and the issues were able to demonstrate that grasp in the creative task set. It was a time consuming form of consolidation for them but often fun – fair enough. The rest either focused on perfecting the form of the activity (e.g. if I set a TV style investigation there was lots of doorstep interview conflict portrayed…) while failing to use the medium to explore the historical issues or, as happened too often, failed to grasp both the medium AND the necessary historical detail.

So the reason I was doing fewer of these tasks, even though I had not consciously articulated it in my mind, was that they bought motivation but at a price in terms of time and distraction from the historical learning intention. If my purpose as a history teacher was to build historical knowledge and understanding, such tasks tended only to showcase that grasp when it was already present. Whether I chose a radio interview or a diamond nine, a debate or even an essay as my consolidation task I was still no further towards my goal of building a really strong grasp of the history necessary to perform well in that task.

I realised that the final task itself is the easy bit. It is the teaching that goes before that makes it all possible (or means it will flop).

No matter how motivational the activity, the challenge remains. How do I help my students gain a broad and deep understanding of the period we are covering? How do I ensure my students remember what they have learnt? When I teach Weimar Germany at GCSE my biggest effort is not put into devising a game of Weimar Monopoly, because that is unlikely to help me do the really difficult bit of teaching this topic: the careful sequencing of ideas and concepts that I have identified through my planning as crucial for understanding. I need to identify which concepts to explain and how to build on what the class already know. I must choose and find ways to emphasise specific content and causal connections. To really understand Germany at this time the story starts with the Kaiser and prewar Germany, and with this come dictatorship, revolution, democracy and communism. Then onto proportional representation, left and right wing and constitutions as we learn about the problems of the fledgling republic. We discuss how a state has power and why our country doesn't have the same problems. I am the bricklayer carefully placing every new slab of understanding with deliberate intent and care. I also need to find time to read more myself so that my teaching is rich and insightful and includes fascinating details, so my class are motivated by what is intrinsically interesting about the period, not just, as with my daughter, the fact that they played a game in class.

As professionals we have to make teaching decisions every day, balancing motivation against efficacy, and we all sometimes make pragmatic decisions to include activities even when they are not the most efficient method of grasping the necessary details. We know, however, that our goal as teachers isn't just to show our classes a good time. By comparison with the challenge of building genuine understanding of the historical period, choosing the specific activity is the easy bit.

One approach to regular, low stakes and short factual tests.

I find the way in which the Quizlet app has taken off fascinating. Millions (or billions?) have been pumped into ed tech, but Quizlet did not take off because education technology companies marketed it to schools. Pupils and teachers had to 'discover' Quizlet. They appreciated its usefulness for that most basic purpose of education – learning. The growth of Quizlet was 'bottom up', while schools continue to have technological solutions looking for problems thrust upon them from above. What an indictment of the ed tech industry.

There has been a recent growth of interest in methods of ensuring students learn, long term, the content they have been taught. This is partly due to the influence of research in cognitive psychology, but also due to some influential education bloggers such as Joe Kirby and the changing educational climate caused by a shift away from modular examinations. Wouldn't it be wonderful if innovation in technology focused on finding simple solutions to actual problems (like Quizlet) instead of chasing Sugata Mitra's unicorn of revolutionising learning?

In the meantime we must look for useful ways to ensure students learn key information without the help of the ed tech industry. I was very impressed by the ideas Steve Mastin shared at the Historical Association conference yesterday but I realised I had never blogged about my own approach and its pros and cons compared with others I have come across.

I developed a system of regular testing for our history and politics department about four years ago. I didn’t know about the research from cognitive psychology back then and instead used what I had learnt from using Direct Instruction programmes with my primary aged children.

Key features of this approach to regular factual testing at GCSE and A level:

  • Approximately once a fortnight a class is given a learning homework, probably at the end of a topic or sub-topic.
  • All children are given a guidance sheet that lists exactly what areas will come up in the test and need to be learnt. Often textbook page references are provided so key material can be easily located.

[Image: an example test guidance sheet]

  • The items chosen for the test reflect the test writer's judgement of what constitute the key facts that could provide a minimum framework of knowledge for that topic (n.b. the students are familiar with the format and know how much material will be sufficient for an 'explain' question). The way knowledge has been presented in notes or the textbook can make it easier or more difficult for the students to find relevant material to learn. In the example above the textbook very conveniently summarises all they need to know.
  • The test normally takes about 10-15 minutes of a lesson. The test is always out of 20 and the pass mark is high, always 14/20. Any students who fail the test have to resit it in their own time. We give rewards for full marks in the test. The test writer must try and ensure that the test represents a reasonable amount to ask all students to learn for homework or the system won’t work.
  • There is no time limit for the test. I just take them in when all are finished.

I haven't developed 'knowledge organisers', even though I can see their advantages, because I don't want to limit test items to the amount of material that can be fitted onto one sheet of paper. Additionally, I've always felt a bit nervous about sending the message that there is something comprehensive about the material selected for testing. I've found my approach has some advantages and disadvantages.

Advantages of this approach to testing:

  • It is regular enough that tests never have to cover too much material and become daunting.
  • I can set a test that I can reasonably expect all students in the class to pass if they do their homework.
  • The regularity allows a familiar routine to develop. The students adjust to the routine quickly and they quite like it.
  • The guidance sheet works better than simply telling students which facts to learn. This is because they must go back to their notes or textbook and find the information which provides a form of review and requires some active thought about the topic.
  • The guidance sheet works when it is clear enough to ensure all students can find the information but some thought is still necessary to locate the key points.
  • Test questions often ask students to use information in the way they will need to use it in extended writing. For example I won't just ask questions like "When did Hitler come to power?" I will also ask questions like "Give two reasons why Hitler ordered the Night of the Long Knives".
  • Always making the test out of 20 allows students to try and beat their last total. The predictability of the pass mark also leads to acceptance of it.
  • Initially we get lots of retakers but the numbers very quickly dwindle as the students realise the inevitability of the consequence of the failure to do their homework.
  • The insistence on retaking any failed tests means all students really do end up having to learn a framework of key knowledge.
  • I’ve found that ensuring all students learn a minimum framework of knowledge before moving on has made it easier to teach each subsequent topic. There is a lovely sense of steadily accumulating knowledge and understanding. I also seem to be getting through the course material faster despite the time taken for testing.

Disadvantages of my approach to testing:

  • It can only work in a school with a culture of setting regular homework that is generally completed.
  • Teachers have to mark the tests because the responses are not simple factual answers. I think this is a price worth paying for a wider range of useful test items but I can see that this becomes more challenging depending on workload.
  • There is no neat and simple knowledge organiser listing key facts.
  • We’re fallible. Sometimes guidance isn’t as clear as intended and you need to ensure test materials really are refined for next year and problems that arise are not just forgotten.
  • If you’re not strict about your marking your class will gradually learn less and less for each point on the guidance sheet.
  • This system does not have a built in mechanism for reviewing old test material in a systematic way.

We have not really found that lower ability students (within an ability range of A*-D) struggle. I know that other schools using similar testing with wider ability ranges have not encountered significant problems either. Sometimes students tell us that they find it hard to learn the material. A few do struggle to develop the self discipline necessary to settle down to some learning, but we haven't had a student who is incapable when they devote a reasonable amount of time. Given that those complaining are usually just making an excuse for failure to do their homework, I generally respond that if they can't learn the material for one tiny test how on earth are they proposing to learn a whole GCSE? I check that anyone who fails a test is revising efficiently, but after a few retakes it transpires that they don't, after all, have significant difficulties learning the material. Many students who are weak on paper like the tests.

We also set regular tests of chronology. At least once a week my class will put events printed onto cards into chronological order and every now and then I give them a test like the one below after a homework or two to learn the events. I don’t have to mark these myself – which is rather an advantage!

[Image: an example chronology test]

 

I very much liked Steve Mastin's approach of giving periodic multiple choice tests which review old material. Good multiple choice questions can be really useful but are very hard to write. Which brings me back to my first point. Come on, education technology industry! How about dropping the development of impractical, rather time consuming and gimmicky apps? We need those with funding and expertise to work in conjunction with curriculum subject experts to develop genuinely useful and subject specific forms of assessment. It must be possible to develop products that can really help us assess and track success in learning the key information children need to know in each subject.

We’ve Taken Out The Glue.

A post on teaching cause and effect in history

I believed at least some of my history GCSE students when they assured me they really did revise for their mock exams. However, the 'splurge' some deposited on the page didn't look much like the careful explanations of cause and effect they had been taught to write. They also totally bombed the simple chronology question, for which they just have to put five events in order.

But why wasn't their revision paying dividends? I had already introduced regular factual tests and was happy that my classes were remembering more of what they learnt; I could see the benefit of this in their ongoing assimilation of the events and better informed written work. Therefore last year I tried to solve the problems presented by the basic chronology question. I asked my class to learn the key events for their topic, in order, for a homework. Then, to stop them forgetting, I asked them to practise putting the events (written on cards) in order as a starter activity once or twice a week, and continued every now and then even when we had moved on to new topics. Most of my current year 11 class, reaching the end of our study of China 1911-1989, can put about 30 event cards into a pretty accurate order. Their grasp of chronology clearly showed through in the mock exam results.

However, by this time I had realised that the failure with the chronology question was actually just a symptom of a deeper problem. This realisation dawned when I tried to get my classes to see that they could work out the order of events by thinking about the logic of the story.

“Look, the Kapp Putsch must come after the Treaty of Versailles because it was a reason right-wingers staged the coup in the first place”.

Each time I'd say something like this I got that feeling my class heard the words but not my meaning. This was perplexing, as I knew that I had never learnt the chronology of the events using cards; it was the logic of the story that allowed me to get the events in order. So why didn't that work for my students?

I gave my class a flow diagram of events. Their task was to explain the link between each event (the logic of the story). My goodness they hated this task (I wrote about it here) and it became quite apparent that (despite my best teaching efforts) my students had learnt the events as isolated incidents. This explained the problems some students had with the mock exam. Telling them they needed to 'learn the technique' to do better next time rather misses the point. Many wrote about the events (not the causes or effects of the events) because they hadn't revised the causes or effects and couldn't work them out. This seems like an interesting example of the way the knowledge of novice learners is 'inflexible', as explained by the cognitive psychologist Daniel Willingham:

“When new material is first learned, the mind is biased to remember things in concrete forms that are difficult to apply to new situations. This bias seems best overcome by the accumulation of a greater store of related knowledge, facts, and examples.”

That I could appreciate the 'logic of the story' shows I have accumulated a greater store of related knowledge, facts and examples than my students. It is fascinating that a simple chronology question can so effectively expose a more fundamental issue with the grasp of a complex web of events. Ironically, because we carefully teach and students dutifully learn the causes of the events that are most likely to appear in the exam, you can't necessarily tell how well they understand the flow of events using the exam's 'cause' questions. One solution often used is to identify the biggest events and take the time necessary, perhaps using card sorts, diamond nines (or whatever else occurs to you), to teach for a more complex understanding of their causes. However, for our IGCSE paper you can be asked the causes or effects of many events and each one can't get 'the full treatment'. Also, we presume that by teaching the 'causes' of key events the child must automatically be making connections back to the relevant previous events. In fact, as Willingham predicts, my students could spend a whole lesson learning the causes of the Kapp Putsch (including the Treaty of Versailles) and then fail to mention the Putsch when subsequently asked to list effects of the Versailles Treaty.

I had a revelatory moment recently. I had really pushed my year 10 class (studying Weimar Germany to 1923) to explain back to me how each key event so far could link to one previously. On the spur of the moment I decided to set a homework in which the class just ‘told the story’ of 1918-1923. They had to use all the events listed and each event had to be linked to at least one previous event. I was chuffed with the product of their (and my) efforts. My previous initiatives had ensured my classes tended to remember previous events but across the ability range there was now something more. My focus on ‘the logic of the story’ had led to better grasp of the causal web of these particular events.

This made me realise something more fundamental that seems problematic with the way we teach history. Because GCSE wants our students 'to analyse' events, we teach them pre-packaged analysis. I now wonder how I could ever have thought that learning causes or effects was intellectually superior to learning to describe the events themselves. Further, I now wonder if by de-emphasising the 'story' of history in favour of teaching analysis (cause and effect etc) we have taken out 'the glue' that holds events together and actively hindered children's ability to move beyond their tendency to remember the events in isolation.

Do we get things back to front? We tell our classes that real history involves giving reasons for our arguments when in fact our arguments emerge from our complex grasp of ‘the story’. We tell our classes that a ‘skill’ of history is to come up with ‘links’ between the events when it is because we know ‘the story’ so well that we can see the links.

History shouldn’t be ‘one damn thing after another’ and I think telling the story is the way to avoid that.

 

Does our teaching look a bit like this?

http://wp.me/a4lRxH-ds

 

We need to talk about ‘transfer’

I am a history teacher but am I fulfilling my role as a teacher if children walk out of my lessons simply knowing more history than when they walked in? Should my goals be broader? The influential educationalist Guy Claxton denies that my goals should centre on teaching history at all:

‘Knowing the Kings and Queens of England…are not top priority life skills. Their claim for inclusion in the core curriculum rests on the extent to which they provide engaging and effective ‘exercise machines’ for stretching and strengthening generalizable habits of mind’

In education today it is rare that the content taught is justified as worth knowing just for its own sake (although I have tried). As Claxton illustrates, it is so often a ‘means to an end’.

  • Learn history to develop analytical skills
  • Learn maths to improve problem solving skills
  • Play sports to learn to work as a team
  • Do brain training programmes to improve your memory
  • Use playdough in Reception to improve writing muscles
  • Set story writing to make children more creative
  • Learn chess to improve critical thinking skills

In each case we are making an enormous leap. How do we KNOW that the skill or trait acquired in one area will ‘transfer’ to other areas; that it will generalise? I might encourage my daughters to show ‘love’ towards each other but it would be farcical to presume this would help them ‘love’ studying geography at school.

My gut feeling is that playing sport is a ‘good thing’. However, I’m often astonished to hear of the skills displayed by a child on the sports pitch that I see no sign of in the classroom. Ability to work as part of a team learnt in sport does not seem to mean children will play their part in class group work. Maybe some skills and traits just need time to sink in. When my son was three his nursery started teaching the class half an hour of yoga a week because apparently yoga improves concentration. No one seemed to question the likelihood of such a brief exposure being efficacious, let alone whether transfer to other contexts was ever likely.

Once we accept the very obvious point that there are limits on how far a skill or trait we teach will actually ‘transfer’ between contexts we must concede that we can’t just presume such transfer will happen.

Even when the two contexts are close, such as applying your knowledge of essay writing in one subject to another, we still see transfer problems. I asked my year 13 class the other day whether what they had learnt about essay writing in English helped them write history essays. No, they replied, the two essay types are just SO different. I was at the time attempting to show them that the structure of their two history coursework essays was basically the same. They struggled to see even that similarity, which was so glaringly obvious to me.

It is clearly incorrect to state that skills or traits don’t ever transfer to different contexts. However, they don’t necessarily transfer as READILY as we like to presume and it depends on:

  • Whether the skill/trait means the same thing in different contexts. I might use the word ‘analysis’ to describe what I do in essay writing and chess but maybe the similarity is only word deep.
  • How CLOSE or similar the two contexts are. For example I presume an accomplished horse rider might use their skills to learn bareback riding quite quickly, and to ride a camel quicker than the average, but might not be much quicker to learn to ride a surfboard!

There is excellent and enormously extensive research on transfer. Take critical thinking: we know that beyond similar or analogous circumstances reasoning principles are not transferred. We also know that you need expertise to recognise the similar features of superficially different problems, which explains the inability of my class to recognise the similarity of essay structures. There is a superb summary of the research here that is very well worth a read.

Despite there being such useful research, and the obviousness of the fact that transfer can't be presumed, when do you EVER hear a discussion of the likelihood of transferability when debating the worth of an educational initiative? We must stop presuming that just because we teach something in one context our pupils will apply it in another. If we don't want simply to waste valuable teaching time we must start talking about transfer. We must question whether it is likely. We must discuss what we can do to make it more possible.

We really, really need to start talking about transfer.

Reading failure? What reading failure?

“Yes, A level history is all about READING!”

I say it brightly as I dole out extracts from a towering pile of photocopying taken from different texts that will help the class get going with their coursework. I try and ooze reassurance. I cheerily talk about the sense of achievement my students will feel when they have worked their way through these carefully selected texts, chosen to transfer the maximum knowledge in the minimum reading time. I explain this sort of reading is what university study will be all about, while dropping in comforting anecdotes to illustrate it is much more manageable than they think. I make this effort because I NEED them to read lots. The quality of their historical thinking and thus their coursework is utterly dependent upon it.

Who am I kidding? This wad of material is the north face of the Eiger to most of my students. Some have just never read much and haven't built up the stamina. The vocabulary in those texts (chosen by their teacher for their readability) is challenging and the process will be effortful. For a significant minority in EVERY class the challenge is greater. They don't read well. Unfamiliar words can't be guessed and their ability to decode is weak. To read even one of my short texts will take an inordinate time. Such students are bright enough; most students in my class will get an A after all, with some Bs and the odd C. They all read well enough to get through GCSE with good results and not one of them would have been counted in government measures for weak literacy. According to the statistics, the biggest problem I face day in, day out as I teach A level history simply doesn't exist. Believe me, it exists, and there is a real human cost to this hidden reading failure.

Take Hannah. She loves history, watches documentaries and beams with pleasure as we discuss Elizabeth I. She even reads historical novels. However, she really struggles to read at any pace and unfamiliar words are a brick wall. She briefly considered studying history at university but the reading demands make it impracticable. Her favourite subject can never be her degree choice because her reading is just not good enough. She is not unusual; her story is everywhere.

At this point I am going to hand over my explanation to Kerry Hempenstall, senior lecturer in psychology at RMIT. I include just a few edited highlights from his survey of the VAST research literature on older students’ literacy problems that you can consider for yourself by following the link. He says:

These struggling adolescent readers generally belong to one of two categories, those provided with little or poor early reading instruction or those possibly provided with good early reading instruction, yet for unknown reasons were unable to acquire reading skills (Roberts, Torgesen, Boardman, & Sammacca, 2008)…

Hempenstall outlines the problems with the ways reading is currently taught:

…Under the meaning centred approach to reading development, there is no systematic attention to ensuring children develop the alphabetic principle. Decoding is viewed as only one of several means of ascertaining the identity of a word – and it is denigrated as being the least effective identification method (behind contextual cues). In the early school years, books usually employ highly predictable language and usually offer pictures to aid word identification. This combination can provide an appearance of early literacy progress. The hope in this approach is that this form of multi-cue reading will beget skilled reading.

However, the problem of decoding unfamiliar words is merely postponed by such attractive crutches. It is anticipated in the meaning centred approach that a self-directed attention to word similarities will provide a generative strategy for these students. However, such expectations are all too frequently dashed – for many at-risk children progress comes to an abrupt halt around Year 3 or 4 when an overwhelming number of unfamiliar (in written form) words are rapidly introduced…

  a) New content-area vocabulary words do not pre-exist in their listening vocabularies. They can guess 'wagon'. But they can't guess 'circumnavigation' or 'chlorophyll' based on context (semantics, syntax, or schema); these words are not in their listening vocabularies.
  b) When all of the words readers never learned to decode in grades one to four are added to all the textbook vocabulary words that don't pre-exist in readers' listening vocabularies, the percentage of unknown words teeters over the brink; the text now contains so many unknown words that there's no way to get the sense of the sentence.
  c) Text becomes more syntactically embedded, and comprehension disintegrates. Simple English sentences can be stuffed full of prepositional phrases, dependent clauses, and compoundings. Eventually, there's so much language woven into a sentence that readers lose meaning. When syntactically embedded sentences crop up in science and social studies texts, many can't comprehend." (Greene, J.F. 1998)

…In a study of 3000 Australian students, 30% of 9 year olds still hadn’t mastered letter sounds, arguably the most basic phonic skill. A similar proportion of children entering high school continue to display confusion between names and sounds. Over 72% of children entering high school were unable to read phonetically regular 3 and 4 syllabic words. Contrast with official figures: In 2001 the Australian public was assured that ‘only’ about 19% of grade 3 (age 9) children failed to meet the national standards. (Harrison, B. 2002) [Follow the link if you want to read all the research listed.]

Hempenstall outlines the research showing that the effects of weak reading become magnified with time:

“Stanovich (1986) uses the label Matthew Effects (after the Gospel according to St. Matthew) to describe how, in reading, the rich get richer and the poor get poorer. Children with a good understanding of how words are composed of sounds (phonemic awareness) are well placed to make sense of our alphabetic system. Their rapid development of spelling-to-sound correspondences allows the development of independent reading, high levels of practice, and the subsequent fluency which is critical for comprehension and enjoyment of reading. There is evidence (Stanovich, 1988) that vocabulary development from about Year 3 is largely a function of volume of reading. Nagy and Anderson (1984) estimate that, in school, struggling readers may read around 100,000 words per year while for keen mid-primary students the figure may be closer to 10,000,000, that is, a 100 fold difference. For out of school reading, Fielding, Wilson and Anderson (1986) suggested a similar ratio in indicating that children at the 10th percentile of reading ability in their Year 5 sample read about 50,000 words per year out of school, while those at the 90th percentile read about 4,500,000 words per year”…

Hempenstall explains just why it is crucial to spot problems with phonics in year 1:

The probability that a child who was initially a poor reader in first grade would be classified as a poor reader in the fourth grade was a depressingly high +0.88 (Juel, C., 1988).

If children have not grasped the basics of reading and writing, listening and speaking by Year Three, they will probably be disadvantaged for the rest of their lives. Australian Government House of Representatives Enquiry (1993). The Literacy Challenge. Canberra: Australian Printing Office.

“Unless these children receive the appropriate instruction, over 70 percent of the children entering first grade who are at risk for reading failure will continue to have reading problems into adulthood”. Lyon, G.R. (2001).

[The research literature for this finding is enormous – do follow link if interested]

A study by Schiffman provides support for monitoring programs for reading disabilities in the first and second grades. In a large scale study of reading disabilities (n = 10,000),

  • 82% of those diagnosed in Grades 1 or 2 were brought up to grade level.
  • 46% of those diagnosed in Grade 3 were brought up to grade level.
  • 42% of those diagnosed in Grade 4 were brought up to grade level.
  • 10-15% of those diagnosed in Grades 5-7 were brought up to grade level.

Berninger, V.W, Thalberg, S.P., DeBruyn, I., & Smith, R. (1987). Preventing reading disabilities by assessing and remediating phonemic skills. School Psychology Review, 16, 554-565.

Hempenstall lists research on what it is that causes such problems for struggling readers:

“The vast majority of school-age struggling readers experience word-level reading difficulties (Fletcher et al., 2002; Torgesen, 2002). This “bottleneck” at the word level is thought to be particularly disruptive because it not only impacts word identification but also other aspects of reading, including fluency and comprehension (LaBerge & Samuels, 1974). According to Torgesen (2002), one of the most important discoveries about reading difficulties over the past 20 years is the relationship found between phonological processing and word-level reading. Most students with reading problems, both those who are diagnosed with dyslexia and those who are characterized as “garden variety” poor readers, have phonological processing difficulties that underlie their word reading problems (Stanovich, 1988)” (p.179). [Do follow link for more]

To debate just how many children are functionally illiterate and condemn Nicky Morgan for apparent exaggeration entirely misses the point. Reading failure is endemic. I would estimate that about a third of my A level students have noticeable issues with word level reading that significantly impact upon their progress in history at A level. Reading failure is one of the biggest obstacles I face in my teaching and I have every reason to comment on the issue. I don't even deal with all those students who chose not even to attempt A level history because they knew it meant lots of reading. At secondary school we should be giving students more complex texts to build their vocabularies and reading stamina. However, the research is pretty clear about when difficulties need to be identified if children are to overcome them – way back in year 1. The research is also pretty clear about what it is that struggling readers lack – a grasp of the alphabetic principle that they are able to apply fluently when reading. Given this, the opposition to the year 1 phonics check is hard to justify. We know so much now about effective reading instruction but it can only be used to help children if teachers are willing to adjust their practices. While around 90% of primary schools continue to focus on 'mixed methods' (guessing from cues rather than sounding out) that limit children's chances of acquiring the alphabetic principle essential for successful reading, nothing will change.

Is reliability becoming the enemy of validity?

What would happen if I, as a history graduate, set out to write a mark scheme for a physics GCSE question? I dropped physics after year 9 but I think it is possible I could devise some instructions to markers that would ensure they all came up with the same marks for a given answer. In other words my mark scheme could deliver a RELIABLE outcome. However, what would my enormously experienced physics teacher husband think of my mark scheme? I think he'd die either of apoplexy or from laughing so hard:

“Heather, why on earth should they automatically get 2 marks just because they mentioned the sun? You’ve allowed full marks for students using the word gravity…”

After all I haven’t a notion how to effectively discriminate between different levels of understanding of physics concepts. My mark scheme might be reliable but it would not deliver a valid judgement of the students’ understanding of physics.

A few weeks ago at ResearchEd the fantastically informed Amanda Spielman gave a talk on research Ofqual has done into the reliability of exam marking. Their research and that of Cambridge Assessment suggest marking is more RELIABLE than has been assumed by teachers. This might surprise teachers familiar with this every summer:

It is late August. A level results are out and the endless email discussions begin:

Hi Heather, Jake has emailed me. He doesn’t understand how he got an E on Politics A2 Unit 4 when he revised just as much as for Unit 3 (in which he got a B). I advised him to get a remark. Best, Alan

Dear Mrs F, My remark has come back unchanged and it means I’ve missed my Uni offer. I just don’t understand how I could have got an E. I worked so hard and had been getting As in my essays by the end. Would you look at my exam paper if I order it and see what you think? Thanks, Jake

Hi Alan, I've looked at Jake's paper. I thought he must have fluffed an answer but all five answers have been given D/E level marks. I just don't get it. He's written what we taught him. Maybe the answers aren't quite B standard – but E? Last year this unit was our best paper and this year it is a car crash. I'll ask Clarissa if we can order her paper as she got full uniform marks. It might give some insight. Heather

Alan, I've looked at Clarissa's paper. See what you think. It is a great answer. She learns like a machine and has reproduced the past mark scheme. Jake has made lots of valid points but not necessarily the ones in the mark scheme. Arguably they are key, but then again, you could just as easily argue other points are as important. And how can such decent answers end up as E grade even if they don't hit all the mark scheme bullet points precisely? I just despair. How can we continue to deliver this course with conviction when we have no idea what will happen in the exam each year? Heather

I don’t like to blow my own trumpet but the surprisingly low marks on our A2 students’ politics papers were an aberration from what was a fantastic results day for our department this year:

Hi Heather, OMG those AS history results are amazing!!!! Patrick an A!!!! I worried Susie would get a C and she scored 93/100, where did that come from? Trish

I don’t tend to quibble when the results day lottery goes our way but I can admit that it is part of the same problem. Marking of subjects such as history and politics will always be less reliable than in maths and we must remember it is the overall A level score (not the swings between individual module results) that needs to be reliable. But… even so… there seems to be enormous volatility in our exam system. The following are seen in my department every year:

  1. Papers where the results have a very surprising (wrong) rank order. Weak students score high As while numerous students who have only ever written informed, insightful and intelligent prose have D grades.
  2. Students with massive swings in grades between papers (e.g. B on one and E on the other) despite both papers being taught by the same teacher and with the same general demands.
  3. Exam scripts where it is unclear to the teacher why a remark didn’t lead to a significant change in the result for a candidate.
  4. Quite noticeable differences in the degree of volatility in results from year to year, depending on paper, subject (history or politics in my case) and even exam board.

Cambridge Assessment have been looking into this volatility and suggested that different markers ARE coming up with similar enough marks for the same scripts – marking is reliable enough. The report writers then assume that all other variation must be at school/student level. There is no doubt that a multitude of school and student level factors might explain volatility in results, such as different teachers covering a course, variations in teaching focus or simply a student having a bad day. But why was no thought given to whether a lack of validity explains volatility in exam results?

For example, I have noticed a trend in our own results at GCSE and A level. The papers with quite flexible mark schemes, relying more on marker expertise, deliver more consistent outcomes closer to our own expectations of the students. It looks like attempts to make our politics A level papers more reliable have simply narrowed the range of possible responses that are rewarded, limiting the ability of the assessment to discriminate effectively between student responses. Organisations such as HMC know there is a problem but perhaps overemphasise the impact of inexperienced markers.

The mounting pressure on exam boards from schools has driven them to make their marking ever more reliable, but this actually increases unexpected grade variation and produces greater injustices as the assessment becomes worse at discriminating between candidates. The process is exacerbated by the loss of face to face standardisation meetings (and, in subjects such as politics, by markers unused to teaching the material), leaving markers ever more dependent on the mark scheme in front of them to guide their decision making. If students regularly see three grades of difference between modules, perhaps the exam boards should stop blathering on about the reliability of their systems and start thinking about the validity of their assessment.

The drive for reliability can too often be directly at the expense of validity.

It is a dangerously faulty assumption that if marking is reliable then valid inferences can be drawn from the results. We know that for some time the education establishment has been rather blasé about the validity of its assessments.

  • Apparently our country’s school children have been marching fairly uniformly up through National Curriculum levels, even though we know learning is not actually linear or uniform. Whatever the levels were presumed to measure, they were not giving a valid snapshot of progress.
  • I’ve written about how history GCSE mark schemes assume a spurious progression in (non-existent) generic analytical skills.
  • Too often levels of response mark schemes are devised by individuals with little consideration of validity.
  • Dylan Wiliam points out that reliable assessment of problem solving often requires extensive rubrics which must define a ‘correct’ method of solving the problem.
  • EYFS assesses progress in characteristics such as resilience, when we don’t even know if resilience can be taught, and in critical thinking and creativity, when these are not constructs that can be generically assessed.

My experience at A level is just one indication of this bigger problem of inattention to validity of assessments in our education system.

Alphabet Soup – a post about A level marking.

Cambridge Assessment tell us that ‘more than half of teachers’ grade predictions are wrong’ at A level. There is an implication behind this headline that many teachers don’t know their students and perhaps have to be ‘put right’ by the cold hard reality of exam results.

What Cambridge Assessment neglect to say is that greater accuracy in predicting A level module results would actually require prophetic powers. No teacher, however good, can predict module results with any reliability because many results are VERY unpredictable. I teach history and politics, humanities subjects, and so am likely to have less consistent results than subjects like the sciences. Our history results have been relatively predictable of late but our A level politics results are another matter.

To illustrate we will now play a game. Your job is to look at a selection of real data from this year and predict each student’s results using their performance in previous modules as a guide. You might think that your lack of knowledge of each student’s ability will be a hindrance to making accurate predictions. Hmm, well, we’ll see! (If you are a party pooper and don’t want to play you can scroll forward to the last table and see all the answers…)

It may help you to know that in politics the two AS and the two A2 modules follow similar formats and are of similar difficulty (perhaps Unit 2 is a little harder than Unit 1), so a teacher will probably predict the same grade for each module: there is no reason a teacher can anticipate for grades to vary markedly between modules. The A2 modules are a bit harder than the AS modules, and so a teacher will bear this in mind when predicting A2 results with the AS grades in front of them. (That said, students mature and frequently do better at A2 than AS, so this isn’t an entirely safe trend to anticipate.)

So, using the Unit 1 grades in the first table below, what might you predict for these students’ Unit 2 module? (Remember the teacher will necessarily have given the exam board the same predicted grade for Units 1 and 2.)

 

Candidate | Unit 1 [AS] | Unit 2 [AS] | Unit 3 [A2] | Unit 4 [A2]
1         | B           |             |             |
2         | B           |             |             |
3         | B           |             |             |
4         | A           |             |             |
5         | B           |             |             |
6         | A           |             |             |
7         | A           |             |             |
8         | B           |             |             |
9         | A           |             |             |
10        | A           |             |             |
11        | A           |             |             |

(The overall results will soon be in the public domain and I have anonymised the students by not including the whole cohort (which was 19) and by not listing numerical results (which, interestingly, would actually make prediction even harder if included). I have not listed retake results as these would not have been available when predictions were made. We use Edexcel.)

Check the table below to see if you were right.

Candidate | Unit 1 [AS] | Unit 2 [AS] | Unit 3 [A2] | Unit 4 [A2]
1         | B           | B           |             |
2         | B           | C           |             |
3         | B           | E           |             |
4         | A           | C           |             |
5         | B           | E           |             |
6         | A           | E           |             |
7         | A           | D           |             |
8         | B           | B           |             |
9         | A           | B           |             |
10        | A           | A           |             |
11        | A           | A           |             |

 

Were you close? Did you get as much as 50%? If you simply predicted the same grade for Unit 2 as scored in Unit 1 (as teachers generally would) you could only have got 7/19 of the full cohort correct.
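For the curious, here is a minimal Python sketch of that naive ‘same grade again’ strategy, using only the eleven candidates shown above rather than the full cohort:

```python
# Naive strategy: predict that each student repeats their Unit 1 grade in Unit 2.
# Data: (Unit 1 grade, actual Unit 2 grade) for the eleven candidates shown above.
results = [
    ("B", "B"), ("B", "C"), ("B", "E"), ("A", "C"), ("B", "E"), ("A", "E"),
    ("A", "D"), ("B", "B"), ("A", "B"), ("A", "A"), ("A", "A"),
]

correct = sum(1 for unit1, unit2 in results if unit1 == unit2)
print(f"{correct}/{len(results)} correct ({correct / len(results):.0%})")
# -> 4/11 correct (36%), much the same hit rate as the 7/19 for the full cohort
```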

Now don’t look at the table below until you have tried to predict the first A2 grade! Go on! Just jot down your predictions on the back of an envelope. (Isn’t this fun?)

Candidate | Unit 1 [AS] | Unit 2 [AS] | Unit 3 [A2] | Unit 4 [A2]
1         | B           | B           | D           |
2         | B           | C           | E           |
3         | B           | E           | B           |
4         | A           | C           | E           |
5         | B           | E           | D           |
6         | A           | E           | D           |
7         | A           | D           | E           |
8         | B           | B           | C           |
9         | A           | B           | D           |
10        | A           | A           | A           |
11        | A           | A           | C           |

Rather an unpredictably steep drop there! It was all the more puzzling for us given that our A2 history results (same teachers and many of the same students) were great.

If you’ve got this far why not predict the final module?

 

Candidate | Unit 1 [AS] | Unit 2 [AS] | Unit 3 [A2] | Unit 4 [A2]
1         | B           | B           | D           | C
2         | B           | C           | E           | B
3         | B           | E           | B           | E
4         | A           | C           | E           | D
5         | B           | E           | D           | B
6         | A           | E           | D           | D
7         | A           | D           | E           | D
8         | B           | B           | C           | B
9         | A           | B           | D           | D
10        | A           | A           | A           | A
11        | A           | A           | C           | C

 

We have no idea why there is a steep drop at A2 this year. Perhaps our department didn’t quite ‘get’ how to prepare our students for these particular papers. I doubt it – but if so, what does this say about our exams if seasoned and successful teachers can fail to see how to prepare students for a particular exam despite much anxious poring over past papers and mark schemes? Our politics AS results were superb this year for both modules. Our department’s history results were fantastic at AS and A2. These results were entirely UNPREDICTABLE. Or to put it another way, they were only predictable in the sense that we anticipated in advance that they would look like alphabet soup – because they often do.

The first point that should be clear is that no teacher could possibly predict these module results. To even make the statement ‘more than half of teachers’ grade predictions are wrong’ is to wilfully mislead the reader as to what the real issue is here.

In fairness, it might feel like the grades have been created by a random letter generator but the results aren’t quite random. Some very able students, such as candidate 10, get As on all papers and would only ever have had A predictions. The final grades of our students were generally within one grade of expectations so the average of the four results has some validity. This said, surely there is a more worrying headline that the TES should have run with?

Just how good are these exams at reliably discriminating between the quality of different candidates? It is argued that marking is reliable, but what does that tell us about the discriminatory strength of the exams themselves or their mark schemes? It can’t have helped that there were only 23 marks (out of 90) between an A and an E on Unit 3, or 24 marks between those grades on Unit 4. I have discussed other reasons I think may be causing the unpredictability here and here.
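A quick back-of-the-envelope calculation, using only the boundary gaps quoted above and assuming four grade steps between A and E, shows how little room that leaves:

```python
# Four grade boundaries sit between an A and an E: A/B, B/C, C/D and D/E.
# With the A-E gaps quoted above, each grade occupies only a handful of marks.
for unit, a_to_e_gap in [("Unit 3", 23), ("Unit 4", 24)]:
    per_grade = a_to_e_gap / 4
    print(f"{unit}: about {per_grade:.1f} marks per grade on a 90-mark paper")
# -> roughly 6 marks per grade, so a marker disagreement of half a dozen marks
#    is enough to move a candidate a whole grade.
```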

Not all exams are as unpredictable as our Government and Politics A level, but if we want good exams that reliably and fairly discriminate between students we need to feel confident that we know why some exams currently have such unpredictable outcomes. Ofqual has been moving in that direction with its interesting studies of maths GCSE and MFL at A level. As well as being unfair in its implication, the Cambridge Assessment headline is simply unhelpful as it obscures the real problems.