Mastery does NOT mean full understanding

“For now we see in a mirror, darkly; but then face to face: now I know in part; but then shall I know fully even as also I was fully known” 1 Corinthians 13 v12.

‘Mastery’ means ‘full understanding’ according to many teachers on Twitter. So ‘mastery teaching’ means not moving on with your course until something is fully understood. I don’t think teachers really believe this, because there is one insurmountable problem with this definition of mastery – there is no such thing as full understanding. For example:

  • When a KS1 child is first taught ‘place value’ is it conceivable that they can fully understand the notion, with all its implications? Surely many GCSE students could do with understanding place value better than they do?
  • My year 10 history class use the word ‘dictatorship’ with some confidence in their writing, suggesting they understand it, but sometimes they use the term incorrectly – so do they understand its full implications? I know they don’t, because I have a better understanding of the term than they do. Do I understand the full implications of the term ‘dictatorship’? I know I don’t, because the historian Richard Evans definitely understands it better than me.
  • My eight-year-old son has started reading Harry Potter books by himself. Does he understand them? Well I don’t suppose he realises (as J K Rowling must have appreciated) that the Hogwarts house elves illustrate the Marxist notion of false consciousness. I don’t even think he gets the same depth of meaning from these books as his thirteen-year-old sister. So when will he be able to ‘master’ Harry Potter? Should he wait to read them until he is able to gain an appreciation of Marxist theory, or just until he is mature enough to understand Harry’s teen romances?

In reality, of course, teachers, as professionals, don’t hang around waiting for FULL understanding – that would be ridiculous. They actually make sensible decisions about the ‘degree’ of understanding necessary for a child at that stage with the curricular content they are learning. The word ‘mastery’ can’t tell us a thing about what this sensible degree of understanding might be.

Unfortunately the mistaken notion of ‘full understanding’ is not harmless in practice. It can mean teachers do hang around for too long, focusing counter-productively on ever greater understanding. A maths teacher may be convinced that a KS1 child must fully understand place value once the notion has been taught at a basic level. They may introduce word problems to check for mastery or ‘full understanding’ of place value. In their pursuit of ‘full understanding’ they fail to consider:

  1. Ability to use learning in new contexts (like word problems in maths or knowledge in history sourcework or applied GCSE science questions) tends to lag behind initial learning because newly learnt knowledge is what is called ‘inflexible’. To overcome this inflexibility you need to accumulate a greater store of related knowledge, facts and examples.
  2. In the case of reading, holding children back so they can ‘fully understand’ what they read, can mean they lack exposure to the very new words and ideas that will allow greater understanding to develop.
  3. As Willingham explains, knowing more facts makes many cognitive functions such as comprehension and problem solving operate more efficiently. Therefore a focus on memory (really knowing what is taught long term) as well as initial understanding is important. This means better understanding often develops after greater FLUENCY OF KNOWLEDGE has been achieved so, for example, lots of practice gaining confidence and really knowing a mathematical method can open up the possibility of further understanding of related concepts. Knowing more about the causes of World War One will make it more possible to demonstrate understanding in an essay.

I like to think of understanding and fluency of knowledge as the partners in a traditional dance. Sometimes they work in unison:

[Image: the Netherfield ball – partners dancing in unison]

And sometimes they work apart, one going before the other, like dancers executing moves that do not involve their partner.

[Image: the Netherfield ball – partners dancing apart]

This means, dare I say it, sometimes it makes sense to teach knowledge and ensure it is remembered even though it means understanding lags behind. It is the teacher who needs to decide whether greater fluency of knowledge or greater understanding is more necessary at any given point. When making this decision perhaps we should bear in mind that in modern education the trend has been towards overemphasising initial understanding at the expense of the necessary fluency of knowledge that comes from ensuring that what is taught is remembered confidently long term.

Where does this leave the word ‘mastery’? We’ve already established that mastery is not a principle we can use to judge the degree of detail in which students must grasp curricular content. Mastery can, however, describe how well children have grasped, or can perform, whatever the teacher has decided they need to know or be able to do at that given point, whether that is fluency of knowledge or understanding. When used in this sense the term mastery is useful. The confusion occurs because teachers think about ‘mastery’ in curricular rather than pedagogical terms:

Curricular decision: What should I teach? I should teach this concept fully

Pedagogical decision: When should I move on? When they understand and have committed to long term memory what I have decided they need to know.

The latter pedagogical goal is a useful way to think about mastery. The former curricular goal is actually impossible (unless, perhaps, you are in heaven with God and the angels…)

 

It’s all just a little bit of history repeating

Below are the results of testing in basic skills, over three years, in a group of English primary schools.

“[Of 25 000 children entered for tests] the total rate of failure which two years ago was 13%, rose last year to 14.46%, but declined this year to 11.3%. Of last year’s failures 20% were in numeracy, 7.7% in writing and 6% in reading.”

We need to know more about these tests (of this more later) but the results sound quite good. They were trumpeted by the government as proving the success of government policies. There were, however, some very familiar and very serious problems that those involved in the testing identified. For example:

 “It is found possible, by ingenious preparation, to get children through the tests without really knowing how to read, write and calculate.”

“[In preparation for these tests] the teacher is led to think…not about teaching their subject but about managing to meet targets. They limit their subject as much as they can and within these limits try to cram their pupils… the ridiculous results obtained by teaching under these conditions can be imagined.”

“[A system of targets has led to] a game of mechanical contrivance in which the teachers will and must more and more learn how to beat us [the test setters].”

“The circle of the children’s reading has thus been narrowed and impoverished all the year for the sake of a result at the end of it, and the result is an illusion.”

“…the more we undertake to lay down to the very letter the requirements which shall be satisfied in order to meet targets, the more do managers and teachers [claim reaching these targets equates to successful teaching]”

Harsh words but sadly these observations on the impact of perverse incentives on actual educational standards are only too familiar. Many similar criticisms have been levelled at our modern education accountability systems.

Except that these words weren’t written recently. They were written in an inspection report on primary schooling from 1869. Yes, in 1869 Her Majesty’s Inspector Matthew Arnold put pen to paper and what I have written above (with very few minor alterations) is what he wrote. At that time it was not actually ‘targets’ as such that schools were chasing; instead individual schools were funded depending on their results on some very narrow testing in arithmetic, writing and reading, known as ‘payment by results’. Arnold set out to highlight the damage of this accountability system to the quality of the education the children received.

I’ve been dipping into Arnold’s writing on education this holiday. He is now best known as a great poet but it was his work as an HMI which earnt him his bread. While many education issues and structures he describes were quite different in the second half of the nineteenth century, what is most striking is how far the Victorians were actually grappling with the same problems, engaged in the same debates, attempting the same hotly debated solutions and driven by the same good intentions.  The prescience of some of Arnold’s commentary is at times startling.

I can’t help thinking of the damage done by the 5A*-C metric when I read:

“It is just the weakness of a system which attempts to prescribe exactly the MINIMUM which shall be done, and which makes it highly penal to fall short of the MINIMUM”

And that

“Admitting the stimulus of the test examination to be salutary, we may yet say that when it is over-employed it has two faults: it tends to make the instruction mechanical and to set a bar to duly extending it [the instruction]… performing a minimum expressly laid down beforehand – must inevitably concentrate the teacher’s attention on the means for producing this minimum…the danger is the mistake of treating these two [the minimum and the good instruction of the school] as if they were identical.”

He seems quite familiar with the resultant problems of grade inflation…

“This is a hard comparison to make with accuracy, so as to be sure that the improvement in question has actually taken place”

And the paucity of a primary education focused on narrow testing in the basics:

“…government arithmetic will soon be…remarkable chiefly for its meagreness and sterility.”

Arnold wrote a pamphlet critiquing education reforms in 1862 called ‘The Twice-Revised Code’. In this his criticisms of practices in education range more widely. I was particularly amused by his explanation of why he felt it necessary to write a pamphlet for the general reader. It seems he appreciated just how big a challenge Justine Greening will have mastering her brief:

“The system of our Education Department bristles with details so numerous, so minute, and so intricate, that any one not practically conversant with this system has great difficulty in mastering them, and, by failing to master them, may easily be led into error.”

And he was just as cross as teachers today when politicians imposed flaky policies – SATs retakes and the planned academisation of all schools spring immediately to my mind:

“Concocted in the recesses of the Privy Council Office, with no advice asked from those practically conversant in schools, no notice given to those who largely support schools, this new scheme…by which they abruptly revolutionize the system…has taken alike their friends and enemies by surprise.”

The following quote made me think of the impact this year of government policies cutting places for university based teacher training:

“But we could wish some better means had been originally devised for accomplishing this limitation, by processes which the training colleges might have accepted, and which would not have abruptly deranged all their operations; by processes which their inventors might not have been, after all, forced to abandon.”

Arnold is particularly derisive about education reform led by political economists, forced to admit they had:

“…pushed their principle too far when they proposed to examine infants under six years of age!”

He reserves particular scorn for the way HMI were forced to look narrowly at a school’s test results when they visited:

“In fact the inspector will just hastily glance around the school, and then he must to work at the ‘logbooks’…as if there might not be in a school most grave matters needing inspection and correction.”

Finally, I don’t know whether I am more amused or saddened by discovering clear explanations, written in 1862, as to why policies we still pursue today are doomed to fail. We have been pursuing criterion-based marking in our schools for decades. There are now voices such as Daisy Christodoulou’s explaining why level descriptors don’t work. Here is Arnold, in 1862, explaining the obvious problem with descriptions of quality:

“…the terms ‘fair’ and ‘good’, when applied to the reading, writing and arithmetic of our elementary schools, are not always used in precisely the same sense, and do not carry to the minds of all who hear them used, precisely the same impression”

Most tragic in my mind is the way in the modern age we continue to pursue enormously damaging approaches in our efforts to teach good reading. Such approaches were so obviously wrong to a commentator on education in 1862, but still we persist. Arnold explains that to ensure good results in reading (and thus funding) schools began to focus on teaching basic reading to the neglect of a wide-ranging knowledge of the world. He observes that:

“Commissioners themselves quote the case of a school at Greenwich, in which backward readers, kept to reading-lessons only, were found to make less progress even in reading than others equally backward whose lessons were of a more varied cast. The most experienced inspectors, too, declare that the schools in which the general instruction is best are precisely the schools in which the elementary instruction is best also.”

“[It is their progress in studying] civilisation which will bring them nearer to this power (of good reading comprehension), not the confining them to reading-lessons not the striking out of lessons on geography or history.”

The research of a whole field in cognitive psychology has been necessary to persuade many that it really is important to know lots of ‘stuff’ to have good reading comprehension, but in education we are so keen to find short cuts and justify them by claiming we know a lot better than people who lived long ago. Arnold wonders:

“If [for] good reading, cultivation in other subjects is necessary, why cut off all grants for these subjects in the hope of thereby getting better reading?”

Good question! Why do we focus so much time and priority on English teaching given the importance of learning lots of other subjects for comprehension?

Perhaps if people had studied a little more history they might be less dismissive of the idea that we can learn from the past. Perhaps the reason we often don’t learn from past wisdom is because we rather arrogantly think we must know better.

You can find Arnold’s ideas on education here:

That’s the easy bit

 

A while ago I noticed that my daughter seemed to be talking lots about the geography she had learnt. I was pleased about that but any history teacher will appreciate my chagrin that, by comparison, she barely mentioned her history. Oddly when I asked her about her history lessons she was quite enthusiastic. She was having lots of fun in class but when pushed she mentioned the games she was playing not the history she was learning.

When I trained to teach 22 years ago I thought that fun activities were the top priority and I always planned creative and somewhat quirky activities for my students. Each series of lessons would culminate in a highly motivating activity to build deeper understanding. Why set some boring questions on 19th-century factory conditions when your students can write a TV script by an investigative journalist uncovering the horrors? Why write an essay on the significance of Boulton and Watt when you can set a role play in which students take the part of each entrepreneur and debate with each other? I would roll out an ongoing feast of fun for my students. My first job was (I felt, unfortunately) in an independent school where teaching was generally quite traditional. If I am honest I felt my KS3 teaching was superior to my colleagues’ due to my clever activities and was also more motivating for the students.

Despite being convinced at that time of the superiority of my focus, over the years I have actually gradually drifted away from lesson planning that focuses on imaginative tasks. You might think that this is because I have grown lazy but I’m unconvinced. It isn’t so hard to think up an imaginative task. Having taught for donkey’s years I find that five minutes of ruminative pen nibbling is usually enough to come up with something, and there are plenty of sites out there full of ideas if not. To misquote the White Queen, I feel like I could probably think of six imaginative tasks before breakfast. So why has my focus shifted?

In part I don’t do so many creative activities because they take lots of lesson time to complete. I’ve been teaching for the last fourteen years in a 13+ school and there isn’t time to fit in as many of these activities when teaching GCSE and A level. This should give pause for thought. At GCSE I instinctively didn’t seem to think these creative activities were the best use of valuable learning time – interesting.

In fact I’d say that the more focused I became on the quality of the historical knowledge and understanding of my students, the less I used these activities. There was one clear turning point for me, which was the moment I realised that while such activities could be good consolidation tasks they were generally quite a poor means of building knowledge and understanding. I realised I had been buoyed along by the third of students in class who produced clever or amusing or insightful responses to my tasks and rather glazed over the more lacklustre responses of the majority. Those students who already had a decent grasp of the subject matter and the issues were able to demonstrate that grasp in the creative task set. It was a time-consuming form of consolidation for them but often fun – fair enough. The rest either focused on perfecting the form of the activity (e.g. if I set a TV-style investigation there was lots of doorstep interview conflict portrayed…) but failed to use the medium to explore the historical issues or, as happened too often, failed to grasp both the medium AND the necessary historical detail.

So the reason I was doing fewer of these tasks, even though I had not consciously articulated it in my mind, was because they bought motivation but at a price in terms of time and distraction from the historical learning intention. If my purpose as a history teacher was to build historical knowledge and understanding such tasks tended to only really showcase that grasp when it was already present. Whether I chose a radio interview or a diamond nine, a debate or even an essay as my consolidation task I was still no further towards my goal of building a really strong grasp of the history necessary to perform well in that task.

I realised that the final task itself is the easy bit. It is the teaching that goes before that makes it all possible (or means it will flop).

No matter how motivational the activity the challenge remains. How do I help my students gain a broad and deep understanding of the period we are covering? How do I ensure my students remember what they have learnt? When I teach Weimar Germany at GCSE my biggest effort is not put into devising a game of Weimar Monopoly because that is unlikely to help me do the really difficult bit of teaching this topic, which is the careful sequencing of ideas and concepts that I have identified through my planning as crucial for understanding. I need to identify which concepts to explain and how to build on what the class already know. I must choose and find ways to emphasise specific content and causal connections. To really understand Germany at this time the story starts with the Kaiser and prewar Germany, and with this come dictatorship, revolution, democracy and communism. Then onto proportional representation, left and right wing and constitutions as we learn about the problems of the fledgling republic. We discuss how a state has power and why our country doesn’t have the same problems. I am the bricklayer, carefully placing every new brick of understanding with deliberate intent and care. I also need to find time to read more myself so that my teaching is rich and insightful and includes fascinating details, so my class are motivated by what is intrinsically interesting about the period and not just, as with my daughter, the fact that they played a game in class.

As professionals we have to make teaching decisions every day, balancing motivation against efficacy, and we all sometimes make pragmatic decisions to include activities even when they are not the most efficient method of grasping the necessary details. We know, however, that our goal as teachers isn’t just to show our classes a good time. By comparison with the challenge of building genuine understanding of the historical period, choosing the specific activity is the easy bit.

One approach to regular, low-stakes, short factual tests

I find the way in which the Quizlet app has taken off fascinating. Millions (or billions?) have been pumped into ed tech, but Quizlet did not take off because education technology companies marketed it to schools. Pupils and teachers had to ‘discover’ Quizlet. They appreciated its usefulness for that most basic purpose of education – learning. The growth of Quizlet was ‘bottom up’ while schools continue to have technological solutions looking for problems thrust upon them from above. What an indictment of the ed tech industry.

There has been a recent growth of interest in methods of ensuring students learn long term the content they have been taught. This is in part due to the influence of research in cognitive psychology but also due to some influential education bloggers such as Joe Kirby and the changing educational climate caused by a shift away from modular examinations. Wouldn’t it be wonderful if innovation in technology focused on finding simple solutions to actual problems (like Quizlet) instead of chasing Sugata Mitra’s unicorn of revolutionising learning?

In the meantime we must look for useful ways to ensure students learn key information without the help of the ed tech industry. I was very impressed by the ideas Steve Mastin shared at the Historical Association conference yesterday but I realised I had never blogged about my own approach and its pros and cons compared with others I have come across.

I developed a system of regular testing for our history and politics department about four years ago. I didn’t know about the research from cognitive psychology back then and instead used what I had learnt from using Direct Instruction programmes with my primary aged children.

Key features of this approach to regular factual testing at GCSE and A level:

  • Approximately once a fortnight a class is given a learning homework, probably at the end of a topic or sub-topic.
  • All children are given a guidance sheet that lists exactly what areas will come up in the test and need to be learnt. Often textbook page references are provided so key material can be easily located.

[Image: example guidance sheet and test]

  • The items chosen for the test reflect the test writer’s judgement of what constitute the key facts that could provide a minimum framework of knowledge for that topic (N.B. the students are familiar with the format and know how much material will be sufficient for an ‘explain’ question). The way knowledge has been presented in notes or the textbook can make it easier or more difficult for the students to find relevant material to learn. In the example above the textbook very conveniently summarises all they need to know.
  • The test normally takes about 10-15 minutes of a lesson. The test is always out of 20 and the pass mark is high, always 14/20. Any students who fail the test have to resit it in their own time. We give rewards for full marks in the test. The test writer must try and ensure that the test represents a reasonable amount to ask all students to learn for homework or the system won’t work.
  • There is no time limit for the test. I just take them in when all are finished.
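For anyone who likes to see the mechanics laid bare, the marking rules above (every test out of 20, pass mark always 14, resits in your own time for anyone below it) are simple enough to sketch in a few lines of code. This is only an illustrative sketch with invented student names, not part of the actual system:

```python
# A minimal sketch of the marking rules described above: every test is
# out of 20, the pass mark is always 14/20, and anyone below it resits.
# The names and scores are made up for illustration.

TOTAL_MARKS = 20
PASS_MARK = 14

def passed(score: int) -> bool:
    """A score of 14/20 or better passes; anything lower triggers a resit."""
    if not 0 <= score <= TOTAL_MARKS:
        raise ValueError(f"score must be between 0 and {TOTAL_MARKS}")
    return score >= PASS_MARK

def resit_list(results: dict[str, int]) -> list[str]:
    """Return the names of students who must resit this fortnight's test."""
    return [name for name, score in results.items() if not passed(score)]

results = {"Amy": 18, "Ben": 13, "Cara": 20, "Dan": 9}
print(resit_list(results))  # Ben and Dan fall below the 14/20 pass mark
```

The fixed pass mark is doing real work here: because it never moves, students accept it and can compare themselves against their own last total rather than against each other.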

I haven’t developed ‘knowledge organisers’, even though I can see their advantages, because I don’t want to limit test items to the amount of material that can be fitted onto one sheet of paper. Additionally, I’ve always felt a bit nervous about sending the message that there is something comprehensive about the material selected for testing. I’ve found my approach has some advantages and disadvantages.

Advantages of this approach to testing:

  • It is regular enough that tests never have to cover too much material and become daunting.
  • I can set a test that I can reasonably expect all students in the class to pass if they do their homework.
  • The regularity allows a familiar routine to develop. The students adjust to the routine quickly and they quite like it.
  • The guidance sheet works better than simply telling students which facts to learn. This is because they must go back to their notes or textbook and find the information which provides a form of review and requires some active thought about the topic.
  • The guidance sheet works when it is clear enough to ensure all students can find the information but some thought is still necessary to locate the key points.
  • Test questions often ask students to use information in the way they will need to use it in extended writing. For example I won’t just ask questions like “When did Hitler come to power?” I will also ask questions like “Give two reasons why Hitler ordered the Night of the Long Knives.”
  • Always making the test out of 20 allows students to try and beat their last total. The predictability of the pass mark also leads to acceptance of it.
  • Initially we get lots of retakers but the numbers very quickly dwindle as the students realise that failing to do their homework has inevitable consequences.
  • The insistence on retaking any failed tests means all students really do end up having to learn a framework of key knowledge.
  • I’ve found that ensuring all students learn a minimum framework of knowledge before moving on has made it easier to teach each subsequent topic. There is a lovely sense of steadily accumulating knowledge and understanding. I also seem to be getting through the course material faster despite the time taken for testing.

Disadvantages of my approach to testing:

  • It can only work in a school with a culture of setting regular homework that is generally completed.
  • Teachers have to mark the tests because the responses are not simple factual answers. I think this is a price worth paying for a wider range of useful test items but I can see that this becomes more challenging depending on workload.
  • There is no neat and simple knowledge organiser listing key facts.
  • We’re fallible. Sometimes guidance isn’t as clear as intended and you need to ensure test materials really are refined for next year and problems that arise are not just forgotten.
  • If you’re not strict about your marking your class will gradually learn less and less for each point on the guidance sheet.
  • This system does not have a built in mechanism for reviewing old test material in a systematic way.
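That last gap could be plugged with a simple expanding review schedule, along the lines of Leitner-style spaced practice. Purely as a hypothetical sketch (nothing like this exists in the system described above, and the topics and week numbers are invented), old material could be rotated back into the fortnightly tests at growing intervals:

```python
# Hypothetical sketch: re-test each topic 1, 3 and 7 weeks after it was
# first tested, so old material gets systematic review at expanding gaps.
# The topic names and week numbers below are invented for illustration.

REVIEW_OFFSETS = (1, 3, 7)  # weeks after the first test

def due_for_review(current_week: int, first_tested: dict[str, int]) -> list[str]:
    """Return topics whose first test was exactly 1, 3 or 7 weeks ago."""
    return [topic for topic, week in first_tested.items()
            if current_week - week in REVIEW_OFFSETS]

history = {"Weimar constitution": 2, "Hyperinflation": 5, "Stresemann era": 7}
print(due_for_review(8, history))  # → ['Hyperinflation', 'Stresemann era']
```

A teacher could then fold a handful of the due topics into each fortnightly test, which would give the systematic review the current approach lacks without changing anything else about the routine.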

We have not really found that lower-ability students (within an ability range of A*-D) struggle. I know that other schools using similar testing with wider ability ranges have not encountered significant problems either. Sometimes students tell us that they find it hard to learn the material. A few do struggle to develop the self-discipline necessary to settle down to some learning, but we haven’t had a student who is incapable when they devote a reasonable amount of time. Given that those complaining are usually just making an excuse for failure to do their homework, I generally respond that if they can’t learn the material for one tiny test how on earth are they proposing to learn a whole GCSE? I check that anyone who fails a test is revising efficiently but after a few retakes it transpires that they don’t, after all, have significant difficulties learning the material. Many students who are weak on paper like the tests.

We also set regular tests of chronology. At least once a week my class will put events printed onto cards into chronological order and every now and then I give them a test like the one below after a homework or two to learn the events. I don’t have to mark these myself – which is rather an advantage!

[Image: example chronology test]

 

I very much liked Steve Mastin’s approach of giving multiple choice tests periodically which review old material. Good multiple choice questions can be really useful but are very hard to write. Which brings me back to my first point. Come on, education technology industry! How about dropping the development of impractical, time-consuming and gimmicky apps? We need those with funding and expertise to work in conjunction with curriculum subject experts to develop genuinely useful and subject-specific forms of assessment. It must be possible to develop products that can really help us assess and track success in learning the key information children need to know in each subject.

The pseudo-expert

A week or so after our first child’s birth we met our health visitor, Penny. She was possibly in her early sixties and had worked with babies all her life. She was rather forthright in her advice but with the wisdom of 40 years behind her I was always open to her suggestions. Our baby refused to sleep in her first weeks. This meant I was getting one or two hours sleep a night myself and Penny’s reassuring advice kept me going. I can never forget one afternoon when our daughter was about 15 days old and Penny walked into our living room, taking in the situation almost immediately. “Now Heather,” she said, “I’m just going to pop baby in her Moses basket on her front. Don’t worry that she is on her front as you can keep an eye on her and if I roll up this cot blanket (deftly twisted in seconds) and put it under her tummy the pressure will make baby feel more comfortable…” Our daughter fell asleep immediately and Penny left soon after but SIX WHOLE HOURS later our baby was STILL sleeping soundly. She knew the specific risk to our baby from sleeping on her front was negligible and that it might just pull the parents back from the brink. I’m grateful to her for using her professional judgement that day.

Penny’s practical but sometimes controversial wisdom contrasted with the general quality of advice available at weekly baby clinic. Mums who were unable to think of an excuse to queue for Penny were told that ‘each baby was different’ and ‘mum and baby need to find their own way’. The other health visitors did dispense some forms of advice. If your baby wasn’t sleeping you could “try cutting out food types. Some mums swear it’s broccoli that does it” or “you could try homeopathy.” The other health visitors had no time for Penny’s old-fashioned belief that mothers could be told how to care for their babies. Instead of sharing acquired wisdom they uncritically passed on to mothers the latest diktats from on high (that seemed to originate from the pressure groups that held most sway over government) and a garbled mish-mash of pseudo-science.

A Twitter conversation today brought back those memories. The early years teacher I was in discussion with bemoaned the lack of proper training for early years practitioners. Fair enough, but what was striking was the examples she gave of the consequential poor practice. Apparently without proper training teachers wouldn’t understand about ‘developmental readiness’, ‘retained reflexes‘ or the mental health problems caused by a ‘too much too soon’ curriculum. The problem is that these examples of expertise to be gained from ‘proper’ training are actually just unproven theory or pseudo-science. The wisdom of the lady in her fifties who has worked for donkey’s years at the little local day nursery is suspect if she is not ‘properly trained’. But trained in what? The modern reluctance to tell others how they should conduct themselves has created a vacuum that must be filled with pseudo-expertise masquerading as wisdom.

How often do teachers feel that they can’t point to their successful track record to prove their worth, and instead must advocate shiny ‘initiatives’ based on the latest pastoral or pedagogical fads dressed up as science? The expert is far from always right, but I value their wisdom. I also value the considered use of scientific research in education. Too often, though, both are sidelined and replaced with something far worse.

The EEF – is this the best we can do?

I’m all in favour of large-scale empirical research in education and was pleased when the government funded the Education Endowment Foundation to conduct randomised controlled trials on the effectiveness of educational strategies. “Good luck to them!” I thought. If finding answers in medicine is tough, how much harder it is in education, where making an intervention uniform is fraught, isolating variables in a complex classroom environment is nightmarish, and the children themselves all bring unique experiences that may mean they respond differently to the same intervention. You may have heard of the famous multimillion-dollar STAR study on the impact of class size on pupil progress. It was considered a methodological model for educational research, one ‘which showed with exemplary technique that reducing class size will enhance equity and achievement in early grades.’ However, when California therefore invested billions in cutting class sizes there was no impact… It seems the progress children had made in the STAR study had a more complex explanation than the research design had anticipated.

I assumed the EEF would be exemplary in its efforts to appreciate and work through the challenges of empirical research in education. However, my confidence in the worth of this endeavour has gradually ebbed away. For example, the EEF conducted a study reviewing the impact of a ‘Core Knowledge’ curriculum on comprehension. My own research has been in this area, so I looked with interest at the EEF’s literature review. I was rather taken aback. The literature review ran to only a few pages and extended no further than an outline of other research on this very specific curriculum idea. There was no wider context for the curriculum proposals outlined, nor any acknowledgement of the broader significance of any findings. How was it that I, a lowly part-time MEd student, had identified so many significant areas that this government-funded research failed to mention? It seemed the EEF funding only stretched to a rather ‘bargain basement’ approach to this aspect of the research report… But maybe it doesn’t really matter. Perhaps it is most important that the research is well conducted…

I disagree. It matters enormously. Even if the experimental structure of a piece of research is sound, if its intellectual structure is deficient the chance of a useful result is greatly reduced. Only researchers with real expertise in the actual area being studied (not just in general research design) can begin to anticipate the mass of variables that must be considered to produce really high-quality research. At the very least this expertise should be gained through very thorough study before embarking on research design, and reflected in a full literature review.

I looked at the EEF research on the impact of summer schools. It barely mentions the curriculum used as a likely explanation for the variable outcomes of previous research. Similar problems are apparent in the EEF research on the programme ‘Philosophy for Kids’. Again the literature review is very brief. In particular, the literature reviewer seemed unaware of the highly relevant and voluminous research in cognitive psychology on the likelihood of ‘far transfer’ of knowledge (i.e. skills developed learning philosophy transferring to widely different ‘far’ contexts, such as improvements in maths and literacy). The research design takes no account of this prior work, and the report writer seems blissfully unaware that if the findings were correct (that ‘Philosophy for Kids’ did have an impact on progress in reading and maths) the impact on a whole field of enquiry would be seismic, overturning the conclusions of many decades of research by scores of cognitive psychologists on the likelihood of this sort of ‘far transfer’. Surely, under these circumstances, advising that this programme ‘can’t do any harm’ without even considering why the findings run contrary to a whole field of research is foolhardy? (To say nothing of the tiny effect sizes found and serious questions about whether the results simply show ‘regression to the mean’.)

I remember Stephen Gorard, a well-respected researcher frequently used by the EEF for their studies, explaining that he was a ‘taxi for hire’. He has conducted research for the EEF on an enormous variety of educational areas, from reading instruction methods to the effectiveness of homework or of teaching philosophy to promote critical thinking in maths. His expertise is in research design and can’t possibly extend to all these areas. People could devote their lives to learning more about any one of them, but surely, at the very least, research should be conducted in conjunction with the real experts in the particular field? There can be no comparison between the correlational findings of a researcher for hire and the ongoing efforts of experts who aren’t simply identifying correlations but have devoted years of their lives to teasing out the causal mechanisms within even a narrow area of education. In a very relevant article, E. D. Hirsch explained how Feynman highlighted what this sort of painstaking research looks like:

Feynman described how one researcher managed with great persistence finally to obtain a reliable result in studying rats in a maze. Here is his description:

There have been many experiments running rats through all kinds of mazes, and so on — with little clear result. But in 1937 a man named Young did a very interesting one. He had a long corridor with doors all along one side where the rats came in, and doors along the other side where the food was. He wanted to see if he could train the rats to go in at the third door down from wherever he started them off. No. The rats went immediately to the door where the food had been the time before.

The question was, how did the rats know, because the corridor was so beautifully built and so uniform, that this was the same door as before? Obviously there was something about the door that was different from the other doors. So he painted the doors very carefully, arranging the textures on the faces of the doors exactly the same. Still the rats could tell. Then he thought maybe the rats were smelling the food, so he used chemicals to change the smell after each run. Still the rats could tell. Then he realized the rats might be able to tell by seeing the lights and the arrangement in the laboratory like any commonsense person. So he covered the corridor, and still the rats could tell. He finally found that they could tell by the way the floor sounded when they ran over it. And he could only fix that by putting his corridor in sand. So he covered one after another of all possible clues and finally was able to fool the rats so that they had to learn to go in the third door. If he relaxed any of his conditions, the rats could tell.

Hirsch goes on to argue that this sort of complexity means that teasing out deep-lying causal mechanisms from classroom research is hopeless. I do think there is a place for empirical classroom research, but is it too much to ask that it is conducted by experts in specific areas, with fierce debate and active peer review, examining painstaking research carried out over many years by academics driven by a desire to unpick those ‘deep-lying causal mechanisms’? Did our knowledge of medicine progress by hiring freelance researchers who dipped into whatever field they were hired to study, found a possible correlation (“won’t do any harm to try those leeches”) and then moved on?

I am not suggesting there is no place for smaller-scale randomised controlled trials, nor questioning the obvious professionalism of those who have conducted them within the parameters requested. However, is this really the best we can do?