The EEF – is this the best we can do?

I’m all in favour of doing large-scale empirical research in education and was pleased when the government funded the Education Endowment Foundation to conduct randomised controlled trials on the effectiveness of educational strategies. “Good luck to them!” I thought. If finding answers in medicine is tough, how much harder it is in education, where making your intervention uniform is fraught, isolating variables in a complex classroom environment is nightmarish, and the children themselves bring unique experiences that may mean they respond differently to the same intervention. You may have heard of the famous multimillion-dollar Tennessee STAR study on the impact of class size on pupil progress. It was held up as a methodological model for educational research, one ‘which showed with exemplary technique that reducing class size will enhance equity and achievement in early grades.’ Yet when California, on that basis, invested billions in cutting class sizes, there was no impact… It seems the progress children made in the STAR study had a more complex explanation than the research design had anticipated.

I assumed the EEF would be exemplary in its efforts to appreciate and work through the challenges of empirical research in education. However, my confidence in the worth of this endeavour has gradually ebbed away. For example, the EEF conducted a study evaluating the impact of a ‘Core Knowledge’ curriculum on comprehension. My research has been in this area, so I looked with interest at the EEF’s own literature review. I was rather taken aback. It ran to only a few pages and extended no further than an outline of other research on that very specific curriculum idea. There was no wider context for the curriculum proposals outlined, and no acknowledgement of the broader significance of any findings. How was it that I, a lowly part-time MEd student, had identified so many significant areas that this government-funded research failed to mention? It seemed the EEF funding only stretched to a rather ‘bargain basement’ approach to this aspect of the research report… But maybe it doesn’t really matter. Perhaps what matters most is that the research is well conducted…

I disagree. It matters enormously. Even if the experimental structure of a study is sound, a deficient intellectual structure greatly reduces the chance of a useful result. Only researchers with real expertise in the actual area to be studied (not just in general research design) can begin to anticipate the mass of variables that must be considered to produce really high-quality research. At the very least, this expertise should be gained through thorough study before embarking on research design, and it should be reflected in a full literature review.

I looked at the EEF research on the impact of summer schools. It barely mentions the curriculum used, a likely explanation for the variable outcomes of previous research. Similar problems are apparent in the EEF research on the programme ‘Philosophy for Children’. Again the literature review is very brief. In particular, the reviewer seemed unaware of the highly relevant and voluminous research in cognitive psychology on the likelihood of ‘far transfer’ of knowledge (i.e. skills developed learning philosophy transferring to widely different ‘far’ contexts, such as improvements in maths and literacy). The research design takes no account of this prior work, and the report writer seems blissfully unaware that if the findings were correct (that ‘Philosophy for Children’ did have an impact on progress in reading and maths) the impact on a whole field of enquiry would be seismic, overturning the conclusions of many decades of research by scores of cognitive psychologists on the likelihood of this sort of ‘far transfer’. Surely, under these circumstances, advising that the programme ‘can’t do any harm’ without even considering why the findings run contrary to a whole field of research is foolhardy? (To say nothing of the tiny effect sizes found and serious questions about whether the results simply show ‘regression to the mean’.)
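Since ‘regression to the mean’ may be unfamiliar, here is a minimal sketch in Python of the statistical point (the numbers and the selection rule are invented for illustration; nothing below is drawn from the EEF trial itself): select pupils because they scored badly on a noisy test, retest them with no intervention whatsoever, and their scores ‘improve’.

```python
import random
import statistics

random.seed(1)

def observed_score(true_ability):
    # One noisy measurement of a stable underlying ability.
    return true_ability + random.gauss(0, 10)

# A cohort of pupils with normally distributed 'true' ability.
cohort = [random.gauss(100, 15) for _ in range(10_000)]

# Pre-test everyone, then select the strugglers: the kind of
# pupils a catch-up intervention would target.
pretest = [(ability, observed_score(ability)) for ability in cohort]
selected = [(ability, score) for ability, score in pretest if score < 90]

# Retest the selected group with NO intervention at all.
posttest = [observed_score(ability) for ability, _ in selected]

gain = statistics.mean(posttest) - statistics.mean(s for _, s in selected)
effect_size = gain / statistics.stdev(posttest)

print(f"Apparent gain with no intervention: {gain:.1f} points")
print(f"Apparent effect size: {effect_size:.2f}")
```

The selected group ‘gains’ several points purely because the bad luck in their first measurement does not repeat, so unless the comparison group is formed and analysed with equal care, a small positive effect size can be an artefact of selection rather than evidence of impact.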

I remember Stephen Gorard, a well-respected researcher frequently used by the EEF for their studies, explaining that he was a ‘taxi for hire’. He has conducted research for the EEF across an enormous variety of educational areas, from methods of reading instruction to the effectiveness of homework or of teaching philosophy to promote critical thinking in maths. His expertise is in research design and cannot possibly extend to all these areas. A person could devote a lifetime to learning more about any one of them, so surely, at the very least, research should be conducted in conjunction with the real experts in the particular field? There can be no comparison between the correlational findings of a researcher for hire and the ongoing efforts of experts who are not simply identifying correlations but have devoted years of their lives to teasing out the causal mechanisms within even a narrow area of education. In a very relevant article, E.D. Hirsch uses Feynman to show what this sort of painstaking research looks like:

Feynman described how one researcher, with great persistence, finally managed to obtain a reliable result when studying rats in a maze. Here is his description:

There have been many experiments running rats through all kinds of mazes, and so on — with little clear result. But in 1937 a man named Young did a very interesting one. He had a long corridor with doors all along one side where the rats came in, and doors along the other side where the food was. He wanted to see if he could train the rats to go in at the third door down from wherever he started them off. No. The rats went immediately to the door where the food had been the time before.

The question was, how did the rats know, because the corridor was so beautifully built and so uniform, that this was the same door as before? Obviously there was something about the door that was different from the other doors. So he painted the doors very carefully, arranging the textures on the faces of the doors exactly the same. Still the rats could tell. Then he thought maybe the rats were smelling the food, so he used chemicals to change the smell after each run. Still the rats could tell. Then he realized the rats might be able to tell by seeing the lights and the arrangement in the laboratory like any commonsense person. So he covered the corridor, and still the rats could tell. He finally found that they could tell by the way the floor sounded when they ran over it. And he could only fix that by putting his corridor in sand. So he covered one after another of all possible clues and finally was able to fool the rats so that they had to learn to go in the third door. If he relaxed any of his conditions, the rats could tell.

Hirsch goes on to argue that this sort of complexity makes teasing out deep-lying causal mechanisms from classroom research hopeless. I do think there is a place for empirical classroom research, but is it too much to ask that it be conducted by experts in the specific area, with fierce debate and active peer review, examining painstaking research conducted over many years by academics driven by a desire to unpick those ‘deep-lying causal mechanisms’? Did our knowledge of medicine progress by hiring freelance researchers who dipped into whatever field they were hired to study, found a possible correlation (“won’t do any harm to try those leeches”) and then moved on?

I am not suggesting there is no place for smaller-scale randomised controlled trials, nor am I questioning the obvious professionalism of those who have conducted them within the parameters requested. However, is this really the best we can do?

6 thoughts on “The EEF – is this the best we can do?”

  1. I also wonder whether their research findings regarding the apparent ineffectiveness of keeping pupils back a year are as robust as they claim.

  2. Thank you for this blog post. It did need saying. We are becoming very complacent in our acceptance of ‘research’, particularly when it has the weight of an organisation like the EEF behind it. Much of it is very selective and questionable, and I worry that some of it is politically tinged. For example, research on music and attainment is not mentioned (not the same as ‘the arts’). I wonder why.
