There’s no point abolishing rival exam boards.

‘There is a problem with dumbing down in exams.’

I think I’d agree with that statement, but those who assume abolishing rival exam boards is a neat solution would be very disappointed if they got their way. There is a presumption that exams seem easier because exam boards are competing for business and dumb down their papers to attract teachers looking for better grades for their students. Like all the best misapprehensions, this one is probably grounded in some truth, but detractors would discover how little only when they saw the problems they identify persisting however many boards were touting for business. The planet-brained Tim Oates offers a very wise analysis of the probable reasons for grade inflation here:

Take the example in the Daily Mail today. Nick Gibb, the schools minister, is horrified at the inclusion of a very simple ‘spot the difference’ type question in the new Edexcel history GCSE that requires no historical knowledge to score 2 marks. He makes the understandable presumption that such a question was included because exam boards consciously decided to make GCSE papers easier, instructing their subject-specialist exam paper writers to dumb down the new history GCSE with the cynical and explicit intention of attracting more punters (the schools). The same presumption is widespread:

However, these very poor question types are not new. They are a normal feature of history GCSE exams. They are the sort of questions history teachers have been preparing children to answer for decades, and they match the current criteria for the inclusion of sourcework in exams. If there were just one exam board, who exactly would be employed to write the questions? The same people who write them now, working to exactly the same brief, and so it is reasonable to presume they would continue to write exactly the same sort of questions they do now.

While the one question featured in the Daily Mail article was probably easy to score two marks on, and was almost certainly included to ensure the very weakest students score a few marks, you might be surprised how frequently intelligent answers don’t conform to the markscheme and thus score no marks. It would also be a mistake to presume that these question types are generally easy just because they require little knowledge of history. If a student answers the question in a way that does not conform to the markscheme, they score no marks, however insightful their points. History teachers have to invest many hours training their pupils to answer a wide range of question types. A head of department at a recent meeting said she devoted 50% of teaching time to ‘skills’, which largely means training students to conform to markschemes. Despite this, another commented that whether students had sat GCSE made little appreciable difference to how well they tackled A level source questions, because they tried to apply what they knew about markscheme demands from GCSE when A level papers required a different approach. If exam writers were serious about making papers easier, it would be much more effective simply to reduce the number of very different question types, so that teachers needed to spend less time drilling students to address a very wide range of unpredictable markscheme requirements. Actually, if we want to get rid of such easy questions, we need to question the ideological and pedagogical presumptions such question styles are based upon and suggest better source question types, such as here:

How do GCSE History source questions need to change?

Going down to one exam board will do nothing AT ALL in this direction.

Abolishing rival exam boards will in no way address these ongoing issues. All it will mean is that teachers will be unable to vote with their feet when it is clear that the approach their exam board has chosen requires more time training in markscheme requirements than teaching history. I moved our school to IGCSE a few years ago despite presuming it would mean the history was more challenging. Most teachers change boards when the specification and exam on offer prove to be poorly designed or deliver highly unpredictable results. I am horrified at the suggestion that it would be better to have one board. At that point, whoever is designing the qualification and writing the questions will have zero external incentive to ensure what they produce is high quality and workable. At least now we can vote with our feet, and often do.

The current calls for one board rest on misapprehensions: that exams and specifications are easy to write, that exam boards are always explicitly attempting to make them easier, and that teachers generally move boards simply to secure better grades rather than because there are quality issues with the board’s offering. In fact, exam specifications and papers are incredibly difficult to get right. All sorts of apparently simple decisions can have wide-ranging and unintended consequences for outcomes. See here:

Just look at the Ofqual findings on why there were problems with A level languages to appreciate this. I’m not exactly a neoliberal, but in this case the market provides at least some pressure to improve quality, and moving to one exam board offers no real solution to the problem of dumbing down.





A truism that needs questioning: The importance of ‘high quality’ preschool education.

It is a truth universally acknowledged that young children, especially those not in possession of a good middle class upbringing, must be in need of ‘high quality preschool provision’. The phrase is on every politician’s lips. David Cameron is clear about this. Nicky Morgan, Tristram Hunt and Liz Kendall are sure it will create a skilled workforce of the future and Barack Obama has pumped countless dollars into ‘high quality’ preschool programmes in the belief that research shows that ‘high quality’ provision is the key to better life outcomes.

You might be surprised to learn ‘high quality’ has a very specific meaning that goes well beyond the common sense idea that some preschools must be better run than others. The National Audit Office commissioned a summary of the evidence on the impact of early years’ provision in which they explained that “In pre-school education (3+ years), quality is most often associated with the concept of developmentally appropriate practice”.

The English Early Years Foundation Stage statutory framework explains what is meant by ‘developmentally appropriate’ (i.e. high quality) practice for 0-5 year olds:

“Each area of learning and development must be implemented through planned, purposeful play and through a mix of adult-led and child-initiated activity.”

Everybody believes young children should play lots and can learn while playing. In England, however, high quality provision does not just mean giving young children time to play; it makes it statutory that the bulk of any learning must be through child initiated play. As the statutory framework explains:

“Children learn by leading their own play, and by taking part in play which is guided by adults.”

To be clear, if I want my four year old to learn to wash himself I could:

  1. Instruct him directly, but that would be bad practice under the EYFS framework for the majority of learning goals (not really a high quality approach).
  2. Play a game with him that involves washing. That would be ‘adult led’ play, and is only acceptable some of the time.
  3. Finally, I could try to engineer a situation where my child is likely to want to play at washing himself (an ‘enabling environment’) and offer gentle nudges to ‘enrich’ his play in the right direction. That is ‘child initiated’ learning and is at the heart of what is meant by ‘high quality’ preschool practice.

Child initiated play is prioritised because it is believed to facilitate the central goal of ‘high quality’ pre-schooling: character development. For example, the ‘guiding principles’ of the English EYFS statutory framework are a series of dispositions: children should become resilient, capable, confident and self-assured, strong and independent. This is what is meant by the phrase ‘educating the whole child’.

What is the basis for this widely held view of ‘high quality’ pre-schooling?

For me this statutory definition of high quality pre-schooling is problematic for a number of reasons.

1. I’ve looked into character education and there seems to be a limited basis for the belief that the dispositions and skills which are the goals of this form of pre-schooling can be taught, and even if they can be inculcated, no real basis for the idea that child initiated learning is the way to do so. For example, it is statutory in English preschools to devise activities to build resilience, yet Angela Duckworth, viewed as an international authority on ‘grit’, admits that although it is a desirable trait we don’t really know for sure how to create it!

2. I taught my own young children to read, do maths, swim, wash and dress. They learnt maths to a high level without my engineering ‘enabling environments’ for child initiated learning.

3. This belief that high quality preschools are child-centred and ‘developmentally appropriate’ flies in the face of the enormous American state-sponsored Project Follow Through, which found that direct instruction pre-schooling delivered far greater cognitive gains than child centred approaches.

4. The research by cognitive psychologists is pretty damning of the idea that developmentally appropriate practice is beneficial.

A report by the National Audit Office on the evidence for the impact of pre-schooling suggests the evidence base for the widely cited definition of ‘high quality’ is a small handful of very old and tiny studies, particularly one I have already written about, the highly flawed High/Scope Perry study which didn’t even find any long term cognitive benefits.

I thought there had to be a firmer basis for what amounts to an international education policy. I investigated further and did find lots of studies looking at the effect of pre-schooling on outcomes, but it is hard to find any of the sort policy makers would be interested in that provide a basis for how ‘high quality’ has been defined.

There is one very well-known and significant study that purports to do so: the ‘Effective Provision of Pre-School Education Project’ (EPPE), a large longitudinal study involving 3000 children and sponsored by the DfES. One of its aims was to identify the characteristics of an effective pre-school setting. The study involved careful classroom observation, particularly using the most widely used measure of preschool classroom quality, the Early Childhood Environment Rating Scale (ECERS-R). The EPPE report explains the use of this ECERS-R measure:

“Matters of pedagogy are very much to the fore in ECERS-R. For example, the sub scale Organisation and Routine has an item ‘Schedule’ that gives high ratings to a balance between adult initiated and child initiated activities. In order to score a 5 the centre must have a balance between adult initiated and child initiated activities.”

Hold on, surely not? This very large government-funded longitudinal study aims to identify high quality practice using a rating system which predefines what is meant by high quality! The ECERS-R rating system was developed in the late 1970s and is used extensively around the world to judge preschool quality. I spent some time looking for the evidence base for its assumptions. I found that the quality ratings were compiled by one of the creators, using her teaching experience. There has been some criticism of the ECERS-R. Gordon et al write:

“The ECERS and ECERS-R reflect the early childhood education field’s concept of developmentally appropriate practice, which includes a predominance of child initiated activities…a ‘whole child’ approach that integrates physical, emotional, social and cognitive development…there is surprisingly little empirical evidence of the validity of the ECERS-R instrument using item response models.”

Gordon et al explain that there is a fundamental problem with the ECERS-R scoring system: statements that allow higher scores (indicators) are only counted if indicators of lower scores are met, yet the scales ‘mix dimensions’. I’ll explain. One of the scales a preschool is judged along, the ECERS10, includes indicators of nutrition (food served is of unacceptable nutritional value), caregiver-child interactions (non-punitive atmosphere during meals), language (meals and snacks are times for conversation) and sanitation, among others! If the food is of unacceptable nutritional value, the scorer cannot even judge items higher up the scale, even though they are really unrelated! Unsurprisingly, researchers found ‘the category ordering assumed by the scale’s developers is not consistently evident.’ Interestingly, they also found few associations between ECERS-R and child outcomes, and they suggest ‘small correlations may be attributable, in part, to the low validity of the measure itself’.

So the best recent research in England on what is meant by ‘high quality’ preschool education, the EPPE longitudinal study, uses a measure which predefines quality. This measure has been widely used to define high quality but is based on one teacher’s observations and has a questionable correlation with outcomes, which is unsurprising when you consider the scales mix dimensions. Finally, the very best evidence the National Audit Office could find to justify the ‘developmentally appropriate’ definition of high quality was a tiny, highly flawed study from 50 years ago.

I don’t suppose politicians have any idea that they are endorsing a very particular ‘child centred/developmentally appropriate’ form of early education when they herald ‘high quality early education’ as the panacea for society’s ills, or that there is little justification for actively endorsing this particular approach, let alone making it statutory. In fact, whatever might be written about the findings of the EPPE study, the actual statistics endorse something much more like direct instruction. I talk more about the problems with EPPE/EPPSE here.

Other relevant posts on:

What a child initiated education looks like

The view of early years’ educationalists on direct teaching