Wednesday, 1st June, 2011

Right, I haven’t blogged for a while, but thought I’d just post here an extract from a speech I made just after Christmas about what can be read into English Sats results for 11-year-olds.

I’ve been prompted to do this after reading, over the last two days, the Evening Standard’s coverage of what it claims is a literacy crisis in London.

Yesterday, part of its front-page coverage talked about one in four children being “practically illiterate”, seemingly based on the proportion of pupils achieving level 3 or below in English Sats.

Today, it highlighted the number of pupils “with a reading age of seven”, based, I think, on the numbers achieving level two or below. (Level two is the level the Government says represents the expected reading standard of a seven-year-old.)

I don’t think test statistics can support the interpretation being put upon them. It may be that we have a literacy problem in the capital, or in the country as a whole. But the test data used as a good part of the news hook for the coverage don’t do a good job of telling us the nature of the problem. It doesn’t help that news coverage often fails to put the numbers in perspective. Ideally, it would give us unsensationalised information on whether the statistics are on an upward, downward or static trend, and on what we know about how this country compares with others, but this tends not to happen.

Anyway, here’s the extract of that speech, prompted in part by similar coverage on the Today programme before Christmas.

I want to talk about the over-interpretation of test results: they don’t tell us nearly as much as we might think they do. Perhaps just as importantly, in our public debate around education we don’t really use the data to understand what is going on in schools or with pupils’ learning, and in that sense we are letting children down, because we should, I think, be using assessment information in a far more sophisticated way. And bear with me, as I am going to have to go into a bit of detail here.

So, I’ll just start with a question: What is the definition of the level of understanding expected of an 11-year-old in reading? How is this defined by the government, by the media, and thus by people nationwide in the debate about this vitally important subject?

What does it mean, within the detail of what children have to achieve, for them to perform at that level?

Well, in 2010, it came down to this: the ability of a child to score 18 marks out of 50 in a one-off 45-minute test, taken by most pupils as they come to the end of their primary school years.

That is the number of marks needed to secure level four in reading, the Government expectation, and represents the entire official judgement on that pupil’s ability in reading over the past four years.

If a child scored 30 marks out of 50 in last year’s tests, they would have achieved a level five in reading, which, statistically and according to the interpretation we are expected to put on these data, is the level of proficiency expected of a 14-year-old. If they scored between 11 and 17 marks, they would be at level three.

That is it. Nothing else counts in official estimations of what it means to be able to read. Our entire primary education system – at least so far as reading is concerned – hinges on the proportion of pupils achieving these expectations and pass marks, which are very closely bunched, in a one-off test one day in May.
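To make the mechanics concrete, here is a minimal sketch – my own illustration, not anything published by the test authorities – of how a raw mark converted to a level under the 2010 reading thresholds quoted above. The boundaries are reset each year, so the numbers below are examples only.

```python
# Illustrative only: maps a raw mark on the 2010 Key Stage 2 reading test
# (out of 50) to a national curriculum level, using the thresholds quoted
# in the speech: 11 marks for level three, 18 for level four, 30 for level five.
# Boundaries are reset each year, so treat these as an example, not a rule.

def reading_level(mark: int, max_mark: int = 50) -> str:
    if not 0 <= mark <= max_mark:
        raise ValueError(f"mark must be between 0 and {max_mark}")
    if mark >= 30:
        return "level 5"    # read as the proficiency expected of a 14-year-old
    if mark >= 18:
        return "level 4"    # the government's expectation of an 11-year-old
    if mark >= 11:
        return "level 3"
    return "below level 3"  # level 2 at best: reported as a 'reading age of seven'

# The gap driving the headlines: 10 marks falls below level 3, while
# eight marks more meets the expected standard for an 11-year-old.
for mark in (10, 11, 17, 18, 29, 30):
    print(mark, "->", reading_level(mark))
```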

I highlight the case of reading because it came up in coverage shortly before Christmas by the Today programme. It led its broadcasts one morning with claims that “thousands of boys start secondary school only able to reach the reading standards of seven-year-olds or below”.

This was based on a technically accurate interpretation of figures generated by national test data, but it led me to question why people are putting such huge weight on figures which, if you step back from this for a second and think about the detail of what these data mean, cannot support this interpretation.

Today had obtained figures – released every year – which showed that in 2010, 10 per cent of pupils obtained below a level three in the reading test. This means that they either scored 10 marks out of 50 – 20 per cent – or below on the test, or did not even take it.

The logic of Today’s argument was this. Pupils scoring below level three in the reading test have scored level two at best. Level two is the performance technically expected of a seven-year-old in the tests pupils take at this age. So the 10 per cent of boys failing to achieve level three are performing at the level expected of a seven-year-old.

This finding, which suggests a serious problem – implying, I would venture, to many listeners, that many boys are wasting years at school making no progress – is viewed as a national scandal; it is, at the very least, very serious for these boys.

Consider, though, more detail on how these data are generated. A child could fail to achieve a level three with 10 marks out of 50. But with another eight marks – 18 out of 50, or 36 per cent – these boys would have achieved a level four, in line with government expectations of an 11-year-old.

The difference between having the reading age of a seven-year-old, around which national debate centred, and that of an 11-year-old turns out, then, to be eight marks on one 50-mark test. To put it another way, a seven-year-old who took this reading test could have scored 10 marks and be said to be performing in line with expectations for their age.

If they took a similar test four years later as an 11-year-old and scored 18 marks, then they would be deemed to be doing as well as expected for an 11-year-old. Thus, four years’ progress in reading could be said to come down to the ability to improve by two marks a year in a 50-mark test.
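To make the arithmetic behind that claim explicit, using only the 2010 figures quoted above:

\[
\frac{18 \text{ marks} - 10 \text{ marks}}{4 \text{ years}} = 2 \text{ marks per year}
\]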

I went into some detail in this example to illustrate the difficulties we have in the way test data are being used. Believe me, I am not trying to minimise this problem: if a large number of boys really cannot read, it is a serious national issue.

The trouble is, I don’t think test data – and, to a certain extent, the way they are reported – are helping us understand the nature of that problem and thus to do something about it.

Consider again the interpretation of the figures around which the Today programme that morning revolved, including an interview with Michael Gove, the Education Secretary.

The lead headline on the programme’s website read “Gove: 11-year-old illiteracy ‘unacceptable’”. John Humphrys, the presenter, also used the term “illiteracy”.

But, in fact, the test data actually tell us nothing about “illiteracy”. They don’t tell us whether the boys quoted in the programme actually are “illiterate” – that is, cannot read or decode text – or whether their problems are different from that.

Strictly, they tell us only that a number of boys either couldn’t score a certain number of marks in a one-off reading comprehension test (further scrutiny of the government data shows 4 per cent were entered but didn’t achieve level three), or their teacher did not enter them for such a test because they believed they would not pass (5 per cent), or they simply missed the test (1 per cent).

We don’t know, then, from the test data, whether the problem with these children is: a) a genuine inability to decode text – although the fact that nearly half of them scored some marks on this test would suggest this was not the issue for these children; b) problems with reading for comprehension (ie they can actually read the words, but they don’t really understand either what they mean or what the question is asking); or c) a failure to cope with the format of being tested.

There is, of course, another explanation: that these children scored below their “true” level of understanding through having an “off-day” or just being unlucky: there will always be measurement uncertainty and inaccuracy in a one-off test.

If we don’t know what these figures actually mean, how can we do anything to help children to genuinely improve? Is what the nation needs a greater emphasis on helping children with decoding, as is suggested through the introduction of a new phonics test, or more work on comprehension, for example? The test data give us no answer.

It also was not reported – and generally isn’t – that we do know that substantial numbers of pupils in this category of failing to reach level three have special educational needs: by my calculations from government data, seven in 10 children who failed to reach level three in English in 2010 were classed as having a special need. Nearly 10 per cent of those failing to reach level three are classed as autistic; a further seven per cent have specific learning difficulties; 10 per cent have communication needs and a further 10 per cent have behavioural, emotional and social difficulties. None of these figures were presented in the Today programme reporting.

Neither, by the way, was any international context given: boys’ reading is a problem around the world, as last month’s Organisation for Economic Co-operation and Development (OECD) PISA study showed. It included the following quote: “Across OECD countries, 24 per cent of boys perform below level 2 [at the bottom of six levels of the PISA reading tests], compared to 12 per cent of girls. Policy makers in many countries are already concerned about the large percentage of boys who lack basic reading skills.”

The fact that these test data – and sometimes the reporting around them – allow us only a very superficial, decontextualised understanding means that we really are letting down pupils and the education system as a whole.

We could do so much better. Not only does the accountability system which centres on pushing schools to raise these test numbers – through league tables, targets, Ofsted inspections and the rest of the centrally-created performance apparatus – encourage schools to spend months drilling pupils to achieve what amounts, if you look in detail at what the figures mean, to a few extra marks on one-off English and maths tests; we also lack the understanding, in terms of the national data that these test figures generate, both to help these pupils do better – ie to work out what it is they can and cannot do – and to help the system as a whole to improve.

Today presented the problem as a hugely serious issue for the nation. But we are not taking it seriously at all if this is the level of analysis being offered.

If Sats are the height of our ambition in assessment – and there are still signs, under the new government, that this is what pupil progress will revolve around – then we really have a problem. We need to look at the use of much more sophisticated and useful measures of children’s understanding, both from the point of view of helping the individual child improve, and from the point of view of getting a much better understanding of what is really happening nationally.

 The rest of this speech, to a meeting held in Parliament in January to launch a joint Association of Teachers and Lecturers/National Association of Head Teachers/National Union of Teachers pamphlet on assessment and accountability to which I contributed, went on to talk about problems of the washback effect on teaching of high-stakes test-based accountability, with which readers of this blog will be familiar.

- Warwick Mansell

One Response to “What national tests tell us, or not, about children’s reading ability”

john quicke

June 17th, 2011 - 3:49 pm

It is worth remembering that Level 2 in reading is defined as follows (I quote from the 1994 SCAA document): “Pupils’ reading of simple texts is generally accurate and shows understanding. They express opinions about major events or ideas in stories, poems and non-fiction. They use more than one strategy (phonic, graphic, syntactic and contextual) in reading unfamiliar words and establishing meaning”. Is this illiteracy?
