Archive for August, 2009

Thursday, August 27th

Do today’s GCSE results, which show another improvement to a new record, amount to testimony that our students and teachers are working harder than ever before and that education standards are rising, or that the papers are getting easier?

It is the routine question at this time of year. And the honest answer is that there is no answer. The statistics, the principal means by which the Government is supposedly held to account for its education performance, provide no good way of telling whether underlying standards of teaching and learning are rising or falling.

Put together with immense care and technical expertise by exam board professionals who are among the most sophisticated in the world at this business, the stats do a pretty good job of sorting out the A grade student from the C grade, and the C grade from the F grade.

But they cannot tell us whether education standards are rising or falling. Why not? Because too much about the exams system has changed over the years. As some experts in this field say: “If you want to measure standards, don’t change the measure.”

Among the alterations in the past five years have been the scrapping of compulsory language study to 16 – which might have been expected to improve grades, as pupils exercise more choice over which subjects they study – a move towards more modular courses, and, as today’s figures show, increased “gaming” of the system by schools, which encourage pupils to take subjects including maths early, with the chance to resit later on in the course.

As Mike Cresswell, the director general of the AQA board who led the GCSE results press conference this morning, argued in a paper a few years ago, “The statistics of public examinations cannot, and do not, provide objective or unequivocal quantitative measures of temporal changes in educational standards.”

You could say, then, that it is a mistake even to attempt to answer the question with which this blog began. The case for that view is particularly strong because curriculum changes mean that the content of what is learnt in schools from one era to the next is not strictly comparable. And shouldn’t we just stand back at this time of year and allow the students to receive their grades in peace?

This is, however, never going to happen so long as the Government makes the grades themselves a judgement not only of what pupils have gained from their education, but of its own performance. To put out national figures at the same time that students receive their results, and then ask the media simply to go along with Government claims that they are unequivocal testament to rising standards, is to ask the media not to do its job.

If those in charge were really committed to addressing claims that this debate undermines pupils’ achievements, they could do two things. First, they could delay the publication of national results until some weeks after students had got their grades, which might separate the argument about national standards from the time when teenagers are given their verdicts. Or second, and much the better long-term solution, they could depoliticise this debate by introducing another measure of national education standards, such as getting a small sample of pupils to be assessed every year outside of the GCSE and A-level system.

That second approach would go a long way towards stopping ever-rising GCSE and A-level grades from being seen as an end in themselves, to be aimed for whatever the means taken to achieve them. The system by which ministers set themselves targets, then watch as league tables, performance management and the Ofsted, school improvement partner and Fischer Family Trust regime of statistically-driven bullying all put huge pressure on schools to help ministers meet those targets, is hugely damaging to education, with very few powerful people in this regime asking what the results really mean, or questioning the measures being taken to raise them.

The side-effects, of course, are legion, as my book and this website have attempted to document. The most obvious, now, is that the last four years of most students’ lives at school could be seen as one long exercise in exam preparation, as they go through module after module of GCSEs and A-levels. Modularisation, to use a horrible word, may be the right way to go in educational terms – I am to some extent agnostic on it. But it has developed to the degree it has because it is seen to be more likely to produce good grades, whatever they mean, and whatever the educational side-effects. It is true that some within the exam boards are concerned about this trend. But anyone trying to resist it is facing an uphill battle, given the pressures on schools over results.

This is particularly unfortunate because not only are the results not useful as a measure of underlying education standards, but the grades drive may not even, in its own right, help students, since rising scores may simply prompt employers and universities to increase the grades they demand of new recruits.

Neither does the advent of the results-are-everything culture seem to have achieved much in terms of improving social mobility: while the proportion of A grades for pupils from comprehensives has improved at both GCSE and A-level in recent years, the rate has improved by at least as much in private schools, from a higher base.

Students should not be too depressed today. Good grades still reflect hard work. But this emphasis on grades as ends in themselves, for the nation as a whole, is a grave national mistake.

- Warwick Mansell

posted on August 27th, 2009

Tuesday, August 25th

A report on mathematics teaching in secondary schools offers some further disturbing insights into how the push for better reported grades for schools (and their pupils) can come at the expense of building genuine understanding of a subject.

The “Evaluating Mathematics Pathways” interim report was produced for the Qualifications and Curriculum Authority by a consortium led by the University of Nottingham, reporting in April. The evaluation was set up to investigate changes to the qualifications structure in maths.

The report says: “One of the most significant challenges to improving learner experiences in mathematics classrooms is the effect of high-stakes external assessment on the experienced curriculum, particularly the ways teachers are compelled to behave in response to performative pressures.”

The report then sets out some detail. Schools which are under pressure to raise the proportion of their pupils achieving C grades or better have experimented with different strategies for entering these students for the exam. For example, pupils are encouraged to complete the GCSE by the end of year 10 and are then entered with the hope of securing a grade C. Or borderline students are entered for the more difficult “higher tier” papers, covering grades A*-C, and then told they only need to worry about completing the easier questions on the paper. Presumably this is because a C is easier to secure by that route than by attempting the easier set of papers, where a far larger share of the marks is needed for a C, as the sketch below illustrates.
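For readers who want the arithmetic of that guess laid out, here is a minimal sketch in Python. All of the figures are invented for illustration – real grade boundaries vary by board, subject and year – but the mechanism is the one the report describes.

    # Hypothetical figures, invented for illustration: real boundaries vary
    # by board and year. On the higher tier, C sits near the bottom of the
    # grade range, so its boundary is a low share of the raw marks; on the
    # easier papers, C is the top grade available, so the boundary is high.
    PAPER_TOTAL = 200
    HIGHER_C_BOUNDARY = 55    # roughly 28 per cent of the raw marks
    EASIER_C_BOUNDARY = 150   # roughly 75 per cent of the raw marks

    def success_rate_needed(boundary, marks_attempted):
        """Average hit rate required on the marks a pupil actually attempts."""
        return boundary / marks_attempted

    # A borderline pupil told to attempt only the easier higher-tier
    # questions, worth say 80 of the paper's 200 marks, needs about 69 per
    # cent success on just those questions:
    print(success_rate_needed(HIGHER_C_BOUNDARY, 80))            # ~0.69

    # On the easier papers, the same pupil must attempt virtually everything
    # and sustain 75 per cent success across all 200 marks:
    print(success_rate_needed(EASIER_C_BOUNDARY, PAPER_TOTAL))   # 0.75

On these invented numbers, the higher-tier route demands a lower hit rate, on a narrower slice of the paper, than the conventional route demands across the whole exam – which is presumably the calculation schools are making.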

The report says: “One of the main implications of the above entry strategies is the very likely scenario that fewer students will get sustained and meaningful engagement with those aspects of the programme of study typically assessed at higher tier.” Crucially, for those wanting to study the subject at A-level, this includes algebra, which is the foundation for much success post-16. Students who had seen their results at GCSE boosted in this way could then move on to A-level and find they struggled, because they had not mastered the subject. The report said that this tendency was particularly strong in 11-16 schools, which do not have to face the consequences of these strategies in the sixth form.

It concludes: “QCA should alert DCSF to the risks in maintaining and widening participation in the study of mathematics post-16 associated with the accountability measure of grade C in maths in the achievement and attainment [league] tables. There is evidence of increased early entry [of pupils] in order to ‘bank the grade C’, which may be particularly detrimental to transition issues at age 16.”

The findings were taken up in another independent report, by the curriculum development body Mathematics in Education and Industry. It says: “Some schools are now entering pupils for GCSE at the end of year 10 hoping to obtain a grade C. This practice is new and seems to be, at least in part, a response to the accountability requirement for mathematics.”

The Nottingham report also raises questions about exam boards’ move to offer GCSEs in “modular” form, where papers are taken throughout a course, with resits possible, rather than all together at the end.

It says: “[Modular GCSEs] have been used by many centres as a means of raising attainment but do not necessarily improve levels of algebraic competence or mathematical understanding. We have also been told that graduated assessment [of this sort] tends to hinder teaching for progression in a topic.”

Looking into this issue, I also came across a joint document submitted by the two leading subject associations for maths to last year’s Rose Review of the primary curriculum, carried out for the Government. (Available from here).

This mentions the impact of high-stakes testing on their subject, which Sir Jim Rose was barred by the Government from considering, a move the submission describes as “ridiculous”.

It says: “The high stakes assessment cannot be ignored – it is the most significant factor which limits the improvement of teaching and learning in primary mathematics.”

It adds: “Although the term ‘raising attainment’ is in common parlance, it was felt that the narrow interpretation of this to mean higher test results skews the teaching and learning in schools. The goals should be to improve teaching in order to improve learning.”

Finally, in my round-up of recent evidence, a report last week from Edge, the educational charity, also bemoaned the effects of teaching to the test. See this. The survey results themselves are available here.

My article for today’s Guardian, on teachers reporting that they choose exam boards on the basis of how ‘easy’ they are, is here.

- Warwick Mansell

posted on August 25th, 2009

Thursday, August 20th, 2009

I’ve just returned from the annual A-level results conference, where the heads of England’s three major exam boards present the yearly grade statistics.

For the last three years, much of this has been taken up with an elaborate and detailed attempt to take some of the heat out of the ritual dumbing-down debate.

And I have to say that, despite having a great deal of respect for those running the exam boards, I find this exercise in explaining away what are in some cases valid criticisms of the system a tad unconvincing.

In what has now become a well-established pattern at these early-morning August get-togethers in Westminster, Dr Mike Cresswell, director general of the largest board, AQA, takes centre stage. He then presents detailed charts to show that, while results have indeed improved steadily over recent years, different regions of the country, and different types of school, have improved at different rates.

So, for example, while the proportion of A grades in London has risen by more than seven percentage points since 2002, in the North East of England it has risen by less than four.

Similarly, the proportion of A grades in the independent sector has risen over the same period by nearly 12 percentage points. In state grammars, the rise was nine points; in state colleges, five; in comprehensives, also five; and in secondary moderns, two.

Dr Cresswell’s argument is that, because different types of school and different areas of the country have progressed at different rates, any crude or “naive” claims that improvements in results are a product of uniform dumbing down are dispelled: if that were true, the implication runs, all parts of the country and all types of school would have progressed at a similar rate.

While the figures unveiled in this analysis are thought-provoking in their own right, I can’t see that they disprove the argument that some underlying dumbing down (or slipping of standards, to use a less loaded term) may be going on.*

For, surely, the fact that different types of school and area of the country have varying rates of improvement does not negate the sense that some underlying trend is at play nationwide.

To use an analogy, if you looked at UK house price rises over the years leading up to the peak of the property market in 2007, you would undoubtedly see variations in the degree to which prices had risen in different parts of the country.

But this would not disprove the existence of background factors, apparent nationwide, that might help to explain part of that rise in prices in all areas of the country. One example could be the ready availability of mortgage credit, which tended to inflate the market nationally.

Similarly, the fact that global warming might lead to differing increases in temperature in different areas of the planet would not, surely, be taken by scientists to disprove the overall tendency that the earth is warming up.

Or, to take Dr Cresswell’s argument to its logical conclusion, the only way we could ever provide convincing evidence of dumbing down would be if we could show that all parts of the country, all types of school and, presumably, all categories of student had improved at a uniform rate in recent years.
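One way to see the weakness of this logic is with a toy simulation, which I sketch below in Python. Every number in it is invented purely for illustration; it is a sketch of the statistical point, not a model of the real exam system.

    # Toy simulation (all numbers invented): a single nationwide factor can
    # coexist with very different regional rates of improvement.
    import random

    random.seed(1)

    COMMON_UPLIFT = 0.5  # percentage points added everywhere, every year:
                         # the hypothetical nationwide "background" factor
    YEARS = 7            # say, 2002 to 2009

    for region in ["London", "North East", "South West", "Midlands"]:
        # Each region also has its own local factor, of varying strength.
        local_uplift = random.uniform(-0.3, 0.4)
        total_rise = YEARS * (COMMON_UPLIFT + local_uplift)
        print(f"{region}: A-grade share up {total_rise:.1f} points")

The printed rises differ markedly from region to region, yet every one of them contains the same half-point-a-year common factor. Observing unequal improvement therefore cannot, on its own, rule such a factor out – which is all the house-price and global-warming analogies above were meant to show.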

I have had little statistical education beyond A-level maths, and Dr Cresswell’s knowledge of both statistics and the exam system is vast. It may be that I have not grasped the full ramifications of this explanation. But it does come across to me as a bit of a smokescreen in what is quite a complex debate.

* (For the record, I don’t subscribe to any notion that exam boards are rushing around crudely lowering grade boundaries to cut standards. But I do think that some complex processes are at work which might tend to make it easier for a student who has mastered the subject to a given level to get a better grade now than they would have got a few years back).

- Warwick Mansell

posted on August 20th, 2009

Thursday, August 13th

Another defence of high-stakes testing from Conor Ryan, the former education adviser to David Blunkett and Tony Blair, in today’s Independent.

The article, though well-drafted as usual, is weak in several ways.

Apart from leading on the one percentage point drop in English results, which, as I pointed out last week, really means nothing, Mr Ryan offers an unconvincing explanation for why the test results are higher than they were in the 1990s, and for why the data have shown little further improvement in recent years.

He argues:

“Around 175,000 more youngsters still reach the expected level each year than did so 14 years ago, a result of the combination of accountability and pressure that has accompanied the tests, including Labour’s literacy and numeracy strategies.”

And: “The most important feature of the years between 1995 and 2000, when results rose rapidly, was single-minded momentum. Schools were in no doubt that their top priority was the 3Rs.”

Well, the most likely reason for the improvement is a combination of three factors: teachers becoming more familiar with the requirements of what were, in 1995, a completely new set of tests; the introduction of the numeracy strategy into maths lessons; and a slipping in test standards.

The first factor has been well documented following the introduction of tests around the world. The third was investigated comprehensively in research commissioned by the Qualifications and Curriculum Authority and belatedly published in 2003, which found a slipping of standards in the years 1995 to 2000, at least in English. Since standards were tightened, the results improvements in that subject have indeed flattened off. The same study found that there had not been a corresponding slipping of standards in maths. This, and other evidence, suggests to my mind that the gain in maths has been genuine. But the much more persuasive explanation for that gain is the introduction of the numeracy strategy itself (factor two), rather than the vague policy “momentum” suggested by Mr Ryan.

He also implies that schools are now in doubt that “their top priority [is] the 3Rs”. This is laughable. While it could be argued that some recent policy initiatives, such as the proposed introduction of the school report card, measuring wider aspects of school life than academic results, have not been focused exclusively on 3Rs standards, the reality of school life is that test results remain the be-all-and-end-all for school leaders in particular. In fact, the emphasis on them in the accountability system has undoubtedly increased between the end of Ryan’s time as a policy adviser at the Department for Education and Skills, in 2001, and now. As I reported in a TES article last year, Ofsted inspection judgements in recent years have been driven almost entirely by test results in those three subjects, and league tables and targets remain as influential as they have ever been. While Mr Ryan may bemoan a slight change of emphasis in policy discussions since the Brownites took over from his Blairite friends, the reality at primary school level has not altered much at all.

There are other elements of the article with which to quibble, not least the conclusion, which says that children would not be better taught without independently set and marked tests [of the current type]. One of the best arguments against this has been the move of private schools away from Sats tests in recent years. But I come back to a familiar objection: is it really the best use of pupils’ time in year 6 that they should spend months being drilled for a test which, despite the arguments in this latest piece, is not important in itself to their futures? Mr Ryan, and those making similar arguments, need to be more imaginative when it comes to thinking up alternatives.

- Warwick Mansell

posted on August 13th, 2009

Monday, August 10th

I woke this morning to quite a dispiriting discussion on the Today programme.

The topic was social mobility, and the presenter, Ed Stourton, was interviewing Lee Elliot Major of the Sutton Trust charity about a report the trust is about to publish. This will show, it was said, how independent school pupils with similar grades to those from the state sector are “far more likely to apply to leading research universities”.

So far, fair enough, and I should say at the start that I like Ed Stourton and think the Sutton Trust does good work in highlighting the issue of the dominance of leading universities and many of the professions by the privately educated.

But what depressed me about the interview was that the two of them seemed to arrive at a consensus that there was only one set of people responsible for this national scandal: teachers in state schools. This view was not challenged.

Dr Elliot Major stated that the trust’s research had found that some state teachers would not recommend their pupils for Oxbridge, even if they had good grades, while some schools viewed “gifted and talented” schemes, which aim to identify and nurture academic pupils, as elitist. Much of the fault, the implication ran, lay with the schools.

I have no grounds to quibble with the trust’s analysis of what goes on in at least some schools. Raising aspirations, if they need to be raised, is important. But it does seem to me that to try to pin all or even most of the “blame” on to teachers and schools for what is a very complex problem is effectively to trivialise it. The Today programme should have sought out alternative viewpoints.

Why is this complex? Well, I think that simply looking at varying application rates for Russell Group universities between different categories of pupils, and viewing this in itself as scandalous, is simplistic. It may also say just as much about the prejudices of those involved in this debate as it does about the policies and views of schools.

In my view, it is not necessarily always the case that, even for an academic child, the best choice in life at 18 will be to go to a Russell Group university. A child from a more privileged background who applies to university may do so not because this is best for them in the long run, but because their parents would not countenance anything other than university. Similarly, a teenager from a state school may have looked at the option of going to university, considered a potentially well-paid alternative career such as plumbing, and decided university was not for them. The two parties in this interview imply that the latter person is simply wrong, and has not been given the correct “guidance”. This is patronising, and implies a worldview in which university is always the better option.

The powerful forces of class and culture should also not be overlooked in this debate. Ed Stourton at one stage expressed surprise that anyone, in this day and age, could be put off university by thinking it “not for them”. Well, in turn I was astonished by this reaction. Despite the undoubted good work that Oxbridge and other selective universities do in trying to encourage applications from those who might not be pushed to apply by their parents, it is far from surprising to me that a child from a less well-off background might feel slightly intimidated by an Oxbridge quad. For a public school pupil, these surroundings might just be an extension of the school buildings with which many of them will be familiar. To understate hugely, this is not the case for someone from an inner-city estate. Breaking down the cultural barriers between the two is likely to be complex and difficult, and not just a job for teachers. Given that the culture of Oxbridge in particular is one with which many private school pupils will be more familiar than those from the state sector, it is not surprising that, on average, pupils from private schools are more likely to apply.

Above all, Oxbridge or other research universities will not be for everyone. A child from any background who rejects them should not be seen as automatically having done the wrong thing. It should be remembered that the Sutton Trust exists to promote more state school pupils going to such institutions. While this aim may be laudable in itself, it does suggest quite a one-eyed view of what might be seen to matter in the world, which could do with being questioned and challenged.

This might seem to be tangential to the subject matter of “education by numbers” and in a way it is. But I think there is a connection here. Once again, numbers are quoted with a force which would suggest an unarguable view of the unfairness of a particular situation. Yet this is only the case given some fairly powerful assumptions, which remain unspoken, such as the view that university is always best, for everyone with a choice. I loved university. But I wouldn’t want to suppose that everyone who rejects it has simply been badly advised.

- Warwick Mansell

posted on August 10th, 2009

Wednesday, August 5th

I’m not going to write a huge amount about yesterday’s national results for key stage 2 English, maths and science (well, actually, it seems I have), but one thought does leap out.

A great deal seems to be being made of a one percentage point drop in the results for English. The proportion of pupils reaching the “expected” level four or above edged down, from 81 to 80 per cent. This is the first time the headline English results have fallen since the introduction of the tests in 1995. It prompted, I am told, a front page story in London’s Evening Standard and a mention near the top of most other articles.

Michael Gove, Shadow Secretary for Children, Schools and Families, was quoted as saying: “We have seen a historic drop in English results, the brightest students are not being stretched, and the weakest are being failed the most. It is deeply worrying that English results are in decline.”

But it seems to me that it is foolish to try to draw any conclusion on the basis of a one percentage point fall from one year to the next in national test results. The data are simply not trustworthy enough to allow one to report confidently that this represents a fall in standards. This is not necessarily the fault of the process by which pass marks are set, which is as scientific as it probably can be. It just reflects the reality of, probably, any testing system.

An inquiry for the Government by Sir Jim Rose back in 1999 into the setting of the pass marks (or “level thresholds”, in the jargon) for national tests contained the following interesting insight. It said: “An enormous amount of technical and statistical expertise is brought to bear on designing the tests and making them consistent with the national curriculum standards expected of pupils at the end of each key stage, year-on-year. Nevertheless…there will always be a degree of subjectivity in what is done, for example, to agree the level thresholds, and in the judgements of markers when marking questions.”

Rose then highlighted how, in discussions about where to set the level threshold or pass mark needed to achieve level 3 in English in 1999, there was a disagreement of five marks between the mark at which marking experts believed it should be set, and that suggested by statistical analysis. Eventually, human judgement prevailed. At level four, there was a difference of one mark in the two suggested pass marks.

Rose concluded: “Where such small margins are involved, it becomes obvious that testing is not an exact science. The justification for choosing one ‘pass mark’ over another can be barely discernible.”

Yet precisely where the pass mark is set can have a large impact on the percentage of pupils achieving any given level. When I analysed data from 2005, I found that a move of two marks in the level four threshold would lead to around a three percentage point swing in the proportion reaching that benchmark.

A report on level-setting for KS3 science in 2007, conducted by the OCR exam board for the Qualifications and Curriculum Authority and quoted in a chapter by Paul Newton of Cambridge Assessment, in a book just published, is similarly interesting. It shows how the proportion of pupils gaining a particular level could have varied by up to five percentage points according to whether the pass mark was set at the lower or the higher end of a confidence interval suggested by statistical modelling.
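A rough sketch shows why a mark or two matters so much. The score distribution below is invented – I have no access to the real mark distributions – but the mechanism is general: pupils’ scores bunch near the threshold, so small movements in the cut-off sweep large numbers of them across it.

    # Rough sketch of pass-mark sensitivity (score distribution invented).
    import random

    random.seed(42)

    # Simulate 600,000 pupils' raw scores out of 100, roughly bell-shaped.
    scores = [min(100, max(0, round(random.gauss(58, 25))))
              for _ in range(600_000)]

    def pct_at_or_above(threshold):
        """Percentage of pupils reaching the level at a given cut-off."""
        return 100 * sum(s >= threshold for s in scores) / len(scores)

    for cut in (48, 49, 50, 51, 52):
        print(f"pass mark {cut}: {pct_at_or_above(cut):.1f}% reach the level")

On this invented distribution, each one-mark move in the cut-off shifts the headline figure by around one and a half percentage points, so a two-mark move produces roughly the three-point swing I found in the 2005 data, and a five-mark disagreement of the kind Rose described could move it further still.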

In other words, as Rose concluded, this is an imprecise science. Attempting to have a national debate around a rise or fall of one percentage point is highly unwise.

It may be that the one percentage point fall represents a drop in national standards. Or it may just be down to the unavoidable imprecision of the level-setting process. We cannot know for sure. And this need not prompt a flinging up of hands and a demand that testing become more accurate, in my view*, but a reconsideration of the weight being placed on test data, not least in informing any consideration of the effectiveness of the Government’s education policies.

And this point arises before one even gets into whether the tests themselves – painstakingly constructed though they undoubtedly are – represent good measures of the overall quality of our schools, or merely of the ability of primaries to cram pupils for what is, after all, just one narrow slice of what one might want from education: the ability to perform in a series of time-limited tests in three subjects.

And the fact that the figures overall represent, if taken at face value, a quite startling improvement since the mid-1990s, one which has been maintained in recent years, tends to be underplayed. If 80 per cent of pupils are now reaching the standard that used to be attained by the “average” child, is this truly a cause for national hand-wringing? Moreover, the achievement of level 3 or 4 has never marked the difference between being able to read and write or do maths and not being able to, whatever the reports, often encouraged by the Government, suggest. Achieving level 4 comes down to whether or not one has accumulated enough marks in a particular test on a particular day, with one mark sometimes making the difference between level 4 (ie the pupil “can read”) and level 3 (the implication being that they cannot). The cut-off is simply not robust enough to support the current high-stakes use of the indicators.

Overall, if this is how the Government is held accountable, it is a very superficial form of accountability, which does little either to enhance public understanding of what is going on in schools, or to promote genuine improvements.

* Although I would be interested to hear if another testing mechanism could be created which would allow one to be more certain that a one percentage point fall in the numbers achieving the threshold represented a genuine change in the underlying standards of performance.

- Warwick Mansell

posted on August 5th, 2009