One-size-fits-all A-level system is failing students and universities

More grade boundaries and tiered exams that challenge top students would be welcomed by universities, says Mary Curnock Cook

August 15, 2021

What happened in this and last year’s GCSE and A-level grading is not grade inflation. Grade inflation is what happens when exam boards have variable marking quality or consistency, or when they lower grade boundaries – there were no exams this year, and no exam boards marked papers.

Nevertheless, there has been a marked increase in higher grades.  Some (or even most) of this can be attributed to the fact that teachers were marking to grade or performance criteria supplied by the exam boards. That meant this year’s grades were entirely criterion-referenced – that is, students were assessed against a standard without reference to the performance of others being assessed.

The national exam series up until 2010 were mainly criterion-referenced, giving rise to accusations of “grade inflation”, not least from universities finding it difficult to select from among the many students holding high grades. From 2010 onwards, the exams regulator, Ofqual, deployed its “comparable outcomes” policy, which introduced some norm-referencing into exam grading – grading students partly by their performance relative to the rest of the cohort – to smooth the impacts of reforms to GCSEs and A levels and to ensure comparability across different exam boards and cohorts.
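
To make the distinction concrete, here is a minimal sketch, in Python, of the two approaches in their pure forms. The grade boundaries and percentile cut-offs are invented for illustration only – comparable outcomes in practice blends the two approaches rather than applying either in pure form.

```python
# Minimal sketch of criterion- v norm-referenced grading.
# Boundaries and cut-offs are invented for illustration; no real
# exam board's values are used.
from bisect import bisect_right

GRADES = ["E", "D", "C", "B", "A", "A*"]          # lowest to highest

def criterion_referenced(mark, boundaries=(40, 50, 60, 70, 80)):
    """Grade depends only on fixed mark thresholds, not on other candidates."""
    return GRADES[bisect_right(boundaries, mark)]

def norm_referenced(marks, cutoffs=(0.2, 0.4, 0.6, 0.8, 0.9)):
    """Grade depends on rank within the cohort: bottom 20% get E, ...,
    top 10% get A* (the cut-offs are purely illustrative)."""
    ranked = sorted(marks)
    n = len(ranked)
    def grade(mark):
        below = ranked.index(mark) / n            # share of cohort ranked below
        return GRADES[bisect_right(cutoffs, below)]
    return [grade(m) for m in marks]

cohort = [35, 48, 52, 61, 64, 72, 78, 81, 88, 93]
print([criterion_referenced(m) for m in cohort])  # against a fixed standard
print(norm_referenced(cohort))                    # relative to the cohort
```

On the same set of marks, the two functions can hand the same candidate different grades – which is exactly the tension that comparable outcomes was designed to manage.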

You can see the change of policy quite clearly on this chart from Mark Corver, my former colleague at Ucas and founder of DataHE:

This also plots 2020 and 2021 A-level grade points, showing quite convincingly that these closely match the pre-2010 trajectory. In other words, teachers have probably done quite a good job of matching students against the A-level criteria. 

The huge increase in average grades is the result of a combination of non-standard assessment and marking, some optimism bias compensating students for what might have been, and, above all, the absence of any statistical moderation of the grades. The problems of trying to apply such moderation artificially in 2020, via an algorithm run across a non-uniform process, are well documented.
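
For readers unfamiliar with the term, the sketch below shows the general idea of statistical moderation – keeping the teacher’s rank order of candidates while constraining the centre’s grade distribution to a reference (for example, historical) profile. This is emphatically not Ofqual’s 2020 algorithm, which was far more elaborate; the candidate names, grade shares and crude quota logic are invented for illustration.

```python
# Toy illustration of statistical moderation: keep the teacher's rank order
# but force the centre's grade profile to match a reference distribution.
# This is NOT Ofqual's 2020 algorithm (which was far more elaborate); the
# candidates, shares and crude quota rounding are invented for illustration.

GRADES = ["A*", "A", "B", "C", "D", "E"]          # highest to lowest

def moderate(submitted, reference_shares):
    """Re-award grades so the distribution matches reference_shares
    (fractions per grade, highest first), preserving rank order."""
    order = sorted(submitted, key=lambda name: GRADES.index(submitted[name]))
    n = len(order)
    moderated, i = {}, 0
    for grade, share in zip(GRADES, reference_shares):
        quota = round(share * n)                  # crude rounding, sketch only
        for name in order[i:i + quota]:
            moderated[name] = grade
        i += quota
    for name in order[i:]:                        # any remainder gets the lowest grade
        moderated[name] = GRADES[-1]
    return moderated

teacher_grades = {"Asha": "A*", "Ben": "A*", "Cai": "A", "Dee": "A",
                  "Eli": "B", "Fay": "B", "Gus": "C", "Hana": "C"}
historical_shares = (0.125, 0.25, 0.25, 0.25, 0.125, 0.0)   # invented history
print(moderate(teacher_grades, historical_shares))
```

Running this pulls several teacher-assessed grades down to fit the invented historical profile – which is, in crude form, why the absence of any such moderation in 2021 produced so marked a rise in top grades.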

The big question is: what happens next? The appetite for a return to the familiar single-day snapshot assessments that are exams as we know them is high – public trust and confidence in this method is strong, even as educators bemoan the impact on individuals who might perform badly on the big day and the exam-technique drilling that prepares students for the test.

While a return to exams in summer 2022 seems inevitable, it remains to be seen what smoothing effects will be deployed (and for how many cycles) to combat unfairness between cohorts and to acknowledge the learning lost during the pandemic. Reducing the examined content and increasing the visibility of items likely to be tested have been widely mooted. The regulator’s approach to comparable outcomes will be closely watched.

Others have trailed a possible switch to a number grade system for A levels, like the newish 1-9 grades used for GCSEs, but it seems unlikely that this change can be made for the 2022 series given that teachers will need to be predicting grades for their students when the Ucas service opens for university applications in a few weeks’ time.

I first wrote about using a numbering system for A-level grades in Times Higher Education back in 2012. “Is it time to move from A*-E grading to a number-based scale, say 1-10, with 10 the highest? As well as leaving the currency of the current grades intact, this has the advantage of creating a finer scale for selection for competitive higher education courses and a smaller “discount” for near-miss offers,” I wrote back then, anticipating the demographic downturn that would change the supply-demand balance in admissions over the following years.

Now, in the wake of the pandemic disruption, a change to the grading taxonomy would provide a reset for the currency of grades without the pain of reforming curriculum and assessment approaches. Ten grades are probably more than A levels need – GCSEs cover two qualification levels (Level 1 and Level 2), so they require the greater bandwidth – and more grades create more grade boundaries, around which knife-edge marking decisions can be life-changing for candidates.
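
A back-of-the-envelope simulation illustrates the knife-edge point: on the same mark scale, adding boundaries leaves more candidates within marking error of one of them. The 100-mark paper, equal-width bands and three-mark marking error below are assumptions for illustration, not taken from any real specification.

```python
# Back-of-the-envelope simulation of the knife-edge problem: more grade
# boundaries on the same mark scale means more candidates sit within marking
# error of a boundary. The 100-mark paper, equal-width bands and 3-mark
# marking error are assumptions for illustration only.
import random

random.seed(1)

def band(mark, n_grades, top=100):
    """Grade index for a mark, with n_grades equal-width bands on 0..top."""
    width = top / n_grades
    return min(int(mark // width), n_grades - 1)

def share_definitive(n_grades, marking_sd=3.0, trials=100_000, top=100):
    """Fraction of simulated candidates whose awarded grade (from a noisy
    mark) matches the grade their underlying mark would have earned."""
    correct = 0
    for _ in range(trials):
        true_mark = random.uniform(0, top)
        awarded = min(max(random.gauss(true_mark, marking_sd), 0), top)
        correct += band(true_mark, n_grades) == band(awarded, n_grades)
    return correct / trials

for n in (6, 10):
    print(f"{n} grades: {share_definitive(n):.1%} receive the definitive grade")
```

Run as written, the ten-grade scale awards a smaller share of candidates their “definitive” grade than the six-grade scale – it is the direction of the effect, not the exact percentages, that matters here.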

On the other hand, universities would welcome finer-grained distinctions, particularly at the top, when a period of demand outstripping supply (and possibly student number caps and minimum entry requirements) will change the selective admissions dynamic.

The more interesting opportunity that a numbered grade system would present is a potential move to single-level tests – in which a candidate could sit an exam for, say, A level Grade 8 mathematics, rather than a paper designed to accommodate a range of grade performances (which is, incidentally, particularly difficult for mathematics). 

This approach has been widely used and accepted in music exams. Thinking of assessment as a stepped and when-ready process across the whole 14-19 education phase would be another iteration of this approach, especially if digital tests were available.

I’ve never supported the scrap-GCSEs narrative, but it makes a lot of sense to allow students to take grade exams at the appropriate level at 16 if they need them for progression, or to continue their studies and take higher grades at 17 or 18 to support applications for higher-level study.

Stage-not-age-based education would, admittedly, cause massive disruption to the organisation of secondary education, but it might support a better, fairer and more motivating experience for students, who would be taught alongside others at the same level rather than in multilevel classes.

Single-level exams would offer better-quality questions and a more accurate assessment against the standard, at a time when even Ofqual admits that current exam marking is accurate only to within one grade either way. Universities might welcome this for admissions purposes.

Discuss, as they say.

Mary Curnock Cook is former chief executive of Ucas.


Reader's comments (1)

If I might take up the invitation to discuss, please... There is another problem to solve too. As Ofqual's then Chief Regulator, Dame Glenys Stacey, admitted to the Select Committee on 2 September 2020, exam grades are only "reliable to one grade either way". Or, expressed in rather different language, on average across all subjects, about 1 grade in every 4 is wrong (with significant variability by subject and by mark within subject), as explicitly shown here: https://rethinkingassessment.com/rethinking-blogs/just-how-reliable-are-exam-grades/.

Furthermore, introducing a greater number of grades necessarily implies a narrowing of the average grade width. This makes grades even more unreliable, as acknowledged on page 21 of Ofqual's November 2016 report, Marking Consistency Metrics: "Thus, the wider the grade boundary locations, the greater the probability of candidates receiving the definitive grade. This is a very important point: the design of an assessment might be as important as marking consistency in securing the ‘true’ grade for candidates." (https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/681625/Marking_consistency_metrics_-_November_2016.pdf)

There is no doubt that a 'reset' is required. But simply to change the grading structure to obscure comparisons between post-2022 and pre-2019 is missing an ideal opportunity – to change the policy by which a candidate's assessment is fairly determined from necessarily 'fuzzy' marks, so that the assessment is no longer "reliable to one grade either way" but "fully reliable and trustworthy, full stop". (https://www.sixthformcolleges.org/1412/blog-6/post/31/exam-grades-can-never-be-accurate-but-they-can-be-reliable)