Degree algorithms create ‘artificial differences’ in final grades

Different rules used to calculate honours classifications could leave one university with twice the proportion of first-class degree holders of another institution

January 25, 2018

Students who achieve the same set of marks are being awarded widely divergent final degree scores, owing to the use of different algorithms by UK universities, an analysis reveals.

The use of varying degree algorithms – the set of rules used to translate module outcomes into a final degree classification – means that one institution could have twice the proportion of first-class degree holders of another university with an identical set of student grades, according to a working paper.

The study by David Allen, associate head of department (programmes) at the University of the West of England’s Bristol Business School, draws on a recent report from Universities UK and GuildHE, which surveyed 120 UK institutions (113 of which hold degree-awarding powers) on their use of degree algorithms.

Mr Allen also collected data on the real module marks given to three cohorts of students in a medium-sized degree course at a large English university over the course of their three-year degrees.

In the paper, “Degree algorithms, grade inflation and equity: the UK higher education sector”, he combines the two datasets to investigate the impact that the choice of algorithm would have on the students’ final degree scores, and ultimately degree classifications.

The results show that the proportion of students in the sample who would receive a first-class degree ranged from 16 to 32 per cent, depending on which of six degree algorithms was used (based on five existing algorithms and one grade point average calculation that counts marks from all years equally).

The proportion of students in the sample who received a first or 2:1 ranged from 68 to 82 per cent.


Distribution of degree classifications using different algorithms, 2015-17 aggregated

Source: David Allen

A separate calculation found that one individual’s final mark ranged from a high 2:1 (66.7 per cent) to a low first (70.7 per cent) across nine different algorithms.

The variation comes from how the average mark of each year of study is weighted and, in particular, whether some module marks are discounted or removed from the calculation.
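As a rough illustration of the weighting effect, the following sketch applies three of the year-weighting schemes reported in the UUK/GuildHE survey to the same set of marks. The year averages are hypothetical, not figures from the study:

```python
# A minimal sketch of how differential year weightings alone can move a
# final degree mark. The marks are illustrative, not data from the study.

# Hypothetical year averages (per cent) for one student.
year_averages = {1: 58.0, 2: 64.0, 3: 72.0}

# Three weighting schemes of the kind reported in the UUK/GuildHE survey
# (year 1 / year 2 / year 3), expressed as fractions.
schemes = {
    "0/25/75": (0.00, 0.25, 0.75),
    "0/33/67": (0.00, 0.33, 0.67),
    "0/50/50": (0.00, 0.50, 0.50),
}

def final_mark(averages, weights):
    """Weighted mean of the three year averages."""
    return sum(w * averages[year] for year, w in zip((1, 2, 3), weights))

for name, weights in schemes.items():
    print(f"{name}: {final_mark(year_averages, weights):.1f}")
```

With a 0/25/75 weighting this student reaches exactly 70.0 per cent, the conventional first-class boundary; with 0/50/50 the identical marks produce 68.0 per cent, a 2:1.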

“The simulation carried out here shows that the varying use of differential weighting and the discounting of module marks creates artificial differences in the degree outcomes between universities,” the study says.
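The discounting the study describes can be sketched as dropping a student's weakest module marks before averaging. The module marks below are hypothetical:

```python
# A minimal sketch of the "discounting" rule: some algorithms remove a
# student's lowest module marks before averaging. Marks are illustrative.

marks = [52, 58, 61, 63, 65, 68, 70, 74]  # hypothetical final-year module marks

def average(ms):
    """Plain mean of all module marks."""
    return sum(ms) / len(ms)

def average_discounting(ms, n_dropped):
    """Mean after removing the n lowest marks."""
    return average(sorted(ms)[n_dropped:])

print(f"all modules counted:      {average(marks):.1f}")
print(f"lowest module discounted: {average_discounting(marks, 1):.1f}")
print(f"two lowest discounted:    {average_discounting(marks, 2):.1f}")
```

Each discounted module lifts the average, so two institutions applying different discounting rules to identical marks would report different final scores.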

The results also appear to refute the claim made in the UUK/GuildHE study that the use of more than one algorithm across departments within one university “appears to have a limited impact on the overall profile of awards made”.


Differential weightings at UK universities (%)

Year 1 | Year 2 | Year 3 | Number of universities | Example universities
0    | 0    | 100  | 8  |
0    | 20   | 80   | 6  | University of Derby
0    | 25   | 75   | 18 | University of Birmingham; University of Hertfordshire; University of the West of England
0    | 30   | 70   | 13 |
0    | 33   | 67   | 19 | University of Manchester; University of Nottingham
0    | 40   | 60   | 19 | University of Kent; University of East Anglia
0    | 50   | 50   | 8  | Oxford Brookes University
10   | 30   | 60   | 4  |
11.1 | 33.3 | 55.6 | 2  |
11.1 | 44.4 | 44.4 | 1  |

Source: UUK/GuildHE survey and David Allen


The findings will add weight to fears about grade inflation at UK institutions and heighten concerns about whether some universities are changing algorithms in order to improve degree outcomes.

A 2015 report from the Higher Education Academy found that almost half of UK universities changed how they calculated their degree classification to ensure that students did not get lower grades on average than those at rival institutions.

However, the UUK/GuildHE survey suggested that the motivations for changes were “more benign” than the HEA suggested, with respondents claiming that they were a means of “refreshing regulations in line with best practice, and to remove inappropriate barriers to student success”.

Mr Allen told Times Higher Education that he was concerned by the lack of equity across the university sector and he suggested that all universities adopt the same algorithm when classifying degree outcomes.

The use of different algorithms is “affecting the life chances of some of these students, which is the real big issue”, he said.

The findings could also affect student choice if universities' varying use of algorithms were made public, he said.

“If Oxford Brookes’ students [for example] realised that somehow or [other] they’re not benefiting from some of the rules offered in other universities, they’re going to be mightily pissed off,” he said.

ellie.bothwell@timeshighereducation.com


Reader's comments (8)

How interesting that David Allen's research mirrors so closely the Student Assessment and Classifications Working Group (SACWG)'s findings in their 1997 work, published in The New Academic as "Honours Classifications: The Need for Transparency". That study processed the confirmed marks from one institution through the classification algorithms of five other universities and concluded that 'in 15% of the cases the student might have been given a different class of degree if his/her results had been obtained elsewhere'. The Northern Universities Consortium (NUCCAT) and SACWG's recent report on progression regulations, "Winning the progression lottery owes more to luck than academic judgement", illustrates that the regulatory disparities David Allen and the UUK/GuildHE survey point to extend to the rules governing progression from Level 4 to Level 5. Students might - rightly in my view - ask how the differences can be justified. Harvey Woolf
Thank you for drawing my attention to this early work. Hopefully this time around (20 years later) something will be done about the HE sector's current arrangements ... which as you say are hard to justify (or defend).
University of Birmingham is 25/75, not 20/80 as indicated above.
Thank you for the correction; I shall make a change - David Allen
Clearly I selected the wrong set of weightings for Birmingham (https://www.birmingham.ac.uk/Documents/university/legal/regulations-part7.pdf, pages 14-15): marks from the stages of a programme shall contribute to the classification of the degree in the following proportions. Programmes with modules at levels C to H: Stage 1 - 0, Stage 2 - 25%, Stage 3 - 75%. Programmes with modules at levels C to M: Stage 1 - 0, Stage 2 - 20%, Stage 3 - 80%.
Policies affect student (and maybe also staff) behaviour. Comparing the same set of results across different policy regimes may be misleading. Outcomes are endogenous to policy.
Interesting article. I remember being told by the Dean of Students at UCL in the 1980s that 'racks and chains' wouldn't make them divulge their methods of calculation for awarding degree classifications! I hope things have improved there now...
Oxford Brookes students have every right to be pissed off. The university management has gone to the dogs in the last 3 or 4 years and the league table position has plummeted so their degrees are already worth less as a result of absolutely shocking senior management.
