UK universities must ensure that their policies on borderline scores do not in effect lower the thresholds for degree classifications, sector bodies say.
In a new report, Universities UK and GuildHE call for more transparency around degree algorithms – the set of rules that institutions follow to determine a student’s final degree classification.
The study, which is based on a survey of 120 institutions across the UK, found that the design of such algorithms has “moved toward a more transparent rules-based approach with a reduction in the discretion of examination boards”.
Of the 112 respondents who answered a question on how their institution made provisions for cases where students achieved scores that were on the borderline between two degree classifications, 47 said that they followed an automatic approach based on an algorithm and just 27 said that they relied on the discretion of an academic board.
But the report, Understanding Degree Algorithms, issues a note of caution around the design of such algorithms for borderline cases – for example, where a student averages 69 per cent, just short of a first and at the top end of a 2:1.
“Particular care should be taken that the design of rules on borderline cases do not have the inadvertent effect of lowering the effective threshold for a degree classification across the student population,” it says.
While the report says that a consistent approach is important, it adds that there “would be a risk to the confidence of sector stakeholders if an institution were simply to upgrade all students who fall into a borderline or classification boundary”.
“In effect, this practice would introduce a different set of final degree classification boundaries and undermine both conventional practice and confidence in sector standards,” the report says. “Such practice, if it exists, is not acceptable.”
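The report's warning is essentially arithmetical: a rule that automatically upgrades every student in a borderline band is indistinguishable from simply moving the boundary down. A minimal sketch makes the point, using entirely hypothetical numbers (a 70 per cent first-class boundary and a two-point borderline band are illustrative only, not any institution's actual rules):

```python
# Illustrative sketch only: the boundaries and the borderline band
# are hypothetical, not taken from any institution's regulations.

FIRST_BOUNDARY = 70.0
BORDERLINE_BAND = 2.0  # blanket-upgrade anyone within 2 points of a first

def classify_with_blanket_upgrade(average: float) -> str:
    """Blanket upgrade: every borderline student is awarded a first."""
    if average >= FIRST_BOUNDARY - BORDERLINE_BAND:
        return "First"
    return "2:1" if average >= 60.0 else "Lower"

def classify_with_lower_boundary(average: float) -> str:
    """The same outcomes, written as a plain boundary of 68.0."""
    if average >= 68.0:
        return "First"
    return "2:1" if average >= 60.0 else "Lower"

# The two rules agree at every mark, which is the report's point:
# a blanket upgrade of the borderline band just redraws the boundary.
for mark in range(0, 101):
    assert classify_with_blanket_upgrade(mark) == classify_with_lower_boundary(mark)
```

This is why the report stresses discretion or additional criteria (rather than automatic upgrades) for borderline cases: the borderline rule should decide individual cases, not silently redefine the classification thresholds for everyone.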
It adds that where institutions discount lower grades, particularly in the initial classification and for borderline cases, upper marks should also be discounted.
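The logic behind that recommendation is that dropping only a student's worst marks inflates the average, while trimming both ends keeps the measure balanced. A minimal sketch of symmetric discounting, assuming a simple unweighted module average (real degree algorithms typically weight by credit and level):

```python
# Illustrative sketch, assuming an unweighted average of module marks;
# actual algorithms usually weight marks by credit value and study level.

def symmetric_discount_average(marks: list[float], n_discard: int) -> float:
    """Drop the n lowest AND the n highest marks before averaging,
    mirroring the report's recommendation that upper marks be
    discounted wherever lower marks are."""
    if len(marks) <= 2 * n_discard:
        raise ValueError("too few marks to discount symmetrically")
    trimmed = sorted(marks)[n_discard : len(marks) - n_discard]
    return sum(trimmed) / len(trimmed)

marks = [52, 58, 61, 64, 66, 68, 71, 74]
# Discounting only the lowest mark would pull the average upwards;
# trimming one mark from each end keeps the calculation even-handed.
print(round(symmetric_discount_average(marks, 1), 1))  # → 64.7
```

Statisticians would recognise this as a trimmed mean; the asymmetric version, dropping only low marks, is the practice the report cautions against.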
The report argues that more transparency around the design of degree algorithms would “aid confidence” in the approach adopted by institutions and would “deter poor practice that might undermine confidence in standards”.
But it challenges the notion that universities adjust their degree algorithms to ensure that their students fare better than learners at competitor institutions.
Of the 98 respondents who answered a question on why their institution made changes to their degree algorithm, 27 cited standardisation within their university and 22 cited pedagogical grounds.
Only 14 indicated that they had made adjustments to align practice with competitors or with the sector more widely; when questioned further, these respondents said that they looked to other institutions with a similar profile of students as a means of “refreshing regulations in line with best practice, and to remove inappropriate barriers to student success”.
These reasons suggest that the “motivations for alignment are more benign than previous coverage of this issue has suggested”, according to the study.
A 2015 report from the Higher Education Academy found that almost half of UK universities changed how they calculated their degree classification to ensure that students did not get lower grades on average than those at rival institutions.
The UUK/GuildHE study also found that the adoption of grade-point average (GPA) has been slow and that there was “little appetite” for future uptake. Three-quarters of respondents (77 of 106) said that their institution was not planning to introduce GPA.