Measures of student “learning gain” are too context-specific to be used to compare university performance on a national level, researchers have concluded.
The now-defunct Higher Education Funding Council for England spent £4 million on 13 pilots aimed at finding ways of tracking the improvement in skills and competencies made by students during their degrees, with one eye on creating a new method of rating universities’ teaching standards.
When the pilots involving 70 providers were announced in 2015, Jo Johnson, who was then the universities minister, said that they would “help assess teaching quality and excellence”, while Madeleine Atkins, Hefce’s chief executive, said that the projects had “the potential to support measurement and indicators at institutional and even national level”.
However, a conference organised by one of the pilot groups, led by the University of Warwick and involving 17 other Russell Group institutions, heard that its experiments had found that learning gain was not applicable nationally – for example, as a metric in the teaching excellence framework.
The finding comes shortly after Hefce’s successor organisation, the Office for Students, scrapped a separate pilot that examined whether standardised tests could be used to measure learning gain in England. That project had struggled to recruit enough students to take the exams.
Sonia Ilie, a senior research fellow at the University of Cambridge’s Faculty of Education, told the conference convened by the Learning and Employability Gain Assessment Community (Legacy) that early analysis of a tool being developed at Cambridge demonstrated that it was able to discern changes in students’ skills, abilities and competences.
The Cambridge exercise uses a mix of survey and test questions to measure students’ cognitive, affective, metacognitive and socio-communicative development.
Dr Ilie said that her team had concluded that although the tool could be used effectively within an institution to provide evidence for the pedagogical effectiveness of different courses, it could not be used for institutional-level comparisons.
“Outcomes show variation across time and between students,” she said. “The gains are not linear, they ebb and flow, and are subject to a whole range of contextual differentiation factors.”
Many of the learning gain tools presented at the conference involved self-reporting, making context even more important, researchers explained.
Jan Vermunt, professor of education at Cambridge, said that the point of measuring learning gain was “not to compare universities – I don’t see why we’d want to do that – but to improve teaching”.
Eluned Jones, director of student employability at the University of Birmingham, added that learning gain was “a useful instrument for institutions to use individually, but it is not useful on a mass scale”.
But former universities minister Lord Willetts, who also spoke at the conference, urged academics to consider using their metrics in a national context “however crude they feel the measure may be, because otherwise others will define it for them”.
Lord Willetts pointed out that policymakers were already looking at ways to measure learning gain across institutions and said that it would be better if researchers set the agenda themselves.
“Academics working on learning gain should publish institutional results, however imperfect, and explain their context,” he said. “The UK is currently measuring university impact [in the TEF] by what students are earning six months after graduation, and that is a terrible idea.”
Any measures “we use on learning gain just have to be better than the current metrics”, Lord Willetts said.