A metrics-based “mid-term review” could allow the time interval between research excellence framework (REF) assessments to be extended to once a decade, the chair of a new independent review of the use of metrics in research assessment has suggested.
James Wilsdon, professor of science and democracy at the University of Sussex, told Times Higher Education that such a suggestion went slightly beyond the terms of reference of the review, which was launched on 3 April by the Higher Education Funding Council for England.
“But inevitably these are the sorts of questions you start to ask once you have properly interrogated the metrics.
“Given that – as everyone accepts – [submitting to] the REF is a fairly time-consuming exercise, is there a way of using metrics to do a lighter-touch mid-term review, and then do a full-blown panel-based review every 10 years instead of every six?” he asked.
In 2008 and 2009, Hefce ran a pilot exercise on metrics-based research assessment, which Gordon Brown, then prime minister, favoured. It concluded that metrics were “not sufficiently robust to be used … as a primary indicator of quality”.
However, David Willetts, the universities and science minister, was set to announce at a Universities UK conference on universities’ contribution to economic growth that he had asked the funding council to revisit the issue.
Professor Wilsdon said the review would consider more than just the REF and would not produce specific recommendations for the next exercise – likely to be in 2020 – because it would be “premature” to do so before Hefce’s formal evaluations of the 2014 exercise.
But he aimed to make some “fairly intensive efforts” to learn from the experience of the 2014 panels, some of which are using metrics to inform their judgements, following the conclusion of their work in December. He hoped that publication of the review in spring 2015 would be early enough for it to inform the shape of the next REF.
He said he and his “first-rate” steering committee would approach the issues with an open mind and consult widely.
One of the most significant developments in recent years was the rise of altmetrics, which measure factors such as the number of mentions a paper receives on social media. He was “well aware” of the strong criticisms they had attracted in some quarters. But he was open to the idea that altmetrics could be used to supplement or even replace the case studies used by the 2014 REF to assess the wider impact of research, since they potentially enhanced the “ability to track and measure impact in real time”.
The review will also examine the use of metrics by universities. A number of institutions have been criticised recently for relying heavily on metrics to manage performance or to select people to submit to the REF.
Professor Wilsdon hoped to “come up with practical, useful guidelines about what are appropriate and inappropriate uses of metrics in an institutional context”.
“The ideal is that you find a healthy and sensible balance of metrics- and non-metrics-based systems that can be used in a much more real-time way to ensure you don’t have an 18-month lead-up to the REF that is very time-consuming and burdensome on academic culture and practice,” he said.