The most radical idea being suggested to Lord Stern regarding his review of the research excellence framework appears to be that institutions should be required to enter all their academics. The suggestion, endorsed by the University of Cambridge, is intended to prevent the “gaming” around submission numbers that many consider to be the REF’s biggest flaw.
But, in my view, the review needs to address a bigger problem. The REF, in common with other global systems of academic evaluation, rewards past performance on the assumption that it predicts future success. This approach is so deeply rooted in academic culture that many believe no alternative is possible. However, rewarding performance is problematic, regardless of whether evaluations emphasise quantity, quality or some form of academic impact, such as citations.
An example of a country that emphasises quantity is Norway. Total resources for research in the country increased between 2004 and 2012, with allocation partly based on performance criteria. Publication output has duly grown – but quality and impact have stagnated. Meanwhile, if the concerns about gaming in the REF are justified, the UK’s widely trumpeted quality improvements since it was introduced may be equally illusory.
In short, the evidence indicates that rewarding performance does not necessarily lead to better science: it merely provides institutions with incentives to artificially inflate their metrics. We need a new model of research evaluation that rewards potential instead. Consider three scenarios, in which A and B are two research units competing for funding:
(1) A and B currently produce research of similar quality. A produces relatively little, but its quality is improving every year. B produces significantly more than A, but quality is decreasing.
(2) B scores marginally better than A (on whatever output measure), but uses significantly more resources than A.
(3) A and B perform similarly on all measures on average, but A’s performance has improved over the assessment period while B’s has worsened.
The current system would reward B’s declining performance in (1), B’s inefficient use of resources in (2), and would allocate both units the same amount in (3). This is because allocations do not trace how quantity, quality, impact and expenditure change over time. Tracking those trajectories would give a much clearer picture of likely future performance, and would therefore be a much more sensible mechanism by which to distribute future research funding. In all three cases above, A’s potential for further growth would be rewarded.
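The trend-based allocation sketched above can be illustrated with a toy scoring rule that blends a unit’s average performance with its trajectory, so that an improving unit outranks a declining one with the same average. Everything here is an illustrative assumption: the function names, the weighting, and the example figures are invented for demonstration and are not part of the REF or of any proposed system.

```python
# Illustrative only: a toy "potential" score, not any official methodology.

def trend(values):
    """Average year-on-year change over the assessment period."""
    deltas = [later - earlier for earlier, later in zip(values, values[1:])]
    return sum(deltas) / len(deltas)

def potential_score(quality_by_year, trend_weight=0.5):
    """Blend mean quality with its trend; the weight is arbitrary."""
    mean_quality = sum(quality_by_year) / len(quality_by_year)
    return mean_quality + trend_weight * trend(quality_by_year)

# Scenario (3): identical averages, opposite trajectories.
unit_a = [2.0, 2.5, 3.0, 3.5]  # improving every year
unit_b = [3.5, 3.0, 2.5, 2.0]  # declining every year

assert potential_score(unit_a) > potential_score(unit_b)
```

Under a purely average-based allocation the two units would tie; adding the trend term is what separates them, which is the core of the argument for rewarding potential rather than past performance alone.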
By divesting from declining units, the new system would allocate a larger share of resources to those with the potential to grow and produce high-quality research, thus promoting the development of novel research programmes. The new system might decide not to fund units that have reached a “productivity ceiling” if it was clear that further investments would yield, at best, diminishing returns. Units that continually produce at “ceiling levels” might still be worth supporting if their results were qualitatively superior to those of their competitors, but not if quality or quantity were decreasing significantly.
The new system would not be immune from gaming, but it would be less prone to it. Manufacturing the signs of potential is intrinsically more difficult than inflating output. Would institutions withhold high-quality publications until towards the end of an assessment cycle, to signal productivity growth? There would be strong deterrents to doing that, especially in the sciences: scholars would never accept the risk of being scooped by others. Citation impact would not only be delayed but, in effect, lost, because most citations would go to the first publication to appear. Withholding would therefore be a self-defeating strategy for institutions too.
It is very hard to say what the effect of such a radical new system would be on the existing distribution of research funding. For this reason, the current winners would surely lobby hard against it. But for any country truly serious about driving ongoing improvements in research regardless of the age and prestige of the institution in which it is carried out, my idea may just be worth considering.
Giosuè Baggio is associate professor in the department of language and literature at the Norwegian University of Science and Technology in Trondheim.