All congratulations to the research excellence framework winners, of course, but the cost (human and financial) of the whole exercise has been questioned by many.
The article “The (predicted) results for the 2014 REF are in” (News, November) showed how closely h-indices matched the results, yet many continue to argue that the peer assessment element of the REF remains crucial. I believe this to be false; moreover, I think there are other reliable automated methods that could be brought to bear.
I well remember, as a past research assessment computer science panel member, being faced with more than a thousand research outputs to read in a few months, many of them incomprehensible to me because of the extraordinary range of work included (some were really about physics or genetics). I came to the conclusion that, despite the panel's valiant efforts to create a defensible methodology of assessment, this brute fact of volume and diversity made the task impossible: whatever they thought they were doing, panel members, when assigning marks, must have fallen back on preconceived intuitions of reputation, personal and institutional. Even if I am right, that does not make the REF worthless, but it does strengthen the case for cheaper automated methods.
In the US, Thomas Landauer's technique for the automated marking of student essays has become a widely used commercial product that produces marks almost indistinguishable from those awarded by human markers. In addition, my own field of natural language processing by computer now offers a variety of reliable techniques for assessing a given document's similarity to, or difference from, a human-graded exemplar.
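For readers curious what such a technique looks like in miniature: the simplest version scores an unseen document by the cosine similarity between its word-frequency vector and that of a human-graded exemplar. The sketch below is purely illustrative (real systems, such as those built on Landauer's latent semantic analysis, use far richer representations than raw word counts), and the example texts are invented.

```python
import math
from collections import Counter

def cosine_similarity(doc_a: str, doc_b: str) -> float:
    """Cosine similarity between the term-frequency vectors of two documents."""
    a = Counter(doc_a.lower().split())
    b = Counter(doc_b.lower().split())
    dot = sum(a[term] * b[term] for term in a)  # Counter returns 0 for absent terms
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Hypothetical exemplar and candidate texts, for illustration only.
exemplar = "the model explains the data with a clear statistical method"
candidate = "a clear statistical method explains the data"
print(round(cosine_similarity(exemplar, candidate), 3))
```

A score near 1 indicates close lexical similarity to the exemplar; a score near 0 indicates little overlap. Grading a submission then reduces to comparing it against exemplars at each grade band.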
Some 15 years ago, a University of Sheffield colleague and I published a satirical piece in the newsletter of a learned society offering to repeat the nationwide research assessment using these methods, not for millions of pounds but for a fiver. It was intended as a joke; it now looks merely prescient.
Emeritus professor of artificial intelligence
University of Sheffield
A project, commissioned by the Higher Education Funding Council for England, is exploring “learning gain”: ways of measuring how far individual students have come on their learning journeys. For some, huge gains have been achieved even if they end up with what for others may seem to be mediocre final grades.
Why can’t this be applied to research output? What I find utterly deplorable in the REF results for 2014 is the fixation on ranking as a proxy for quality when it comes to institutional performance. What the tables published by Times Higher Education mask is the incredible “research gain” that many institutions have made in the past six years. Where research is a relatively new venture and research cultures are emerging tentatively, relatively low positions on the REF ranking are unhelpful. Perhaps “research gain” should be included in 2020.
Senior researcher, Social Research and Evaluation Unit, Faculty of Education, Law and Social Sciences
Birmingham City University
There is a great deal to be said about REF outcomes (“Check your coordinates”, Features, 18/25 December). One point may be overlooked because it is so obvious. The institutions that did well have all invested heavily in high-quality staff. Those staff were recruited from all over the world and (in my own subpanel area) particularly from across Europe and East Asia. Our universities are one of our most successful industries. An open immigration policy is essential to the future excellence of UK research.
Chair, social work and social policy subpanel REF 2014 and research assessment exercise 2008
University of Kent
In the past few days there has been a frenzy of REF-bashing, and no doubt it will continue. I fully encourage it, for I am taking notes of the various salient points to serve as ammunition in the inevitable discussion about “career and progress” that I, as an academic, will have with my line managers. However, in the interest of balance, and to provide me with further means to argue, is there anybody out there who can, with a straight face and in plain English, defend the REF, citing and testifying to any positives that it has brought to the academic community (as opposed to the government)?
School of Clinical and Experimental Medicine
University of Birmingham