All research-active academics should be entered for the next research excellence framework, and the work of academics who have moved should be claimed by the institution where it was carried out, a government-commissioned review of the exercise has recommended.
The Independent Review of the Research Excellence Framework, chaired by Lord Stern, president of the British Academy, says that requiring everyone to be submitted will eliminate the “stigma” associated with non-submission, and will support “the accurate description of university research”.
The report, Building on Success and Learning from Experience, published on 28 July, says that the number of outputs submitted by each unit of assessment should be a function of its staff numbers.
However, the precise number should be calculated to ensure that the burden on assessment panels does not increase significantly, and could be less than two outputs per researcher.
The number of outputs that could be submitted by individual researchers should vary, perhaps between zero and six, the report suggests. This will “ensure that academics with a limited publication record are not required to have a full set of outputs; it will reduce the burden of demonstrating individuals’ special circumstances, and promote inter-sector mobility”, the report says.
“Reducing the focus on individual members of staff and instead painting a picture of the submitting unit as a whole will reduce the current consequences for morale of non-submission. It could encourage cohesiveness and productivity within the submitting unit,” the report adds.
The report also makes the important recommendation that each output should be submitted by the institution at which it was “demonstrably generated”, even if the individual has since moved on, potentially taking the heat out of the traditional pre-REF “transfer market”.
“Disincentivising short-term and narrowly-motivated movement across the sector, whilst still incentivising long-term investment in people will benefit UK research and should also encourage greater collaboration across the system,” the report says.
It expresses concern that “the mechanistic linkages made in REF2014 between specific outputs and eventual (often very specific) impact unduly restricted the ability of institutions to submit examples of where an individual or group’s research and expertise had led to impact, but where that impact could not sensibly be traced back specifically to particular research outputs”.
For this reason, it recommends that the definition of impact be “broadened and deepened” to include impacts on “public engagement and understanding”, on “cultural life” and – so as to better align the REF with the teaching excellence framework (TEF) – on curricula and pedagogy.
The report also recommends that “options are explored for linking case studies to research activity and a body of work, as well as to a broad range of research outputs”, rather than specific papers or monographs.
In addition, universities should have more flexibility over how many impact case studies to submit from each unit of assessment, the report says.
This will allow institutions to “demonstrate their strengths more effectively”. It also says that all institutions should be required to submit some “institutional-level” impact case studies “which arise from multi- and inter-disciplinary and collaborative work”, to be assessed by a special panel.
There should also be an institutional-level environment statement, aimed at providing “a more holistic view of the [higher education institution], allowing the REF to capture institution-wide strategic objectives and cross-cutting structures and initiatives”.
“It would facilitate a wider assessment of the institution’s contribution to the delivery on key agendas such as asset sharing and collaboration,” the report says.
The report adds that the weighting for outputs in the REF should stay at 65 per cent, and impact should continue to account for “not…less than 20 per cent”.
It also recommends that assessment continue to be done primarily on the basis of peer review, but “metrics should be provided to support panel members in their assessment, and panels should be transparent about their use”.