Panels used to distribute research funds of about £2 billion a year in the UK are dominated by senior male academics from a limited number of pre-92 universities, a major study suggests.
Drawing on interviews with managers, impact assessors, main panel and subpanel members from the 2014 research excellence framework, academics at Goldsmiths, University of London found that many of those involved in the last sector-wide research audit had concerns about the diversity of assessment panels.
One interviewee quoted in the report explains how “very few of them would have been under 50 years old”, while another said that the lack of younger academics meant that REF panels were “arguably at one remove from some of the new developments” in their field, according to the report, titled The Challenges of Research Assessment: A Qualitative Analysis of REF2014.
The age profile of one panel meant that it “look[ed] far more to the past than the future”, which meant the REF “does the opposite of forwarding and rewarding new work”, says another interviewee.
“Concerns about the age of some subpanels were raised by interviewees, who felt reviewers might be one step removed from the cutting edge of a discipline,” Daniel Neyland, the report’s co-author, told Times Higher Education.
“Some interviewees also felt new universities were not adequately represented on panels, while the geographical spread of reviewers was also an issue,” said Professor Neyland, who undertook dozens of interviews with REF insiders alongside fellow Goldsmiths sociologist Sveta Milyaeva to produce more than 1,000 pages of interview transcripts.
The use of learned societies to recommend REF panellists – a practice that favours more traditional approaches to certain disciplines – also led to concerns that the age and gender profile of panels was “too narrow”, Professor Neyland added.
In the study, one panel chair describes how she felt “mildly intimidated” as the only woman on her subpanel. Another says that it was “difficult in [this subject] to get the female representation...because there are so few women”.
The Higher Education Funding Council for England, which oversaw the REF, has previously said that it had been “concerned about the representativeness of the membership” of panels. Efforts to recruit more members from under-represented groups in the second recruitment round had left Hefce “disappointed that little progress was made”, it explained in an equality and diversity report published in January 2015.
Other issues were also raised in the new 88-page study, which is the first major in-depth qualitative analysis of the 2014 REF.
Several academics spoke positively about the exercise, which they believed drove up the overall quality of research, distributed research funds on a fair basis and demonstrated the impact of research beyond universities.
Many recognised that the system was imperfect, but, as one interviewee says: “It is probably like the Churchill statement about democracy. It’s the worst system except for all the others.”
However, others were exasperated by the REF’s gargantuan scale, with one stating that the “whole exercise had grown to a size, and a complexity, and a cost that far outweighed its usefulness as an allocation device or as a ranking device”.
Some questioned whether the REF could really be described as a peer review exercise, given the volume of research that panellists are asked to consider. One interviewee says that he managed to read nearly 900 papers in a week, while another read about 800, although many say that their work was spread over several months.
“The REF isn’t peer review…they’re judgements that are skimming the surface of work that’s taken years to produce,” one interviewee says.
Another criticised the “spurious false precision” of the results, in which institutions were often separated in league tables by “two or three decimal points”. Results should be presented in clusters rather than a numerical ranking, he adds.
The report also highlights some of the “disturbing” game-playing used by institutions, such as employing senior academics on short-term fractional contracts to improve performance. Following the recommendations of an independent review led by Lord Stern, all researchers are to be submitted to the next exercise, in 2021, in part to reduce such game-playing.
“A number of people have a one-day-a-week contract at five institutions, so they can be scored in all of them,” recalls one interviewee, although another claims that panels “kicked back at the groups that did that”.
“We…more or less told them that they were playing the system and that we weren’t stupid,” he adds. “It didn’t seem to happen so much this time.”