If universities want top research, they must allow people to fail

The new REF rules allow greater scope to aim for the (four) stars. But who will embrace the risk of crashing to Earth, asks Matthew Flinders

July 6, 2023
[Image: Garmin-Sharp team rider Andrew Talansky of the US crashes during the Tour de France cycling race. Source: Reuters]

“I’ve published over 200 peer-reviewed research articles,” a candidate recently announced to a professorial appointment panel I was sitting on, “and I have never had a manuscript rejected.”

This boast was undoubtedly designed to appeal to the deans and pro vice-chancellors alongside me – who would no doubt be handed responsibility for their UK university’s REF submission in due course.

But I was less impressed. “Then you’re clearly not trying hard enough,” I replied, the words slipping out of my mouth at a volume that surprised me as much as everyone else.

As the external member of the panel, I was to some extent invited to bring an independent perspective, but there was something about this claim to publishing power that particularly rankled.

As I was invited to explain to the crestfallen candidate, the higher you pitch your intellectual ambitions, the higher your chance of falling on your face. Publishing hundreds of papers without ever being rejected reflects a willingness to play it safe and work within a conventional idiom.

I was reminded of this incident as I read through the “high level” changes recently announced to the next Research Excellence Framework, due in 2028. The Future Research Assessment Programme report promises a welcome shift of focus from individuals and their outputs to departmental cultures, outcomes and contributions to disciplines and society, with expanded definitions of research excellence and research impact. What’s interesting, however, is that the main topic of the REF-related conversations I hear as I visit universities up and down the UK has shifted back from impact case studies (an object of terror when first required in 2014) to individual outputs: specifically, the production of 4* (“world-leading”) publications.

The issue with this is the same one that I raised in that professorial interview: that the search for genuinely 4* outputs demands a willingness to risk being graded 1*.

In the social sciences, for instance, 4* publications usually take one of two forms. The first is the presentation of major new datasets or large research investments that offer a substantive weight of new knowledge. But projects such as the British Election Study or the British Social Attitudes Survey involve relatively few academics.

For the majority, the 4* opportunity lies with the destruction and recalibration of those “dangerous self-evident truths” that the academy too often accepts, many of which ultimately turn out to be wrong. These papers are likely to become the intellectual reference points around which subsequent studies and theories orientate themselves, but their path to publication can often be fraught and their reception is uncertain. They will be reviewed and assessed almost inevitably by people whose career has been built on perpetuating the orthodoxy.

Witness the barriers experienced by the “positive public administration” group in promoting a new perspective that focuses on “systematically studying the successes and positive contributions of government”. Rejected by a number of “leading” journals, these internationally recognised scholars were forced to self-publish their 2019 manifesto paper, until a newly established journal offered to publish a similar paper in 2021.

Joshua Gans and George Shepherd’s 1994 paper, “How Are the Mighty Fallen”, explores the prevalence of rejection among leading economists, including 15 Nobel prizewinners. Established academic cliques will push back against anyone who genuinely attempts to push the boundaries.

In the social sciences, these challenges are augmented by the existence of dominant and highly political normative values that will generally guarantee the rejection of manuscripts promoting different ideological positions. The launch of the Journal of Controversial Ideas in 2021 with an option for anonymous authoring reflects the scale of this challenge.

So if universities really want to support their staff to publish more 4* outputs then they also need to be willing to let them fail – at least in publishing terms. Indeed, they may need to encourage them to fail; many academics who have been socialised into a “publish or perish” culture may struggle to cope with the requirement to move from the mechanical production of 3* outputs to the riskier aim of 4* work.

The REF reforms create space for failure by moving away from the assessment of individuals – but will performance reviews and promotion panels respond accordingly? An example of how hard it is for bureaucracies to truly embrace risk is provided by UK Research and Innovation’s recent call for “moonshot” projects. Its requirement that an application “be specific and well-defined in what it sets out to achieve, with a clear timeframe for completion” suggests a failure to understand the non-linearity of structured serendipity.

As the recent Nurse Review of the UK research, development and innovation landscape suggested, new structures might be needed to allow researchers to take bigger leaps. The creation in 2022 of the Advanced Research and Invention Agency (Aria) is an interesting case in point. This is designed “to make bold bets” in STEM – but where do the social sciences, arts and humanities go to do the same?

Some of the candidates for that professorial position had great ideas and a willingness to fail. But you’ll be unsurprised to learn that they were rejected. The publishing powerhouse was duly appointed.

Matthew Flinders is professor of politics at the University of Sheffield.

Reader's comments (4)

OK, but I thought that the REF only assessed whole units of assessment, so individual academics' scores for their outputs aren't published. In which case how does anyone have any idea what star rating they are aiming for? Frankly I neither know nor care whether my publications were rated 1 or 4, as I don't want my priorities to be determined by meaningless league tables, and don't want career advancement.
“The REF reforms create space for failure by moving away from the assessment of individuals” – Outputs and impact case studies are individual (or groups of individuals’) products. The REF may not explicitly release the individual scores, but it does assess each individual output. Internal REFs often determine progression and promotion. The most dangerous thing about the REF is that the individual scores are not released, which means no one can ever cross-check how the internal evaluations compare with the REF evaluation. This means not being given credit where credit is due (e.g. originality, significance and rigour are 4* but the paper appears in a 3* journal), or vice versa. Individual REF scores must be released so that individuals can learn from the feedback and improve.
'Learn from the feedback and improve' is what the journal refereeing process already does, done by actual experts. No need for the REF – still less internal fake REFs, which are even more meaningless, especially as they sometimes ask you to rate your own work! However, if you don't want promotion, internal REFs are easy to ignore and disengage from, by rating your own work as around 1 and getting back to actual research. We definitely don't need individual scores released for the real REF, because then there would be a risk of having to take the whole REF farce seriously.
If we are being evaluated we need to know how we fared. It is as simple as that. Whether such an evaluation itself should exist (ie should we have a REF) is a different question.