‘Inconsistency’ fears after REF retreat on environment metrics

Clear guidance needed on how narrative-based submissions will be graded to avoid accusations that judgements are overly subjective, say experts

Published on January 5, 2026. Last updated January 5, 2026.
Course volunteers update the main leader board during a golf tournament.
Source: Jason Arthurs/Raleigh News & Observer via Getty Images

Limited use of metrics in the revamped environment section of the UK’s Research Excellence Framework (REF) has raised concerns about how submissions will be evaluated consistently.

There are fears that the newly renamed “strategy, people and research environment” (SPRE) section will attract the same criticisms as its predecessor, “people, culture and environment” (PCE), which some felt lacked robustness because institutions were graded on lengthy narrative-based submissions.

It had been hoped that a pilot examining potential “indicators” covering areas such as staff training, open access and diversity would produce metrics allowing a more streamlined and objective assessment.

But a report on the pilot, published last month, found that data collection proved difficult for participants. Some data – such as that collected for the England-only Knowledge Exchange Framework – was not available for all institutions, while collecting data for other metrics would have imposed a “significant burden” on institutions and would be “impossible to collect retrospectively”.


In addition, the relevant population for many of the research indicators was “unclear” and “might not be appropriate for all disciplines”, meaning they could “not be applied evenly across the entire exercise”.

Recommending a “much tighter framework for a full-scale REF exercise”, the report put forward only a handful of metrics for the PCE section on which there was “reasonable agreement” among panellists who assessed the pilot’s submissions. These include longitudinal data on research staff broken down by sex, ethnicity and disability; pay gaps for each of these protected characteristics; and numbers of newly employed early-career researchers, technical support staff and staff on fixed-term contracts.


Data on how many times shared datasets are accessed, or the share of research outputs published open access, could also be included, the report adds.

If REF panels are again asked to examine narrative statements supported by only a handful of data points, more thought will need to be given to how to avoid the charge that assessments are “overly subjective”, said Simon Green, pro vice-chancellor (research and knowledge exchange) at the University of Salford and a panel member for the pilot’s examination of institution-level indicators.

“Consistency of evaluation is key. With something as broad as SPRE, it is normal that panel members will place different emphasis on the different elements within each section,” he said, adding that panels will need to develop working methods to mitigate this.

Green said that while REF organisers had clarified that context, as opposed to absolute performance, will matter in the assessment of institution-level statements, “it would be helpful to see this extended to unit-level statements, to avoid the risk of inconsistent assessment”.


He said he felt there was “a growing recognition that assessing SPRE requires specialist understanding” that was “different to evaluating outputs and impact”.

This might mean that specific training is needed for panel members, he suggested, or organisers may need to question “whether the same panel members should be assessing all three elements”.

John Womersley, former executive chair of the Science and Technology Facilities Council, said the shift away from greater use of metrics in PCE could reflect a realisation that a metrics-driven approach would favour the “entrenched hierarchy” of research-intensive universities. According to the pilot report, larger universities were far more likely to be judged 4* for research environment based on the data collected.

“Narratives feel more egalitarian”, said Womersley, adding that “the cynic might note that those pushing what one might call a ‘social justice’ agenda have always tried to steer assessments away from pure metrics, like counting publications in high-impact journals and so on”.


One of the strongest recommendations of the pilot’s panels is that environment statements should contain “self-reflection” and a focus on “continuous improvement” if they are to be deemed 4*.

For Green, this element is crucial. “One aspect on which clarity will be particularly important is that SPRE statements, both at institutional and unit levels, should be based on honest self-assessment, and not just present a highly polished and selective picture of the submission in question.”


jack.grove@timeshighereducation.com



Reader's comments (1)

This is becoming more and more ridiculous. One gets the impression that they don't really know what they are doing despite the enormous amounts of time and resource they have expended on this and they are making it up as they go along. This is so late in the day to be messing around with what are fundamentals.
