REF more burdensome than RAE, pro vice-chancellors say

THE straw poll shows efforts to lighten load have backfired

November 28, 2013

Can you hear a prolonged scream reverberating around the corridors of your university’s administrative building? If so, it is likely to be the sound of exhausted, caffeine-addled research office staff putting the final touches to their 2014 research excellence framework submission, due by 29 November.

It wasn’t supposed to be like this.

When Gordon Brown, at the time chancellor of the exchequer, announced in 2006 that the old research assessment exercise was to be scrapped, his intention was for it to be replaced by a metrics-driven approach that would be much less burdensome to administer.

Even after that approach had been rejected by the funding councils as unworkable, they still hoped to ease the burden by, for instance, shortening the template for the environment section and slashing the number of units of assessment from 67 to 36.

However, a straw poll of pro vice-chancellors for research carried out by Times Higher Education last week suggests that, if anything, the labour involved this time round has been greater than in 2008.

According to one Russell Group pro vice-chancellor, the complexity of combining several academic departments into one unit of assessment has helped to make the REF “much more onerous” than the RAE.

The other major factor that has made matters worse is the new impact element – introduced as a sop to the government after the metrics-driven approach was rejected.

Geoff Rodgers, pro vice-chancellor for research at Brunel University, agreed that impact meant that the effort required to prepare his institution’s REF submission had been “substantially greater” than that for the RAE, since “it took some time to understand the detailed requirements”.

However, he added, Brunel “got there eventually” and the resulting case studies “brilliantly” illustrated – to the researchers themselves as well as taxpayers – “the important public benefit” of the research.

Myra Nimmo, pro vice-chancellor for research at Loughborough University – where the effort involved for the REF “has not been less” than in 2008 – pointed out that while most impact case studies had to be written from scratch this time, universities will now begin gathering them “in real time”, which will make future submissions easier.

Some universities have suggested that they will constrain the number of researchers they submit on the basis of how many good case studies they can come up with, but none of those THE interviewed had taken that approach.

However, only Professor Nimmo was prepared to say what proportion of staff she envisaged submitting to the REF. This would be somewhere around 85 per cent compared with around 95 per cent in 2008 – although the decline was not strategically driven at either the unit or university level, she said.

All those polled expressed confidence in their submissions.

But the Russell Group pro vice-chancellor said that institutional confidence would be better focused “in the research strategies we have put in place, and [in the belief that] we have retained, hired and supported the very best researchers to deliver them”.

He added: “That being the case…future assessment exercises should perhaps move to a simpler approach which requires every [institution] to submit every researcher rather than encouraging selectivity.”

paul.jump@tsleducation.com

Reader's comments (2)

And why was a metric-based system deemed 'unworkable'? Because it was rejected by academics. We brought this on ourselves. In the sciences, metrics predicted RAE outcomes pretty accurately. Yet when I have made this point (eg http://occamstypewriter.org/athenedonald/2013/08/15/why-i-cant-write-anything-funny-about-the-ref/#comment-114639), the idea of metrics is greeted with horror. The humanities may need a different solution, but in the sciences metrics could provide a far more cost-effective and objective method for evaluating departments - but it seems it is just too simple for many academics to accept.
I wholly agree with what Dorothy Bishop writes about metrics, but I would also add that I am not persuaded that, in this regard, the humanities are any different to the natural and social sciences. The real issue is that HEFCE has opted for a less than transparent method of assessment that circumvents international scrutiny and that does nothing to address the concerns about British universities which Bahram Bekhradnia raises in this issue of THE.

