I am one of the dying breed who were professors at the time of the first “research selectivity exercise” in 1986 and are still plying their trade. So I have witnessed at first hand the evolution of research assessment from the nervous anticipation of the early days to the monolithic research excellence framework, whose results are published today.
For those younger than me, it is easy to romanticise the past and to underestimate the benefits that research assessment has brought. It is so much better to see young academics receive timely reward for hard work and innovation than to have them wait for Buggins’ turn. Years ago, academics were often confronted with a difficult choice: move the family away to get a promotion, or wait years for the chance of a personal chair, which always seemed to get pushed one year into the future. Quite rightly, universities these days do the exact opposite. We may now have more chairs than the Albert Hall, but this is far better than the limited slots of the past.
Of course, the modern system has plenty of problems. The job market is getting more cyclical, with too much hiring and too many spiky salary hikes in the lead-up to the census date, followed by two years or so of too little activity. And I’ve been particularly struck this time around by the amount of debate and sheer distress about the possible consequences for staff who have been excluded from submission.
But in my view, most of the problems result not from assessment per se but rather from the way we do it.
Setting the results in stone for the next half-dozen years makes the stakes far too high. The submission process becomes all-consuming for the best part of two years; uncertainties about the assessment process matter too much; and people’s academic dreams can be crushed if they are shunted to teaching-only contracts and left facing a tough road back. If things go really badly, some heads of department may not have a department next time around.
I think we should have the REF every year – but not like the current one. After a comprehensive exercise, we would just need annual adjustments. Papers published in the first year of the previous assessment period would no longer count, and those published in the year after the end of the previous assessment period would come into contention. But as it is only one year’s worth of material that would need to be peer-reviewed, the workload for the subpanels – which would reconvene annually – would be a fraction of what it is now in assessment years. University staff move around, of course, but if they moved within the UK and their papers were included in their previous institution’s submission, they would not have to be reassessed.
Institutional workloads would decline dramatically. Each year, a head of department would merely have to note which of his department’s papers dropped out of the submission and which came into contention, then update the submission accordingly. An impact study might need to be replaced, but otherwise, the only other task would be to detail what – if anything – had changed in the department’s environment since the previous year.
With such a system, the frenzy around submission would be reduced enormously as changes would be minimal and any strategic errors could be addressed in 12 months. Similarly, the spikes in the job market would disappear because there would be none of the sense of “now or never” about hiring staff.
And for individuals, not being submitted to the REF would become a different ball game. There would be a number of academics who were in every submission, but the majority would probably miss the occasional year, and some might miss several. Being out in any one year would not be so traumatic because sensible departments would judge people by the overall proportion of time they were in. A few staff would never be submitted, but then the ensuing conversation about whether a research-based contract was right for them would be far more likely to be focused on the right people.
The introduction into the REF of a measure of impact has only made deans and heads of department even more anxious and fraught. Some places will get it badly wrong and will suffer, while any incentive to scale up the institutional drive for impact is dampened by the inability to get any traction on the issue until the next REF – probably in 2020. If the government really wants impact to matter, a system that allowed things to update quickly would be beneficial.
The flexibility afforded by an annual REF could also be used to address other issues that we seem reluctant to tackle now. One is gender: we could, for example, give longer publication windows to mothers with young children, and then taper the extension in subsequent years in a smooth way as the children got older.
If you think about it, you realise that the problem with the REF is not that we have too many of them; it’s that we have too few. Everyone in the “real” world manages to assess themselves in a composed manner every year. Why can’t we?