Universal case studies would lighten the REF load

Dominic Dean has a plan to make future assessment exercises easier, quicker and better

12 March 2015



No sooner had the 2014 research excellence framework results been announced than debate began on how closely the next round should resemble it.

It is not easy to devise a process that is academically credible, publicly defensible, relatively free of perverse incentives and reasonably priced. Critics of the current arrangements allege a credibility gap between official claims that each output is thoroughly peer reviewed and calculations of the unbearable burden that such review would impose on panel members. There have been some strong rebuttals from those involved, but the sheer size of the panellists’ workload is undeniable.

Some claim that using metrics would lead to the same funding outcomes at a fraction of the effort and cost. However, even if this could be convincingly demonstrated, there would still be big questions about the perverse incentives the change might introduce.

I have an alternative, which draws its inspiration from the unlikeliest of sources.

In the lead-up to REF 2014, much trepidation was generated by its new element, impact. But in my experience – borne out by the testimonies in a recent Times Higher Education feature (“Cracking the case studies”, 19 February 2015) – producing the impact case studies proved to be one of the most fruitful and interesting elements of submission, not least because it played to academics’ strengths in presenting narratives grounded in evidence.

I suggest future REFs adopt a similar approach. Rather than submitting up to four outputs for each selected academic, each institution could instead be asked to produce a certain number of narrative case studies describing their contributions to knowledge in a given subject.

The emphasis of assessment would be on the significance of the findings, the rigour of the research and the standard of supporting evidence. The types of evidence used to demonstrate significance could be similar to those used for impact, such as testimony from experts in the field, while rigour could be demonstrated by a strictly limited number of cited research outputs and raw data (for reference rather than review).

As in the 2014 REF, the irrelevance of the format and location of the associated outputs would be emphasised. No reference to journals’ standard of review or impact factor would be permitted, dispelling the notion that “high impact” journals get higher marks.

Furthermore, as much value would be placed on the development of a novel initial finding as on the finding itself, diminishing the existing perverse incentive for academics to publish their new notions in high-impact journals and then move on. Institutions that build on breakthroughs elsewhere would also be permitted to submit such work – just as more than one institution is permitted to claim the same impact. This would support rather than undermine cross-institutional (and interdisciplinary) collaboration.

This approach would also boost confidence in the REF’s rigour. Since quality judgements would fundamentally be based on the case study narrative itself, panels would have no option but to carry out genuine peer review. And since the case studies would be relatively succinct and few in number, there would be ample time for that to be done thoroughly. Introducing a more manageable structure could only improve the REF’s robustness.

How many case studies would be needed? I would envision similar rules as for impact case studies: one for every 10 or so full-time equivalent academics submitted. This would make it virtually impossible for a unit of assessment to submit studies based on the work of an individual and present them as representative of the majority of its staff. A wise submission would instead emphasise the unit’s collective contribution to knowledge (especially if, as I would favour, all eligible academics had to be submitted).

This model would provide a renewed focus on research as a fundamental addition to knowledge, not only as a means to impact. Assuming the case studies were published, it would also have the additional benefit of articulating the importance of research in a more broadly comprehensible format than standard research outputs, allowing the REF to enhance its role in publicly proving the value of UK research.

There are partial precedents for the model, such as the Medical Research Council’s five-year reviews of its funded units, as well as the REF’s origins in the 1986 research selectivity exercise (the latter assessed just five outputs and a four-page general statement). My proposal in essence is an attempt to return to that level of simplicity and to end the ever-increasing complexity of each round of research assessment brought into being by complaints about the imperfections of the previous one.

If metrics are not an acceptable way to break that vicious cycle, it makes sense to take our cue from what worked best in 2014.
