For some, the old maxim that "a camel is a horse designed by committee" neatly fits the proposed blueprint for assessing research impact in the research excellence framework. However, there are compelling reasons why it must take this peculiar shape.
Five years ago, I chaired an Australian government committee seeking the optimal method to assess the broad social benefits of academic research. This informed the development of an Australian equivalent of the UK's REF.
We found that standard quantitative impact measures - number of patents, spin-off companies, commercialisation income and so on - lacked robustness. They said little about the benefits of the work, privileged private economic value over wider public value and had little relevance for basic research, especially in the humanities, arts and social sciences.
One day, my committee secretary phoned me with an idea for a novel metric to capture impact on policy: citations in Hansard. What about counting how often research was mentioned in parliamentary debate? Even better, he went on, whether the research was discussed favourably or not would be a measure of positive or negative policy impact. After a pause, I asked what it would mean if a positive or negative citation was made by government or opposition. The line went quiet. Thankfully, this idea was not mentioned again.
The committee concluded that the lack of robust impact measures made a metrics-only exercise untenable. Australia's chief scientist, previously inclined towards metrics, readily accepted a case-study approach.
Since the early 1990s, the research evaluation community has developed a variety of ways to gauge the broad social and economic benefits of research. State-of-the-art methods fuse case studies with robust supporting quantitative and qualitative data.
A similar case-study approach has been proposed in the consultation document on draft panel criteria and working methods for the REF. There are several reasons why this deserves support.
First, scholars in the humanities, arts and social sciences have often been the harshest critics of impact. This is understandable - impact has often been presented as an instrumental economic rationalisation of the value of research. Yet this applies largely to a metrics-only approach; case studies reveal the wider benefits of research. If we disengage from the impact agenda entirely, or reject the principle of impact case studies, the likely alternative is "one-size-fits-all metrics" that conceal socially and culturally valuable research outcomes.
Second, some natural and social scientists advocate metrics-only impact assessment on the grounds that narratives are "fairy tales" and peer review is "subjective". But this is "impact-lite". Removing the power of narrative explanation and expert judgement would render the evaluation process superficial. "Objective" metrics often gloss over far more than they reveal.
Third, a great deal of research produces social, cultural, environmental and economic benefits that go unrecognised. Impact assessment in the form proposed illuminates this public value. It also strengthens the case for government support for the humanities, arts and social sciences on their own terms.
This is also a cautionary tale, for the Australian committee's recommendations did not come to fruition. With a change of government, a new minister looked at the case-study approach to assessing impact and saw a camel. It was replaced by four streamlined, simplistic metrics: plant breeders' rights, patents, registered designs and commercialisation income.
The lesson here is that the worst possible response to the consultation is to reject the case-study approach. Of course, there are caveats. The REF methodology is correct in principle but needs fine-tuning. Basic research must remain valued: one solution is for impact to be optional but eligible for rich rewards. And assessments must not stifle impact "in the making" or the efforts of younger research groups.
It is in all our best interests to respond to the REF consultation with constructive suggestions. Otherwise, get the hump and get simple metrics.