Research intelligence - The new rules of the game

Panels near consensus on how research 'impact' should be reported and assessed. Paul Jump reports

November 18, 2010

The dos and don'ts of reporting the "impact" of research began to emerge in last week's report by the chairs of the research excellence framework pilot assessment panels.

The inclusion of an impact measure in the 2014 REF has yet to be agreed, and David Willetts, the universities and science minister, has questioned whether the assessment methods are sufficiently developed.

But as Times Higher Education reported, the chairs of the panels judging this year's pilot agreed that the approach proposed by funding chiefs, based around case studies of impact arising in the past six years from research carried out in the past 15 years, is workable.

The panels, which consisted of roughly equal numbers of academics and "research users", felt that every subject should be able to demonstrate impact provided the term was understood to take in social, cultural, environmental, health and quality of life benefits, as well as economic ones. They also agreed that purely academic impacts - including the training of postgraduates - should be discounted.

The best case studies, according to the report, consist of a "coherent narrative, explaining what research was undertaken, what the claimed benefits or impacts were, and how the research was linked to the benefits".

But while several exemplars were due to be posted on the Higher Education Funding Council for England's website this week, "the quality and clarity of evidence provided in a number of the case studies was not as high as panels would have hoped".

Contrary to expectations that English departments would struggle the most, the lowest scores came in social work and social policy.

The majority of case studies in these fields highlighted the impact of research on policy development, and the panel regarded this as legitimate, even where no policy change had resulted, or where it was too early to assess the impact of change.

But many departments failed to provide hard evidence of their contribution to the debate. "Simply showing that evidence had been provided to inform the policy process was not considered sufficient," the panel chairs reported.

The Earth systems and environmental sciences panel made a similar point. On the other hand, the panels' collective recommendations note that "it should not be necessary that the institution was involved in exploiting or applying the research" provided it caused the impact.

Most submissions to the physics panel related to the development of products and services. But evidence tended to be scant where research and development was carried out by industry, often because of commercial confidentiality and, in defence research, the Official Secrets Act.

However, the panel thought it should be possible to agree statements that satisfied REF panels without giving away secrets, and noted that panels were "likely to be sceptical of the stated impact if commercial confidentiality is given as the reason for withholding information".

Nor did the physics panel think the problem of distinguishing individual departments' contributions to large collaborative projects was insurmountable.

"We had a number of panellists who had been on the 2008 research assessment exercise panel and they felt (attribution) would not be any more difficult for impact than it is for outputs," Peter Saraga, the panel chair, told THE. "But you must have a contribution that is distinctive and significant."

Submissions to the clinical medicine panel tended to relate to healthcare improvements or economic benefits to the pharmaceutical industry.

The panel agreed it was "challenging" to assess these on the same scale, but it declined to mark down the latter because it felt it was important to encourage academic interaction with industry. The Earth systems panel appeared to disagree, putting "outcomes" above "commercial value to companies".

The medicine panel also seemed to break ranks with its view that even though the quality of outputs was assessed elsewhere in the REF, "impacts arising from innovative science, novel discovery or unique interactions all should be recognised as at a premium". A similar point was made by Judy Simon, chair of the English panel, who said that the excellence and impact of research were inseparable and had been considered holistically by her panel.

Hefce's official line is that panels should merely assure themselves that research is at least 2* in quality, and then concentrate on the "significance" and "reach" of its impact.

Public engagement

Another fraught area was public engagement. The panels emphasised that even academic celebrities fronting TV series would count for little if the content was not directly related to the department's research and if hard evidence of tangible impact was not presented, such as viewing figures, satisfaction surveys or other indicators of increased public understanding.

Meanwhile, although the English panel found it easier to think in terms of "benefits" rather than "crudely economic" impact, they were surprised to receive so few case studies highlighting academics' economic impact on the creative economy, especially publishing.

"We would have welcomed a case study that revealed, for instance, how even a specialised monograph creates employment for commissioning editors, copy editors, production, promotion and distribution staff, as well as, for example, typesetters in India and printers in China," they said.
