Universities ‘must collaborate’ on research assessment reform

Group of leading European research institutions calls for development of ‘next-generation metrics’

May 8, 2024

Institutions should work together to develop “next-generation metrics” for “responsible research assessment”, the League of European Research Universities (Leru) says.

In a position paper released this week, Leru warns against the use of “one-size-fits-all” methods of academic evaluation, urging universities to tailor their policies to their own “missions” while collaborating with other institutions and funding agencies to “leverag[e] international expertise”.

The paper sets out one “overarching recommendation” to universities: “Use indicators and metrics that are contextually relevant, that support responsible research evaluation, and that align with your institution’s mission. Institutions should collaborate and reuse existing metrics expertise in order to maximise their efficiency in achieving this goal.”

Next-generation metrics might incorporate both newly adopted indicators – progress towards open science, for instance – and existing metrics applied in “novel ways” or in different contexts, the paper says. Citing limited resources, inconsistent data collection and embedded practices as barriers to universities attempting to evaluate research processes effectively, Leru calls on institutions to employ existing expertise and literature while experimenting with new approaches.

Paul Wouters, chair of the Leru experts writing group and emeritus professor of scientometrics at the University of Leiden, recommended that members of the university group team up to conduct “experiments” on new research assessment approaches.

“I could imagine, for example, that three or four Leru universities might collaborate on redesigning the way they do the annual performance interview with their researchers,” he said in an online Q&A.

“Then another three or four universities might work together on implementing the CoARA [Coalition for Advancing Research Assessment] principles with respect to assessment of research groups. And another four universities could bring together their computer and library science people to see what data could most easily be harvested for meaningful metrics with respect to, for example, open science practices or the promotion of gender equality.”

Such experiments, Professor Wouters said, would help to familiarise institutions with new approaches to research assessment. “We are not advocating a complete revolution, where we suddenly do away with everything,” he added.

“In chemistry, for example, where the present indicators do say something useful, the steps would be perhaps a bit more gradual than in fields like history or philosophy, where counting citations makes no sense at all.”

Calling institutions’ “obsession” with global university rankings “misplaced”, Professor Wouters said that in the absence of research assessment reform, “we also run the risk of missing important breakthroughs, because researchers don’t have enough time to spend on developing completely new fields”.

Leru adds that a second expert group should be established to tackle teaching assessment reform. Current metrics such as student satisfaction, Professor Wouters said, often fail to reflect teaching quality. “Someone can be very dissatisfied with a course but still learn a lot and find it very valuable in the long term,” he explained.

emily.dixon@timeshighereducation.com
