Metrics, men and mole rats (1 of 3)

26 April 2012

The story "Restructuring metrics could fail to add up" (19 April) highlights the pernicious atmosphere pervading universities that aim to improve their research standing. Does this ever work? Not if it relies on metrics, it doesn't.

The term "metrics" appears to cover an arbitrary set of unvalidated methods to judge research. It may include the numbers of papers published, the perceived status of the journals involved (frequently reflecting "impact factors"), the order in which authors' names appear on papers, and research grant income. This gives non-scientist administrators a convenient way to assess the quality of scientific research without having to know what it is about, let alone understand it. Why do scientists connive in this?

Eugene Garfield, whose company, the Institute for Scientific Information (now part of Thomson Reuters), published Current Contents and who invented the journal impact factor, tried to promote citation indices by claiming they could predict Nobel prizewinners. He was very good at predicting winners after they had won, but as far as I know he never risked predicting any before they were announced. It is easy to see why.

I've had a look at the publication data for some Nobel laureates, mostly derived from Thomson Reuters' Web of Science (a lineal descendant of Current Contents). I didn't spend long on it, so by simply accepting the data without further validation, I may be doing some of them an injustice. But the data are accurate enough to show massive variations in citations, numbers of papers and h-indices. All the laureates have one thing in common: the papers announcing their unique discoveries are massively cited, as one would expect. Others may not be: for example, John Vane (Nobel Prize in Physiology or Medicine 1982) had more than 120 papers that were not cited by anyone (including himself).
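For readers unfamiliar with the h-index mentioned above: it is the largest number h such that an author has h papers each cited at least h times. A minimal sketch of the calculation (the citation counts are invented for illustration):

```python
def h_index(citations):
    """Return the largest h such that the author has h papers
    each cited at least h times (Hirsch's h-index)."""
    # Sort citation counts from most-cited to least-cited.
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # the paper at this rank still has enough citations
        else:
            break
    return h

# Hypothetical author with five papers cited 10, 8, 5, 4 and 3 times:
print(h_index([10, 8, 5, 4, 3]))  # 4
```

Note how insensitive the measure is to the tail: the 120-odd uncited papers in Vane's record would leave his h-index entirely unchanged.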

But what do these data really tell us? Is there any linear relationship to quality? I doubt that any scientist would be so foolish as to suggest it. Administrators, on the other hand, might. They might even go further and invoke impact factors - a citation measure not of the articles they wish to evaluate but of other articles submitted to the same journal. It's a bit like giving the same jail sentence to everyone who happens to be present during a riot. It's utterly surreal.
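The point about impact factors is easy to make concrete. A journal's impact factor for a given year is the number of citations that year to items the journal published in the previous two years, divided by the number of citable items it published in those two years; it says nothing about any individual article. A sketch, with invented numbers:

```python
def impact_factor(citations_to_prior_two_years, citable_items_prior_two_years):
    """Journal impact factor for year Y: citations received in Y to
    material published in Y-1 and Y-2, divided by the number of
    citable items published in Y-1 and Y-2."""
    return citations_to_prior_two_years / citable_items_prior_two_years

# Hypothetical journal: 2,500 citations this year to 500 citable items
# published over the previous two years.
print(impact_factor(2500, 500))  # 5.0
```

Because citation distributions within a journal are highly skewed, a handful of heavily cited papers can carry this average while most articles in the same journal are cited rarely or never, which is precisely the "riot" problem described above.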

Gavin P. Vinson, Muswell Hill, London
