Metrics, men and mole rats (1 of 3)

April 26, 2012

The story "Restructuring metrics could fail to add up" (19 April) highlights the pernicious atmosphere pervading universities that aim to improve their research standing. Does this ever work? Not if it relies on metrics, it doesn't.

The term "metrics" appears to cover an arbitrary set of unvalidated methods of judging research. It may include the number of papers published, the perceived status of the journals involved (frequently reflecting their "impact factors"), the order in which authors' names appear on papers, and research grant income. This gives non-scientist administrators a convenient way to assess the quality of scientific research without having to know what it is about, let alone understand it. Why do scientists connive in this?

Eugene Garfield, whose company, the Institute for Scientific Information (ISI, now part of Thomson Reuters), published Current Contents, and who invented the journal impact factor, tried to promote citation indices by claiming that they could predict Nobel prizewinners. He was very good at predicting winners after they had won, but as far as I know he never risked predicting any before they were announced. It is easy to see why.

I've had a look at the publication data for some Nobel laureates, mostly derived from Thomson Reuters' Web of Science (a lineal descendant of Current Contents). I didn't spend long on it, so by simply accepting the data without further validation I may be doing some of them an injustice. But the data are accurate enough to show massive variations in citations, numbers of papers and h-indices. All the laureates have one thing in common: the papers announcing their unique discoveries are massively cited, as one would expect. Others may not be: for example, John Vane (Nobel Prize in Physiology or Medicine 1982) had more than 120 papers that were not cited by anyone (including himself).
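For readers who have not met the h-index mentioned above: a researcher has index h if h of their papers have each been cited at least h times. A minimal sketch of the calculation follows (illustrative only; the citation counts are invented, not drawn from any laureate's record):

```python
def h_index(citations):
    """h-index: the largest h such that h papers have >= h citations each."""
    # Sort citation counts in descending order, then find the last
    # (1-based) rank at which the count is still at least the rank.
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(counts, start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

# A long tail of uncited papers, like Vane's 120, leaves the index
# untouched; only the most-cited papers move it.
print(h_index([500, 120, 45, 10, 3, 0, 0, 0]))  # -> 4
```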

But what do these data really tell us? Is there any linear relationship to quality? I doubt that any scientist would be so foolish as to suggest it. Administrators, on the other hand, might. They might even go further and invoke impact factors - a citation measure not of the articles they wish to evaluate but of other articles submitted to the same journal. It's a bit like giving the same jail sentence to everyone who happens to be present during a riot. It's utterly surreal.
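To make the objection concrete, the standard two-year impact factor that ISI popularised is computed at the level of the journal, not the article (the notation here is mine, added for illustration):

\[
\mathrm{IF}_{Y}(J) = \frac{\text{citations received in year } Y \text{ by items } J \text{ published in years } Y-1 \text{ and } Y-2}{\text{number of citable items } J \text{ published in years } Y-1 \text{ and } Y-2}
\]

Every article in journal J is then tarred, or gilded, with the same brush, regardless of how often it is cited itself.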

Gavin P. Vinson, Muswell Hill, London
