These metrics don't measure up

June 23, 2006

Last week's Department for Education and Skills consultation document presents five models, all of which are based on external research income and none on the main alternative, bibliographic data.

The outputs of the five models appear to correlate poorly both with each other and with current research assessment exercise ratings. A better metric is the number of journal citations per head accrued by the staff of each institution. This bibliometric approach is straightforward, cheap and transparent. As we showed in a study several years ago (www.pc.rhul.ac.uk/vision/citations.pdf), it gives extremely similar results to the existing RAE process.

There are limitations: in particular, older, established staff are favoured at the expense of new researchers, and different disciplines (and indeed subdisciplines) have different citation practices. But it is possible to correct such problems with objectively based weighting factors. In any case, such biases tend to average out when whole institutions, or even departments, are compared.
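As an illustration of how such a weighted per-head measure could be computed (every institution name, citation count and field weight below is invented for the example, not drawn from the study cited above):

```python
# Hypothetical sketch of a citations-per-head metric with
# discipline weighting. Fields with sparse citation practices
# are up-weighted so they are not penalised.

# Citation counts per staff member, grouped by institution,
# each entry tagged with the member's field.
staff_citations = {
    "Institution A": [("physics", 320), ("physics", 150), ("history", 40)],
    "Institution B": [("history", 60), ("history", 25), ("physics", 210)],
}

# Objectively based weighting factors: here, the reciprocal of an
# assumed field-average citation rate.
field_weight = {"physics": 1.0 / 200, "history": 1.0 / 30}

def citations_per_head(staff):
    """Field-weighted citations divided by head count."""
    total = sum(count * field_weight[field] for field, count in staff)
    return total / len(staff)

for name, staff in staff_citations.items():
    print(f"{name}: {citations_per_head(staff):.2f}")
```

Because each staff member's count is normalised by their own field's average before institutions are compared, a strong history department is not swamped by a physics department's higher raw citation volume.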

The method cannot be manipulated: self-citations can be discounted, and it is impossible to engineer more than a very modest increase in citations by others. It is immune to grade inflation, and to false accusations of it. It is applicable to all disciplines. Perhaps most important, it is a truly international measure.

Whether we should move to metrics at all is a valid question. But if we do, the metrics must be primarily bibliographic if they are to yield impartial and defensible assessments of research quality, rather than simply reflecting current trends in research council funding.

Andy Smith

Royal Holloway, University of London
