The conversation has begun in earnest. Four months after the publication of the 2010-11 World University Rankings, Times Higher Education is now formally consulting on refinements for the 2011-12 tables.
We have been delighted with the success of last year’s rankings. In setting a new standard for such exercises, they were recognised as the global authority. But no rankings system is perfect, and Times Higher Education is determined that its tables be subjected to continuing scrutiny to ensure that they keep improving.
Although the public debate never really stops, a number of suggestions for specific refinements were made last week, and more will be made in the coming weeks. The 50-plus members of our expert advisory group will receive a series of consultation documents, and their feedback and views will be formally collected.
At the moment, the priority issue concerns our use of citations information.
The “research influence” indicator carried the heaviest weighting of the 13 rankings indicators used last year. It drew on some 25 million citations from 5 million articles published over five years to measure how often, on average, each institution’s papers were cited.
We believe that we were absolutely right to normalise this citations data for research fields.
Some rankings systems use citations without taking account of the dramatic variations in citations volume by field. Such an approach is unacceptable, according to one of the world’s leading bibliometricians, Anthony van Raan of the Centre for Science and Technology Studies (CWTS) at Leiden University.
I had the opportunity to hear Professor van Raan’s presentation on rankings at a conference hosted by the Informatics Institute at the Middle East Technical University in Ankara late last year. He made clear that the variations in citation habits and volume between fields were so significant that normalisation was “extremely important” to draw fair comparisons. He went as far as to say that it was a “mortal sin” to cast citations in an “absolute manner”.
The issue was brought up again this month in a paper to the RU11 group of 11 leading research universities in Japan. It was written by Simon Pratt, project manager for institutional research at Thomson Reuters, which supplies the data for THE’s World University Rankings.
Explaining why THE’s rankings normalise for citations data by discipline, Pratt highlights the extent of the differences. In molecular biology and genetics, there were more than 1.6 million citations for the 145,939 papers published between 2005 and 2009, he writes; in mathematics, there were just 211,268 citations for a similar number of papers (140,219) published in the same period.
Obviously, an institution with world-class work in mathematics would be severely penalised by any system that did not reflect such differences in citations volume.
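To make the arithmetic concrete, here is a minimal sketch of field normalisation. It is an illustration only, not THE’s or Thomson Reuters’ actual method: the two field baselines use the figures quoted above, but the example departments and their citation counts are invented.

```python
# Illustrative sketch of field-normalised citation impact (not THE's
# actual methodology). Field totals are the 2005-09 figures cited in
# the article; the two example departments are hypothetical.

# World baseline: average citations per paper in each field.
FIELD_BASELINE = {
    "molecular biology": 1_600_000 / 145_939,  # roughly 11 citations per paper
    "mathematics": 211_268 / 140_219,          # roughly 1.5 citations per paper
}

def normalised_impact(papers):
    """papers: list of (field, citation_count) pairs for one institution.
    Each paper's citations are divided by the world average for its field,
    so a score of 1.0 means 'world average', whatever the field."""
    ratios = [cites / FIELD_BASELINE[field] for field, cites in papers]
    return sum(ratios) / len(ratios)

# A hypothetical world-class maths department: 3 citations per paper is
# roughly double the world average for the field...
maths_dept = [("mathematics", 3)] * 100
# ...while a merely average molecular biology department accumulates far
# more raw citations per paper.
biology_dept = [("molecular biology", 11)] * 100

raw_maths = sum(c for _, c in maths_dept) / len(maths_dept)      # 3.0
raw_bio = sum(c for _, c in biology_dept) / len(biology_dept)    # 11.0

print(normalised_impact(maths_dept))    # roughly 2: twice the world average
print(normalised_impact(biology_dept))  # roughly 1: exactly average
```

On raw citations per paper the biology department looks more than three times stronger; after field normalisation, the mathematics department comes out clearly ahead, which is the fairness argument the rankings rely on.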
So while normalising our citations data by field was a giant step forward for last year’s rankings, the decision also to normalise by region in the same tables proved rather more controversial.
As our data partner Thomson Reuters explains in a bulletin for our advisory group: “There are significant contrasts in citations behaviour and patterns in different geographic regions. It is argued that not all of these are indicative of underlying research impact. For example, universities in the US are part of a very large research community…which may lead to higher innate citation rates than their peers in developing countries…”
A modification to normalise citations by region can help spotlight exceptional institutions in typically low-citing countries. Such a change can also result in a more diverse rankings table that highlights excellence in developing countries.
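A simple sketch of how such a regional adjustment might work follows. This is an illustration under invented assumptions, not THE’s or Thomson Reuters’ published formula: the institutions, regions and scores are all hypothetical.

```python
# Illustrative sketch of regional normalisation (not the rankings'
# actual formula). Each institution's field-normalised score is divided
# by the average score for its region, so institutions are judged
# against their own region's citation norms. All data are invented.

scores = {
    # institution: (region, field-normalised citation impact)
    "Univ A": ("North America", 2.0),
    "Univ B": ("North America", 1.2),
    "Univ C": ("Region X", 0.5),  # a low-citing region
    "Univ D": ("Region X", 0.3),
}

# Average field-normalised impact per region.
by_region = {}
for region, score in scores.values():
    by_region.setdefault(region, []).append(score)
region_mean = {r: sum(v) / len(v) for r, v in by_region.items()}

# Region-normalised score: distance above or below the regional norm.
region_norm = {
    inst: score / region_mean[region]
    for inst, (region, score) in scores.items()
}

print(region_norm)
```

In this toy example, Univ C’s score of 0.5 looks weak globally but is 1.25 times its regional average, drawing it level with the much stronger Univ A. That is exactly how the adjustment can spotlight excellence in low-citing countries, and also how it can over-reward good-but-not-outstanding institutions.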
But the modification made for 2010-11, Thomson Reuters says, “created some anomalies where smaller institutions with good but not outstanding impact in very low-cited countries benefited disproportionately.
“For the future we need carefully to consider what approach should be used to address geographic balance, while ensuring that we do not unduly boost the performance of developing institutions.”
How to achieve that balance is something that we’ll be considering very carefully over the coming months.