The most important performance indicator in the Times Higher Education World University Rankings is the one that uses journal article citations to evaluate “research influence”.
For the forthcoming 2011-12 rankings, our data partners Thomson Reuters looked at about 50 million citations to more than six million papers published over a five-year period.
We are satisfied that, across a university, the number of citations its peer-reviewed journal papers receive from other scholars provides a robust and widely accepted indication of the significance and relevance of its research.
Thomson Reuters, which owns the citations database used, performs sophisticated analyses to ensure the data are properly normalised to account for the differences in publication habits, and hence citation levels, between fields. This ensures that all universities are treated fairly.
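Thomson Reuters' exact formulae are proprietary, but the principle of field normalisation can be sketched simply: a paper's citations are judged against the world average for papers of the same field and year. The field names and baseline figures below are invented for illustration and are not Thomson Reuters' actual data or method.

```python
# Hypothetical world baselines: average citations per paper by (field, year).
# These numbers are illustrative only.
WORLD_AVERAGE = {
    ("cell biology", 2007): 19.0,   # a high-citation field
    ("mathematics", 2007): 3.0,     # a low-citation field
}

def normalised_impact(citations, field, year):
    """Citations relative to the world average for that field and year.
    1.0 means world average; 2.0 means twice the world average."""
    return citations / WORLD_AVERAGE[(field, year)]

# A mathematics paper with 6 citations outperforms its field by more than
# a cell-biology paper with 20 citations, despite the lower raw count.
print(normalised_impact(6, "mathematics", 2007))    # 2.0
print(normalised_impact(20, "cell biology", 2007))  # ~1.05
```

This is why raw citation counts alone would flatter the medical and life sciences: only after normalisation can universities with different subject mixes be compared fairly.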
So we are happy that this performance indicator receives the highest weighting of the 13 employed by the rankings (it was worth just under a third of total scores last year).
But of course, it is not without controversy. Some object to the reliance on citations data in principle; others have more specific objections to how the data are analysed.
The biggest concern with the indicator last year centred on the influence of exceptionally highly cited papers on the overall performance of smaller universities. Exceptionally high “research influence” scores for Alexandria University in particular caught the eye, and helped it to do well in the rankings. It was not alone.
We drew attention to such anomalies in the interests of transparency and to open a debate on potential improvements for 2011-12.
I’m delighted to say that the debate was highly productive, and we can now confirm that we have been able to refine the way we examine the citations data to address these concerns.
For this year's rankings, the indicator counts citations to papers published in indexed journals during a five-year window (2005-2009). An important change is that we have extended the period within which citations to those papers are counted by an additional year, to the end of 2010.
Citations usually take time to accumulate, but some exceptional papers can pick up a high volume in the year of publication. When benchmarked against other papers published in the same year and subject, these can become extreme statistical outliers. At small institutions with a relatively low volume of publications, affiliation with such papers can push the overall research influence score up disproportionately.
The additional year will help to reduce the disproportionate impact of such papers.
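The arithmetic behind the outlier effect is easy to demonstrate. In the sketch below (all figures invented for illustration), a single exceptionally cited paper multiplies the mean impact of a small institution several times over, while barely moving that of a large one.

```python
# One extreme paper moves the mean impact of a small institution far more
# than that of a large one. All numbers here are invented for illustration.

def mean_impact(paper_impacts):
    """Average normalised citation impact across an institution's papers."""
    return sum(paper_impacts) / len(paper_impacts)

small = [1.0] * 60      # 60 papers, all at world-average impact
large = [1.0] * 6000    # 6,000 papers, all at world-average impact

outlier = 300.0         # one exceptionally highly cited paper

print(mean_impact(small + [outlier]))  # ~5.9: the score is multiplied ~6x
print(mean_impact(large + [outlier]))  # ~1.05: the score barely moves
```

Both remedies described above attack this asymmetry: counting an extra year of citations dilutes the relative weight of a single early spike, and a higher publication threshold keeps the denominator from being too small in the first place.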
In a further move to reduce the outlier effect, we have also raised the minimum publication threshold below which institutions are excluded from the rankings. For the 2011-12 tables, only universities that have published at least 200 research papers a year (up from 50) are included.
Another area for improvement concerns how we moderate the research influence score to take into account institutional location.
Last year, on the advice of our expert advisers, we sought to acknowledge excellence in research among institutions in developing nations with less-established research networks and lower innate citation rates.
To achieve this we applied a regional modification to the data.
Simon Pratt, project manager of Institutional Research at Thomson Reuters, which collects and analyses the data, said: “While this was effective in identifying regionally excellent research, the approach unduly favoured those countries with developing economies and a focus on applied science. This year we have improved the regional modification to take into account the subject mix of the country.
“The result is that some institutions in countries with a focus on subjects with low citation rates, such as engineering and technology, will still have their citation impact raised by the modification, but less than last year. Correspondingly, some institutions in countries focusing on highly cited subjects, such as medical and biological sciences, may find that the regional modification will lower their citation impact to a lesser extent.”
These refinements have been made after detailed consultation and careful consideration. They mean that direct comparisons with last year's results must be made with caution. But they also mean that the 2011-12 World University Rankings will be the most sophisticated and carefully calibrated ever published.