First, the good news. Global university rankings have "certainly helped to foster greater accountability and increased pressure to improve management practices", according to a recent report from the European University Association. In addition, they have "encouraged the collection of more reliable data and in some countries have been used to argue for further investment in higher education", says the report, Global University Rankings and their Impact, by Andrejs Rauhvargers of the University of Latvia.
But for those of us who rank, that's about as good as it gets.
One significant charge that the report levels against existing world university rankings – that they are simplistic – is a valid point.
Global rankings are inherently crude – they are based on data that are available on a global scale and can be compared fairly across national borders. They simply cannot capture much of what matters most in our universities – the way in which inspirational teaching can transform lives, for example.
But as long as those who rank are clear about the limitations of their data, the judgements they make and the proxies they employ, responsible rankings can provide an essential analytical tool for universities, their staff and students, policymakers and businesses.
Moreover, they provide such a tool at a time when higher education is globalising rapidly, and when there has never been a greater hunger for accessible, comparative information on the performance of institutions and countries. The best rankings fill a significant information gap.
Much of the rest of the report states the blindingly obvious or rehearses well-trodden ground: the ranking providers' subjective judgement determines which indicators are more important; rankings "reflect university research performance far more accurately than teaching"; and existing indicators of teaching are all proxies. These are hardly shocking revelations.
But some of the report is simply wrong: it says, for example, that the reputation survey used by Times Higher Education is not described in "sufficient detail" to judge its quality. In fact, the full survey methodology and the survey instrument itself are readily available online (http://science.thomsonreuters.com/globalprofilesproject/gpp-reputational).
Then the report gets rather bizarre. It bemoans the fact that the rankings "cannot provide a diagnosis of the whole higher education system as they usually concern the top research universities only".
It points out that the rankings include only about 1-3 per cent of the 17,000-plus universities in the world.
"More than 16,000 of the world's universities will never obtain any rank…not only research universities deserve consideration, but also universities that are regionally important or those targeted at widening access to higher education with a view to involving a wider cohort of young people," the report laments.
But for Times Higher Education, the whole point is that we list only 1 per cent of the world's institutions. Our world university rankings name only the world's top 200 research-intensive universities. This means that we focus on comparing universities that may have different histories, structures and cultures, but which share a common global outlook: they publish a high volume of research in the world's leading international journals, and they operate in a highly competitive global market for staff, students and investment.
This approach does not preclude diversity within national university systems. It rejects a "one-size-fits-all" approach to higher education, and it absolutely does not imply that institutions with different missions and with a national, or local, focus are somehow inferior. It just accepts that they are different and should not be judged on the same criteria as institutions with very different priorities.