"The rise of global rankings has transformed higher education for ever," says Simon Marginson, professor of higher education at the University of Melbourne.
For Marginson, rankings are "creating one single worldwide research-university sector, which provides the basis for a one-world knowledge system and, ultimately, a single world culture - in which diversity will continue to abound, but held within one container".
So, anyone who thinks that university rankings are a bit of fun - an inherently shallow service to student consumers good only for selling newspapers - may need to think again. It is not only students, parents and university marketing staff who are taking them seriously.
Marginson believes that rankings are "changing history, not just in higher education, but in all the social, economic, cultural and governmental sectors affected by higher education. In other words, the ranking systems - and the single worldwide higher education sector they embody and create - will change almost every sphere of human activity."
Many will disagree with Marginson's dramatic analysis - and some in the global academic community continue to studiously ignore rankings on principle - but there is no question that global university league tables have arrived ... and they are not going away.
There are at least seven widely used systems that seek to compare institutions globally, not least Times Higher Education's own annual World University Rankings. And more are on the way - the European Commission is funding the development of an international university performance comparison.
There is also no doubt that global rankings are changing the behaviour of students, academics, university managers and governments. Ellen Hazelkorn, head of the Higher Education Policy Research Unit at the Dublin Institute of Technology, has made the growing influence of rankings the subject of her latest book, Rankings and the Battle for World-Class Excellence: How Rankings Are Reshaping Higher Education.
She has catalogued their extraordinary influence: from the Dutch immigration law that prioritises for entry foreigners with qualifications from the top 150 universities, to the launch of multibillion-dollar national initiatives designed to build world-class universities (see page 38).
How did league tables come to occupy such an influential position?
For Marginson, the rankings "have become an inevitable part of public life because universities have moved to centre stage in all modern societies".
The prominence of higher education and the concomitant rise of ratings grow out of three key developments, he says.
First, mass higher education has become the norm across the world.
"In the nations of the Organisation for Economic Co-operation and Development and in rising East Asia, between a third and a half of all people now attend higher education institutions at some point in their lifetimes, and that experience shapes their economic and social opportunities," Marginson says.
Second, research and innovation are "key to most products and services we use", and the basic research conducted in universities is "the most fecund source of new ideas".
"The third is about policy. For governments, higher education has become strategic in several ways - as the place where social opportunities are provided, as the source of many innovations, and as a site of global networking," he adds.
"Higher education is not simply more important than before: it is also more global."
Ben Wildavsky charted the extraordinary and rapid internationalisation of higher education in his recent book, The Great Brain Race: How Global Universities Are Reshaping the World. It devotes a whole chapter to the rise of world rankings.
The facts cited by Wildavsky speak volumes: from 1999 to 2009, the number of students attending a university outside their home countries rose by 57 per cent to 3 million; cross-border scientific collaboration (measured by co-authored journal articles) has more than doubled since 1990; half of the world's top physicists no longer work in their home countries; and universities have set up 162 "branch campuses" operating outside their national borders, representing an increase of 60 per cent in just five years.
Wildavsky, a senior Fellow at the Ewing Marion Kauffman Foundation, tells THE: "We now have a global academic marketplace. It seems to me that education markets, like other kinds of market, need information to function effectively. We're also living in the age of accountability, so rankings aren't going away."
Indeed, all the signs are that international rankings will continue to proliferate.
The first exercise to hit the global scene, in 2003, was Shanghai Jiao Tong University's Academic Ranking of World Universities. Based exclusively on research performance, primarily in science, the ARWU began life, Wildavsky recounts in his book, as an internal exercise designed to benchmark Shanghai Jiao Tong's position in world higher education. But its conception coincided with the Chinese government's drive to build an elite cadre of world-class universities, and its publication via the internet prompted massive interest from the academy and the wider public worldwide.
The ARWU was followed in 2004 by THE's attempt to take a broader look at what makes a world-class university. In contrast to the ARWU's research-only approach, THE's previous world university rankings (2004-09) included a proxy indicator of teaching quality (a simple staff-to-student ratio) and used a subjective survey of universities' reputations.
THE's effort, according to Wildavsky in The Great Brain Race, "far surpassed the Shanghai rankings in generating controversy".
Since the Shanghai Jiao Tong and THE rankings made their debuts, they have been joined by others, including a world ranking from one of France's grandes écoles, MINES ParisTech, called the Professional Ranking of World Universities (in which the French do remarkably well). There is also a table from the Higher Education Evaluation and Accreditation Council of Taiwan based on research papers, and Russia's RatER, which has raised eyebrows for ranking Moscow State University as fifth in the world, ahead of Harvard University and the University of Cambridge. Then there is Spain's Webometrics Ranking of World Universities, which is based purely on an institution's "volume and visibility" online.
Even THE's old 2004-09 rankings-data supplier, QS, is planning its own effort, after THE decided to develop a new methodology and source all its rankings data from a new partner, Thomson Reuters.
In The Great Brain Race, Wildavsky says: "The biggest players in the university rankings game are Shanghai Jiao Tong and ... Times Higher Education, which have created widely followed worldwide rankings that have come to be the arbiters of how well universities are faring in the global pecking order."
But he adds: "They have garnered scorn and influence in roughly equal measure."
So what are the problems? When it comes to criticising the wide array of university rankings, Philip Altbach pulls few punches.
According to the director of the Center for International Higher Education at Boston College, who is also a member of THE's editorial board: "Many are complete nonsense - measuring the unmeasurable, using ridiculous methodologies to come up with rankings of academic institutions, schools and faculties (especially business schools), and in general trying to feed a seemingly insatiable appetite for answering the questions: 'How are we doing?'; 'Where should I go to school?'; or, perhaps, 'How can our institution best compete for prestige or market share?'"
The problems with THE's old QS rankings methodology have been well documented: some 50 per cent of the scores was based on the results of subjective opinion surveys (40 per cent from academics and 10 per cent from graduate employers), but response rates were low; teaching quality was judged solely through the proxy of staff-to-student ratios, worth 20 per cent of total scores; and citations data used to judge research quality were not normalised to take account of dramatically different citation and publication habits between disciplines, unfairly disadvantaging institutions strong in areas with low average citation levels.
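The weighting scheme described above lends itself to a simple worked example. The sketch below is illustrative only: the 40, 10 and 20 per cent weights are those cited in the text, the two 5 per cent international indicators are taken from the QS weights listed at the end of this article, and every indicator value is invented. It also assumes each indicator has already been normalised to a 0-100 scale.

```python
# Illustrative composite-score calculation for a weighted ranking
# methodology. Weights follow the old THE-QS scheme described in the
# text; indicator values are invented and assumed pre-normalised 0-100.

WEIGHTS = {
    "academic_survey": 0.40,       # subjective academic opinion survey
    "employer_survey": 0.10,       # subjective graduate-employer survey
    "staff_student_ratio": 0.20,   # proxy for teaching quality
    "citations_per_staff": 0.20,   # not subject-normalised in the old method
    "international_staff": 0.05,
    "international_students": 0.05,
}

def composite_score(indicators):
    """Weighted sum of normalised (0-100) indicator scores."""
    return sum(WEIGHTS[name] * indicators[name] for name in WEIGHTS)

example = {
    "academic_survey": 90.0,
    "employer_survey": 80.0,
    "staff_student_ratio": 70.0,
    "citations_per_staff": 60.0,
    "international_staff": 50.0,
    "international_students": 50.0,
}
print(round(composite_score(example), 1))  # 75.0
```

Because half the weight sits in the two opinion surveys, a small swing in reputational responses moves the composite further than an equivalent swing in any objective indicator - which is the nub of the criticism that follows.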
THE's use of subjective measures has been one of the most controversial elements in its rankings.
Research by Nicholas Bowman, postdoctoral research associate at the University of Notre Dame's Center for Social Concerns, and Michael Bastedo, associate professor of education at the University of Michigan's Center for the Study of Higher and Postsecondary Education, identified the "anchoring effect" at work in THE's old reputational survey.
They found that an institution that rated highly in the first year of the rankings fared significantly better on the reputational survey in the second year. They said this suggested that "the rankings drive reputation, and not the other way round".
Shanghai Jiao Tong's ARWU has also had its fair share of criticism. Should You Believe in the Shanghai Ranking?, a 2009 report by a trio of French academics led by Jean-Charles Billaut, a professor at the University of Tours, answers the question rather emphatically. It says that the criteria used are irrelevant and the aggregation methodology is flawed.
"The Shanghai ranking," it says, "in spite of the media coverage it receives, does not qualify as a useful and pertinent tool to discuss the 'quality' of academic institutions, let alone to guide the choice of students and families, or to promote reforms in higher education."
As well as methodological limitations, there is also the problem that both rankers and universities make mistakes with the data.
In a recent newspaper column, the vice-chancellor of Universiti Kebangsaan Malaysia, Sharifah Hapsah Shahabudin, recollects one of the most notorious errors. She describes how "the jubilation of one local university for being in the top 100 was rudely shattered when it transpired that its Chinese and Indian Malaysian students were classified as 'international'. When the mistake was rectified the following year, the ranking dropped dramatically."
More ominously, she speaks of the deliberate manipulation of rankings data by universities keen to raise their positions. "It is known that some institutions indiscriminately and rapidly recruit international faculty and students to boost the scores on 'internationalisation'," she writes. "This is playing the 'ranking game', which is detrimental to local talent."
This game is widely played. In the US, there was something of a furore last year when the news website Inside Higher Ed reported on a public confession made at the Association for Institutional Research conference.
Catherine Watt, a former institutional researcher at Clemson University in the US, described the steps her university had taken to improve its position in the domestic US News & World Report rankings. She said it had sought to "affect - I'm hesitating to use the word 'manipulate' - every possible indicator to the greatest extent possible".
She said Clemson had increased the proportion of its classes with fewer than 20 students by allowing large classes to grow even larger. "Two or three students here and there, what a difference it can make," she was reported to have said. "It's manipulation around the edges."
Research by Marny Scully, executive director of policy and analysis at the University of Toronto, has demonstrated how a student-to-staff ratio of anywhere from 6:1 to 39:1 can be achieved from the same figures, depending on their interpretation.
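Scully's finding can be made concrete with a toy calculation. All of the figures below are invented; the point is only that choices about who counts as a "student" and who counts as "staff" can, on their own, move the same institution between the extremes she reports.

```python
# Hypothetical head-counts for a single institution. The numbers are
# invented; only the counting rules differ between the two ratios.

students_headcount = 39_000   # every enrolled student, incl. part-time
students_fte = 24_000         # full-time-equivalent student load only
teaching_staff = 1_000        # staff with teaching duties only
all_academic_staff = 4_000    # incl. research-only and clinical staff

# Least flattering interpretation: all students against teaching staff.
print(students_headcount / teaching_staff)   # 39.0, i.e. "39:1"

# Most flattering interpretation: FTE students against all academic staff.
print(students_fte / all_academic_staff)     # 6.0, i.e. "6:1"
```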
For Malcolm Grant, provost of University College London, there is a danger that rankings, whatever indicators they use, have become so influential that they force institutions to act against their own best interests.
"There is pressure on vice-chancellors around the world to improve their position in the rankings: this is becoming a key performance indicator in some countries, but it is a false god."
Grant says that there is a lot of truth in Goodhart's law - that when a measure or indicator becomes a target, it ceases to be a good measure.
Might, for example, a financially constrained university employ an expensive Nobel laureate at the expense of several postdoctoral researchers in a field with more pressing needs in order to improve its ranking position? Might an Asian university force staff to publish in high-profile English-language journals when local needs would be better served through local research outlets?
Grant says: "Universities need to define what their own mission is and how they are going to achieve it, and then develop for themselves the indicators they need to benchmark themselves against others.
"There is a risk of driving a herd of institutions to ape the world-class comprehensive university when they should be much more differentiating about their own missions."
A report from the League of European Research Universities (Leru), published on 24 June, concurs. University Rankings: Diversity, Excellence and the European Initiative, by Geoffrey Boulton, a Fellow at the University of Edinburgh, states that "different universities fulfil different roles, which a single monotonic scale cannot capture".
It says that universities "tend to target a high score irrespective of whether the metrics are good proxies for the underlying value of the institution", and adds that rankings could even "undermine" institutional values.
Grant says: "The upside is that rankings have prompted an enormous amount of interest around the world in universities and in global comparisons. This has had a significantly positive impact that we should not decry.
"But if that goes too far - if too much weight is given to far too slender a methodological framework - then it is a problem. It is not that rankings are inherently evil, but they are compelled to carry far more political weight than they are capable of bearing."
One solution to growing concerns about the behavioural effects of rankings is the proposed development of "multi-dimensional" approaches, which would allow universities to be compared on a range of criteria, according to their size and mission.
James Campbell, an academic at Deakin University in Australia and a visiting researcher at Universiti Sains Malaysia's Centre for Policy Research and International Studies, notes in a recent article published in The Star, an English-language Malaysian newspaper: "The complexity of what different higher educational institutions offer to students, and the possibility that these institutions cater to students in different ways, is lost when we reduce this to a simple number in a rankings scale.
"The notion that one can have such a precise and accurate rendition of a higher educational institution's value with reference to a ranking would be amusing, if not for the fact that the consequence of believing in such a measure is so serious."
The European Commission's U-Multirank project, a system designed to reflect the diversity of global higher education, explains on its website that "defined rankings should include and compare similar and comparable programmes or institutions in terms of their missions and profiles".
The project, run by a consortium of seven European groups, including the Center for Higher Education Policy Studies at the University of Twente in the Netherlands, is at the pilot stage. It is collecting data in two subject areas (engineering and management) from about 150 institutions, with preliminary results expected late next year.
Boulton's Leru report welcomes the idea of U-Multirank as an "antidote to single monotonic lists" and as a legitimate way to explore "the potential to mitigate the problems of other systems".
But even Leru, some of whose members are helping with the project, raises concerns about U-Multirank.
Its report says that the scheme "suffered from imprecise proxies and the profound difficulty of finding comparable data between countries". It warns that the "temptation" will be to "require ever more burdensome detail", promoting further "the idea of the university as merely a source of modular products currently in vogue".
And for Grant, there is another danger. "I would resist the U-Multirank", he says, "because once a ranking system is funded by government, it has an official status and is bound to have an even more distorting influence."
Official or not, league tables are regularly consulted by politicians and policymakers, which makes it crucial that they be as cogent, coherent, accessible and honest as possible. That is certainly the aim of THE's overhaul (see box, page 36).
"We believe that the changes will make the rankings more sophisticated, rigorous and accurate," says Ann Mroz, editor of THE. "But we also will ensure that they are transparent and come with clear and open health warnings - making it clear that we are using indicators, not measures, and highlighting areas where compromises and judgement calls have been made."
The effort to improve rankings is timely, for it is clear that despite their flaws, they can have positive effects, too.
A 2009 report from the US Institute for Higher Education Policy found that rankings can "prompt change in areas that directly improve student-learning experiences" and "foster collaboration, such as research partnerships, student and faculty exchange programmes".
They even "encourage institutions to move beyond their internal conversations to participate in broader national and international discussions".
Wildavsky observes: "There is no question that rankings have many flaws, but as they are improved I think they will become increasingly useful to students, to universities and to policymakers.
"Rankings have already encouraged a culture of greater transparency, along with more attention to external benchmarks, at universities that in the past have tended to be too insular."
Wildavsky says that "a proliferation of global rankings is a healthy thing, particularly if they're transparent and based on sound data.
"Of course, there will probably be more silly rankings as well - maybe we'll have a global version of The Princeton Review's 'top party schools' list. I'm sure that would be a highly competitive category."
HIGH DEFINITION: TIMES HIGHER EDUCATION FINE-TUNES ITS RANKINGS
Everyone producing rankings of global higher education has a duty to ensure that their work is as transparent and as rigorous as possible - and to be open about the limitations of what they do, says Ann Mroz, editor of Times Higher Education.
"We have had a thorough, open and honest look at the limitations of our rankings," she says. "We listened to the critics and decided to act. We have been frank about what we felt was not good enough about the old rankings, and we have consulted widely with the global university community to come up with a better, larger and more balanced set of indicators."
THE has confirmed a number of moves to improve the "rigour, balance, sophistication and transparency" of its new World University Rankings, which are due out this autumn. Most eye-catchingly, it is sticking with a reputational survey, but has invested heavily in improving the measure's data while at the same time reducing its prominence in the rankings.
THE's new data provider, Thomson Reuters, enlisted professional polling firm Ipsos MediaCT to conduct an invitation-only survey of experienced academics. They were targeted to be representative of both the global spread of subject disciplines and the global geography of higher education. The 2010 survey received 13,388 responses - four times as many as were achieved in any single year under the previous survey - across all regions and disciplines.
The plan is for the new rankings to be built on 13 indicators, more than double the six used in the THE-QS regime. The aim is to reduce the reliance on (and potential for manipulation of) any single indicator.
The 13 individual categories will be grouped to create four overall indicators, representing the core of a university's activities, to produce the final ranking score. The core aspects that will be assessed are: research; economic activity and innovation; international diversity; and a broad "institutional indicator" including data on teaching reputation, institutional income and staff and student numbers.
Research indicators are likely to include: academic papers (scaled for the size of the institution); paper citation impact (normalised by subject); income (scaled); income from public sources and industry; and the results of a reputational survey for research.
The economic activity/innovation indicators will for the first year use a figure for research income from industry (scaled against staff numbers).
International diversity will be measured by a ratio of international-to-domestic students and international-to-domestic staff.
The institutional indicators are likely to include: undergraduate entrants (scaled against the numbers of academic staff); PhDs against undergraduate degrees awarded; PhDs awarded (scaled for institutional size); institutional income (scaled); and the results of the reputational survey for teaching.
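The subject normalisation mentioned for citation impact can be sketched in miniature. The field baselines below are invented, and real systems average over field, year and document type, but the principle is the same: a paper's citations are judged against the norm for its own discipline rather than in raw counts.

```python
# Minimal sketch of subject-normalised citation impact. A paper's
# citations are divided by the (invented) world-average count for its
# field, so low-citation fields such as mathematics are not penalised.

FIELD_BASELINE = {
    "molecular_biology": 25.0,   # hypothetical world-average citations
    "mathematics": 3.0,
}

def normalised_impact(papers):
    """Mean ratio of each paper's citations to its field's average."""
    ratios = [cites / FIELD_BASELINE[field] for field, cites in papers]
    return sum(ratios) / len(ratios)

# Raw counts differ by a factor of eight, but both papers sit exactly
# at their field's world average, so the normalised score is 1.0.
papers = [("molecular_biology", 25.0), ("mathematics", 3.0)]
print(normalised_impact(papers))   # 1.0
```

Without this step, the mathematics paper would appear to perform at roughly an eighth of the biology paper's level - precisely the distortion that critics identified in the old citations measure.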
"This is the start of our efforts to make these rankings more sophisticated," Mroz says. "It is very important that we are transparent in whatever we do and that we make clear when indicators, rather than measures, are being used.
"We are not seeking to pull the wool over anyone's eyes: we will be clear about where we have made compromises and judgement calls.
"We hope that people will agree that we have made a decent first attempt at improving the rankings, but we will publish and discuss any criticisms and constructive advice for future iterations."
THE KEY PLAYERS
Academic Ranking of World Universities
Produced by Shanghai Jiao Tong University
The ARWU examines all universities whose faculty include Nobel laureates or winners of the Fields Medal for mathematics, plus researchers who have published papers in Nature or Science or whose work is frequently cited. It also looks at the overall number of academic papers at universities around the world indexed in the Science Citation Index, the Social Sciences Citation Index and the Arts and Humanities Citation Index, all owned by Thomson Reuters.
Times Higher Education World University Rankings
Produced by Times Higher Education (powered by Thomson Reuters data)
The methodology of the new rankings is still out for consultation with key figures in the academic world. The proposal is to use 13 indicators, grouped to create four broad markers representing universities' core activities, to produce the final ranking scores. The core aspects that will be assessed are: research; economic activity and innovation; international diversity; and a broad "institutional indicator" including data on teaching reputation, institutional income and staff and student numbers.
Webometrics Ranking of World Universities
Produced by Cybermetrics Lab, part of Spain's Consejo Superior de Investigaciones Científicas
The organisation says: "Using quantitative methods, the Cybermetrics Lab has designed and applied indicators that allow us to measure scientific activity on the web."
The ranking assesses factors such as the size of a university's web presence (based on the number of pages recovered from search engines such as Google) and the online visibility of the institution (based on the number of inbound external links).
QS World University Rankings
Produced by Quacquarelli Symonds
QS, former data supplier to the Times Higher Education-QS World University Rankings, confirmed that it would continue to produce a ranking after THE decided in 2009 to develop a new methodology with data from Thomson Reuters.
QS will use six indicators: academic reputation (40 per cent), an employer survey (10 per cent), citations (20 per cent), staff-student ratio (20 per cent), and the proportions of international staff and international students on campus (5 per cent each).