Ben Wildavsky charts the rise and rise of university rankings, domestic and international
Why do we rank schools and universities – or sports teams and washing machines, for that matter? To answer this question, we need to go far, far back in human history – to the caveman era. That, at least, is the theory of one prominent ranker, veteran journalist Jay Mathews of The Washington Post. He is the father of the Newsweek and Washington Post rankings of US high schools, and has also written extensively on the university-industrial complex. By Mathews’ reckoning, we must never forget that Homo sapiens is a tribal primate. We love pecking orders that help us understand who is up, who is down and where we stand ourselves. This is why, somewhere in the mists of time, Rankings Man was born.
To be sure, it took many millennia before our innate rankings impulse could be applied to universities. It wasn’t until 1874 that hereditarian Francis Galton published English Men of Science: Their Nature and Nurture, which tallied the universities attended by more than 100 scientists – notable among them Oxbridge and a variety of “Scotch, Irish or London universities”.
Galton didn’t attempt an actual ordinal listing by quality, however. That first happened in 1910, when American James McKeen Cattell used a complex methodology (sound familiar?) to create a table showing how 20 US universities stacked up against one another based on the number of accomplished scientists they employed.
In the decades that followed, the sifting and sorting of universities continued in various forms. But the Big Bang of today’s rankings occurred in 1983 with the advent of US News & World Report’s “America’s Best Colleges” issue. Initially a straightforward reputational poll of college presidents, the magazine’s rankings soon became more data-driven, incorporating information on graduation rates, student qualifications and more. The US News guide became hugely popular, much to the consternation of college presidents and other critics.
Perhaps inevitably, many other publications capitalised on the rankings mania, including Forbes, The Wall Street Journal and Business Week. Niche marketing emerged, too, as with Sierra magazine’s “greenest” colleges list and Princeton Review’s annual “top party schools” roster.
Meanwhile, the rest of the world took notice. National-level league tables emerged in more than 40 countries, from Argentina to Pakistan. It was only a matter of time before the growing global university marketplace, featuring increasingly mobile students, professors and branch campuses, would lead to global academic rankings.
In 2003, Shanghai Jiao Tong University created the first closely watched worldwide league table, focused heavily on scientific research. The following year, the Times Higher Education Supplement launched its own rankings, featuring a much heavier reliance on reputational surveys. Before long, students and policymakers seeking cross-border college comparisons could consult everything from Spain’s Ranking Web of Universities – Webometrics to the Russian Global Universities Ranking.
Humans’ deep-seated preoccupation with pecking orders is, of course, no guarantee of agreement on how we decide who should be top dog. Like their US antecedents, global rankings efforts have been lambasted for everything from an excessive focus on research to an undue emphasis on reputation, from poor data quality to elitism. Some critics have even protested their low standing by creating counter-rankings: for example, the French engineering school Mines ParisTech undertook one such revisionist exercise, leading to the classic 2008 headline “French Do Well in French World Rankings”.
Today, one of the biggest league table controversies involves the European Union’s U-Multirank project. This new kid on the block aims to usher in an era of build-your-own rankings. It will collect data from a broad spectrum of universities in five areas – research, teaching and learning, international orientation, knowledge transfer and regional engagement – to capture the many dimensions that make them tick. Institutions will be sorted only by category, not overall; users can construct their own league tables based on the characteristics they care about most.
It is an ambitious and, in many ways, laudable endeavour. But critics, including the League of European Research Universities, say that it depends too much on data that are not reliable and cannot be properly compared across nations. Many believe the detractors are in fact worried that U-Multirank’s holistic approach will give a leg up to continental European universities and challenge the incumbent world-leading research institutions in the UK and the US.
So what does all this tell us about the past, present and future of university league tables? We know that rankings have become considerably more sophisticated in the century since the days of simply counting scientists. We know that they continue to be vulnerable to a range of criticisms, some legitimate. At the same time, one truth remains: rankings are not going away. In fact, they are spreading: even Barack Obama has proposed rating US universities according to access, affordability and student outcomes.
Slowly but surely, rankings are getting better. After all, today’s competition is not just for dominance between universities, but between the league tables themselves. Surely, Rankings Man would approve.
Ben Wildavsky is director of higher education studies at the Rockefeller Institute of Government, State University of New York, and policy professor at SUNY-Albany.