This is why we publish the World University Rankings

Phil Baty sets out why the World University Rankings are here to stay – and why that's a good thing

February 8, 2017

“There is no world department of education,” says Lydia Snover, director of institutional research at the Massachusetts Institute of Technology. But Times Higher Education, she believes, is helping to fill that gap: “They are doing a real service to universities by developing definitions and data that can be used for comparison and understanding.”

This is the true purpose and the enduring legacy of the THE World University Rankings.

Of course the rankings bring great insights into the strengths and shifting fortunes of individual research-led universities. We assess universities’ performance with the most comprehensive and balanced ranking in the world, using 13 performance indicators covering all their key missions (teaching, research, knowledge transfer and international outlook).

The results, a vital resource to students and their families as well as to academics and university administrators and governments across the world, help to attract almost 30 million people to our website each year, and as they make headlines around the world they touch hundreds of millions more individuals.

But amid the annual media circus around who is up and who is down, and beneath the often tedious, ad infinitum hand-wringing about methodological limitations and the challenges of any attempt to reduce complex universities to a series of numbers, the single most important aspect of THE’s global rankings is often lost: the fact that we are building the world’s largest, richest database of the world’s very best universities.

Let me be clear: there is no such thing as a perfect university ranking. There is no “correct” outcome as there is no single model of excellence in higher education, and every ranking is based on the available, comparable data, and is built on the subjective judgement (over indicators and weightings) of its compilers. THE developed its current methodology based on more than a decade of experience in rankings, after more than a year of open consultation and with the detailed expert input of more than 50 leading figures across the world, and we will continue to refine and improve the ranking.

But while some in the sector continue to get excited about the latest supposed revelations about the limitations of global rankings, Times Higher Education is quietly getting on with a hugely ambitious project to build an extraordinary and truly unique global resource.

THE is now in the third year of an annual process to collect comprehensive data, under bespoke, clear and globally harmonised definitions, on an ever-widening range of universities across the world – in direct partnership with the institutions themselves.

Last year, working individually with each institution, our data team gathered comprehensive institutional data from 1,313 research-led institutions – gathering many tens of thousands of individual data points covering university staff and student numbers and profiles (including gender and national/international status) and financial data (including total income, research income and industry income), all broken down as much as possible into eight broad subject areas.

The data were combined with about 250,000 data points from more than 20,000 responses to two rounds of our annual Academic Reputation Survey, and an analysis (by Elsevier) of 56 million citations to 11.9 million research publications, including more than half a million books and book chapters, to develop the 2016-17 THE World University Rankings, and derivatives including the THE 150 Under 50, the Asia University Rankings and the Emerging Economies University Rankings.

Data collection for the 2017-18 World University Rankings portfolio is currently under way – as is the 2017 Academic Reputation Survey – and we are confident of expanding the range and depth of our data yet further.

But the database does not just fuel THE’s range of published rankings. It is now the basis of a range of online analytical tools – DataPoints – which almost 100 universities around the world (including MIT) are now using to help them benchmark their performance against a group of peers, across a wide range of performance metrics, including those used to create the global rankings.

Institutions and academics could continue the endless, backward-looking debate about the rankers’ choice of metrics and metric weightings – or they could move forward and choose their own. They can tailor the underlying rankings data to suit their own needs and missions, and to inform their own strategic priorities.

Since their foundation in 2004, the THE World University Rankings have evolved far beyond the simple, controversial and monolithic ranked lists of universities. Online, universities can be ranked separately against five pillars of activities, and they are profiled against a range of additional contextual data. And in our DataPoints tools – where the focus is on profiling and benchmarking, not ranking – deeper, richer comparisons are available.

THE has moved well beyond the inherent limitations of rankings to offer, as MIT’s Lydia Snover says, new, data-led insights to deepen our collective understanding of the dynamic world of global higher education and research. 

Phil Baty is editor of the THE World University Rankings.
 

Data collection for the 2017-18 rankings is now under way. Please note:

We can only include you in THE’s global rankings (the THE World University Rankings, Asia University Rankings, Latin America University Rankings and Emerging Economies University Rankings) if you submit and sign off data through our secure online data portal.

If you would like to submit your institution to the database and be considered for inclusion in THE’s range of global rankings, please email: profilerankings@timeshighereducation.com

Universities are eligible for inclusion in the 2017-18 THE World University Rankings if they: teach undergraduates; publish more than 1,000 research papers (indexed by Scopus) over a five-year period (between 2012 and 2016); and have a broad range of activity (no more than 80 per cent of activity exclusively in any single subject area).

Data collection for 2017-18 ends on 30 March 2017.


Reader's comments (5)

Phil Baty may regard concerns about the methodology and pernicious impacts of university ranking as “tedious … hand-wringing” and “backward-looking” but I hope he will forgive me for begging to differ. These are serious issues that demand constant attention. And while he may be right to emphasise that the utility of their data-gathering is in providing a spectrum of benchmark indicators rather than overall ranking, every year when these tables are released, the headline stories are about the “overall performance”, an arbitrary aggregate of scores and opinions of individual institutions presented at a level of precision (three significant figures) for which no meaningful justification has ever been articulated. I propose to continue to look broadly – backwards, forwards and all around – at the implications of the activities of university rankers, and to continue this debate. I'm sure Phil is up for it, despite what he writes above! It is the least I can do for colleagues beset by the pressure that numbers endlessly exert to supplant rather than inform judgement.
Good points by scurry. The false precision of these rankings undermines their credibility. Estimates of uncertainties would be expected.
Thanks for your thoughtful contribution (as ever) Stephen. My intention, of course, is not to shut down debate around the role of rankings and their uses and abuses, but to seek to move the conversations forward to explore the powerful, helpful contributions they can make by creating unique new globally comparable data sets and by putting the data in the hands of the user to allow bespoke analyses. I personally will be attending at least 20 detailed face-to-face data "masterclasses" across the world this calendar year with our university stakeholders, to ensure we continue to open our activities and our data to the scrutiny of the university sector and to ensure that what we do continues to add value.
I actually don't have a problem with numerical data being gathered and compiled with regard to every dimension relevant to the academic condition, a set of vital indicators, if you will. And I understand that the data gathering techniques are never perfect and are always subject to improvement. However, what is more disturbing is the endless packaging and repackaging of these data into ever greater number of league tables that salami slice the data in various ways. I imagine that financial considerations drive the THE in this direction. But that's open to an entirely different set of ideologically based objections about which indicators are combined to produce some composite score, etc. One suspects that the hands of the potential clients for these tables are all over the process. Moreover, if you concoct enough of these league tables, everyone will be able to consider themselves a winner at some point!
Steve, let me be absolutely clear: your suggestion that "the hands of the potential clients for these tables are all over the process" is absolutely false and without any foundation. Our integrity is everything and you will have seen, no doubt, that we brought in PwC last year to carry out a full independent audit of our data handling and our calculations for the rankings - precisely because we recognise how much weight is placed on ranking results by government and university governing bodies around the world. Their full report is available via the methodology section of our rankings website. In terms of the "endless packaging and repackaging" I'd suggest that's a good thing. One of the major criticisms of global rankings is that they promote the idea of a single model of excellence and encourage uniformity towards one, predominantly US model of the research university. The proliferation of rankings helps celebrate diversity and to recognise context. For example our new US College Ranking with the Wall Street Journal is focussed on teaching - using a unique student engagement survey of over 100,000 current US students and focussing heavily on graduate outcomes, it presents a very different picture of US universities compared to the research and prestige-focused World University Rankings. We're proud to reflect more of the diversity of global higher education with a growing range of different rankings.

