The new Times Higher Education World University Rankings have produced a fantastic level of debate, and some confusion about what the tables mean.
Malcolm Grant, provost of University College London, notes that for all rankings "the basic problem is that there is no definition of the ideal university". So, let me try to make clear what we were doing and how these tables differ.
Rankings summarise complex information for readers who are not themselves experts. They do not directly capture "quality" in teaching, research and the academic environment but use proxies instead: "indicators", not metrics. Our picture is always more or less fuzzy.
In Shanghai Jiao Tong's rankings, the key criteria are about concentrations of research excellence, such as Nobel prizes and numbers of highly cited papers. CWTS, at Leiden, uses structured analyses of publications and citations across subject areas, focusing on one leading aspect of academic activity.
THE wants a more rounded picture of the ideal university: one that draws on indicators related to the wider academic environment, and one that reflects the argument that quantity is nothing without quality.
What factors to include? Many agencies have tried to assess teaching quality and graduate quality; no country has found a satisfactory solution. Degree comparisons are weak across institutions, subjects and even years. So we are not going to satisfy that need.
Money matters. A university that has $10,000 for every student will be in a position to sustain a richer environment than one with $4,000. Resourcing is a legitimate area of interest for prospective employees, students and parents. Is a university positioned to renew its facilities, and to pump-prime cutting-edge research areas? Resource factors drive ranking changes, and those who are cash-strapped suffer in the comparison.
We looked at the relationship between income, capacity and outputs. Size alone does not drive academic achievement. Specialist institutions can support an excellent undergraduate experience and produce world-class research.
How best to scale academic activity? Staff complement shows us how thinly the academic resource is spread across teaching. Similarly, we can ask about research income compared to staff, and about the ratio between publications and total academic and research staff.
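The scaling logic above can be sketched as simple per-capita ratios. This is a hypothetical illustration only: the figures and indicator names are invented, not THE's actual data or definitions.

```python
# Hypothetical sketch of scaling indicators by staff numbers.
# All figures and names are invented; THE's real indicators are richer.

def per_capita_indicators(students, academic_staff, research_income, papers):
    """Return simple ratio indicators that scale activity by capacity."""
    return {
        # How thinly the teaching resource is spread: students per academic.
        "student_staff_ratio": students / academic_staff,
        # Research income attracted per member of staff.
        "income_per_staff": research_income / academic_staff,
        # Publication output per member of staff.
        "papers_per_staff": papers / academic_staff,
    }

# A large and a small institution can look similar once scaled.
big = per_capita_indicators(students=30000, academic_staff=3000,
                            research_income=600_000_000, papers=9000)
small = per_capita_indicators(students=5000, academic_staff=500,
                              research_income=100_000_000, papers=1500)
print(big["papers_per_staff"], small["papers_per_staff"])  # both 3.0
```

Scaled this way, a specialist institution a sixth of the size of a multi-faculty giant can register the same productivity per member of staff.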
There are many such indicators, and we used some of them, alongside other data, in the basket that feeds the World University Rankings. This drives ranking changes: some large institutions with a great deal of good research turn out to sit at a similar "relative" level to medium-sized, equally productive institutions that are very good at what they do but were previously less visible in rankings driven by research concentration. Scaling and resources have tended to shake up the conventional view, but we did not stop with these innovations.
Larger institutions tend to be diverse. Being active in everything increases their overall presence and helps to drive their reputation. Diversity is potentially good for interdisciplinarity but we must make sure that specialist institutions do not suffer by comparison.
Some disciplines are better funded. Publication rates also differ, as do citation cultures. This has not always been accounted for in previous rankings. Grouping our data into subject areas only partly succeeded, because institutions did not have time to gather information. Consequently, our correction factors could not be universally applied.
That's a problem. We drew attention to excellent technology-related institutions compared to the big multi-versities, but did not use enough information to reveal the relative performance of the specialists in the social sciences and in the arts and humanities. We are still building these data and will certainly have a richer picture in future.
Disciplinary diversity is an important factor, as is international diversity. How would you show the emerging excellence of a really good university in a less well known country such as Indonesia? This is where we would be most controversial, and most at risk, in using the logic of field-normalisation to add a small weighting in favour of relatively good institutions in countries with small research communities. Some may feel that we got that one only partially right.
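The field-normalisation logic mentioned above can be sketched roughly as follows. This is a simplified illustration with invented world-average figures; THE's actual weighting scheme is more sophisticated and is not reproduced here.

```python
# Simplified sketch of field-normalised citation impact: each paper's
# citations are divided by the world average for its field, so well-cited
# work in a low-citation field counts as much as well-cited work in a
# high-citation field. The averages below are invented for illustration.

WORLD_AVERAGE_CITATIONS = {"medicine": 12.0, "mathematics": 3.0}

def normalised_impact(papers):
    """papers: list of (field, citations) pairs.
    Return the mean ratio of citations to the field's world average."""
    ratios = [cites / WORLD_AVERAGE_CITATIONS[field] for field, cites in papers]
    return sum(ratios) / len(ratios)

# A maths-heavy institution is no longer penalised for its field's lower
# citation culture: 6 citations in mathematics (twice the world average)
# scores the same as 24 in medicine.
print(normalised_impact([("mathematics", 6.0)]))  # 2.0
print(normalised_impact([("medicine", 24.0)]))    # 2.0
```

The same logic, applied at country rather than field level, is what lets a relatively good institution in a small research community register against the global giants.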
So, these substantial changes to methodology have contributed to changes in the overall patterns of where institutions appear in the World University Rankings. All these data needed to be combined, and to do that we used an indexing system, not the raw values.
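Combining indicators through an index rather than raw values might look something like the z-score sketch below. THE's actual indexing method is not specified here, so treat the details (weights, figures, standardisation choice) as assumptions for illustration.

```python
# Illustrative sketch: index each indicator before combining, so raw
# scales (dollars versus citation ratios) cannot swamp one another.
# Weights and figures are invented.
import statistics

def z_scores(values):
    """Standardise raw indicator values to a common index scale."""
    mean = statistics.mean(values)
    sd = statistics.pstdev(values)
    return [(v - mean) / sd for v in values]

def combined_score(indicator_columns, weights):
    """Weight and sum the indexed indicators for each institution."""
    indexed = [z_scores(col) for col in indicator_columns]
    n = len(indicator_columns[0])
    return [sum(w * col[i] for w, col in zip(weights, indexed))
            for i in range(n)]

# Two indicators on wildly different raw scales, three institutions:
income = [900_000_000, 300_000_000, 150_000_000]   # dollars
citation_impact = [1.2, 2.4, 1.8]                  # normalised ratio
scores = combined_score([income, citation_impact], weights=[0.5, 0.5])
# On raw values the dollar figures would dominate entirely; indexed,
# the smaller high-impact institution can come out on top.
```

The point of indexing is exactly this: each indicator contributes on a comparable scale, whatever its raw units.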
This is not a ranking of research achievement. Universities are not at the top of the table because they have more money and produce more papers. Some top universities do have a concentration of excellence, but they also achieve a high standard across their portfolio and make good use of resources at their disposal.
We have consulted widely. We have released more information about our methodology and much more specific data than any other compiler. We are providing feedback to each institution that engaged with us in data development. Where there are concerns, let us know. Where there are unexpected outcomes, engage with us in thinking through any changes to data, indicators and weightings. Perhaps the debate will be as valuable as the outcome.
THE WORLD VIEW
Explore the World University Rankings in depth and find more news and analysis at http://www.timeshighereducation.co.uk/world-university-rankings/
Manipulate the rankings to create a personalised view of global higher education by downloading our iPhone app from the Apple store.
Email your comments, questions and other feedback to email@example.com.