The essential elements in our world-leading formula
Underpinning the World University Rankings 2013-2014 is a sophisticated exercise in information-gathering and analysis: here we detail the criteria used to assess the global academy's greatest universities
The Times Higher Education World University Rankings are the only global university performance tables to judge research-led universities across all their core missions - teaching, research, knowledge transfer and international outlook.
We employ 13 carefully calibrated performance indicators to provide the most comprehensive and balanced comparisons, which are trusted by students, academics, university leaders, industry and governments.
The methodology for the 2013-2014 World University Rankings is identical to that used since 2011-2012, offering a year-on-year comparison based on true performance rather than methodological change.
Our 13 performance indicators are grouped into five areas (a short sketch of how the weightings combine follows the list below):
- Teaching: the learning environment (worth 30 per cent of the overall ranking score)
- Research: volume, income and reputation (worth 30 per cent)
- Citations: research influence (worth 30 per cent)
- Industry income: innovation (worth 2.5 per cent)
- International outlook: staff, students and research (worth 7.5 per cent).
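To make the weighting scheme concrete, the Python sketch below combines category scores on a common 0-100 scale into a single overall result using the weightings listed above. The institution and its category scores are invented for illustration; only the weightings come from the list.

```python
# Category weightings from the list above (illustrative combination only).
WEIGHTS = {
    "teaching": 0.30,
    "research": 0.30,
    "citations": 0.30,
    "industry_income": 0.025,
    "international_outlook": 0.075,
}

def overall_score(category_scores: dict) -> float:
    """Weighted sum of the five category scores (each on a 0-100 scale)."""
    return sum(WEIGHTS[category] * score for category, score in category_scores.items())

# Hypothetical category scores, not real ranking data.
example = {
    "teaching": 80.0,
    "research": 75.0,
    "citations": 90.0,
    "industry_income": 50.0,
    "international_outlook": 70.0,
}
print(f"{overall_score(example):.1f}")  # 80.0
```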
Universities are excluded from the Times Higher Education World University Rankings if they do not teach undergraduates; if they teach only a single narrow subject; or if their research output amounted to fewer than 1,000 articles between 2008 and 2012 (200 a year).
In some exceptional cases, institutions that are below the 200-paper threshold are included if they have a particular focus on disciplines with generally low publication volumes, such as engineering or the arts and humanities.
Further exceptions to the threshold are made for the six specialist subject tables.
To calculate the overall rankings, "Z-scores" were created for all data sets except for the results of the academic reputation survey.
The calculation of Z-scores standardises the different data types on a common scale and allows fair comparisons between different types of data - essential when combining diverse information into a single ranking.
Each data point is given a score based on its distance from the mean average of the entire data set, where the scale is the standard deviation of the data set.
The Z-score is then turned into a "cumulative probability score" to arrive at the final totals.
If University X has a cumulative probability score of 98, for example, then a random institution from the same data distribution will fall below the institution 98 per cent of the time.
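The sketch below shows the standardisation step in Python: each value is converted to a Z-score (distance from the data set's mean, in units of its standard deviation) and then to a cumulative probability score. It assumes the cumulative probability is read from the standard normal distribution via the error function; the exact transformation used for the tables is not spelled out here, so treat this as illustrative.

```python
import math
from statistics import mean, stdev

def z_scores(values):
    """Distance of each data point from the data set's mean,
    expressed in units of the data set's standard deviation."""
    mu, sigma = mean(values), stdev(values)
    return [(v - mu) / sigma for v in values]

def cumulative_probability(z):
    """Assumed mapping: the probability that a random draw from a standard
    normal distribution falls below z, expressed as a percentage."""
    return 100 * 0.5 * (1 + math.erf(z / math.sqrt(2)))

# Hypothetical indicator values for five institutions.
research_income_per_staff = [120.0, 95.0, 240.0, 60.0, 180.0]
for value, z in zip(research_income_per_staff, z_scores(research_income_per_staff)):
    print(value, round(cumulative_probability(z), 1))
```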
For the results of the reputation survey, the data are highly skewed in favour of a small number of institutions at the top of the rankings, so last year we added an exponential component to increase differentiation between institutions lower down the scale, a method we have retained for the 2013-2014 tables.
Institutions provide and sign off their institutional data for use in the rankings.
On the rare occasions when a particular data point is missing - which affects only low-weighted indicators such as industrial income - we enter a low estimate between the average value of the indicators and the lowest value reported: the 25th percentile of the other indicators.
By doing this, we avoid penalising an institution too harshly with a "zero" value for data that it overlooks or does not provide, but we do not reward it for withholding them.
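The sketch below illustrates the principle of such a conservative estimate, taking the 25th percentile of the values that other institutions reported for the same indicator; it is an illustration of the idea described above, not the precise procedure used in the rankings.

```python
def low_estimate(reported_values):
    """Illustrative 25th-percentile estimate for a missing data point:
    low enough not to reward non-submission, but above a punitive zero."""
    ordered = sorted(reported_values)
    # Simple nearest-rank 25th percentile.
    index = max(0, int(round(0.25 * (len(ordered) - 1))))
    return ordered[index]

# Hypothetical industry-income figures, with one institution's value missing.
reported = [5.2, 11.0, 3.8, 19.5, 7.4, 2.9, 8.1]
print(low_estimate(reported))  # 5.2 - falls between the minimum and the mean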
International outlook: Staff, students and research (7.5%)
This category looks at diversity on campus and to what degree academics collaborate with international colleagues on research projects - both signs of how global an institution is in its outlook.
The ability of a university to attract undergraduates and postgraduates from all over the planet is key to its success on the world stage: this factor is measured by the ratio of international to domestic students and is worth 2.5 per cent of the overall score.
The top universities also compete for the best faculty from around the globe. So in this category we adopt a 2.5 per cent weighting for the ratio of international to domestic staff.
In the third international indicator, we calculate the proportion of a university's total research journal publications that have at least one international co-author and reward higher volumes.
This indicator, which is also worth 2.5 per cent, is normalised to account for a university's subject mix and uses the same five-year window as the "Citations: research influence" category.
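A brief Python sketch of the three international measures described above, each carrying 2.5 per cent of the overall score; the headcounts and publication counts are invented for a hypothetical institution, and no subject normalisation is shown.

```python
def ratio(international: int, domestic: int) -> float:
    """Ratio of international to domestic headcount."""
    return international / domestic

def international_coauthorship_share(papers) -> float:
    """Share of publications with at least one international co-author."""
    flagged = sum(1 for p in papers if p["has_international_coauthor"])
    return flagged / len(papers)

# Hypothetical institution.
students = ratio(international=4_500, domestic=18_000)   # 0.25
staff = ratio(international=600, domestic=2_400)          # 0.25
papers = ([{"has_international_coauthor": True}] * 620
          + [{"has_international_coauthor": False}] * 880)
print(students, staff, round(international_coauthorship_share(papers), 2))  # 0.25 0.25 0.41
```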
Research: Volume, income, reputation (30%)
This category is made up of three indicators. The most prominent, given a weighting of 18 per cent, looks at a university's reputation for research excellence among its peers, based on the 10,000-plus responses to our annual academic reputation survey.
This category also looks at university research income, scaled against staff numbers and normalised for purchasing-power parity.
This is a controversial indicator because it can be influenced by national policy and economic circumstances.
But income is crucial to the development of world-class research, and because much of it is subject to competition and judged by peer review, our experts suggested that it was a valid measure.
This indicator is fully normalised to take account of each university's distinct subject profile, reflecting the fact that research grants in science subjects are often bigger than those awarded for the highest-quality social science, arts and humanities research. It is given a weighting of 6 per cent.
The research environment category also includes a simple measure of research productivity - research output scaled against staff numbers.
We count the number of papers published in the academic journals indexed by Thomson Reuters per academic, scaled for a university's total size and also normalised for subject. This gives an idea of an institution's ability to get papers published in quality peer-reviewed journals.
This indicator is worth 6 per cent overall.
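The two scaled measures in this category can be illustrated with a short sketch; the income figure, the purchasing-power-parity conversion factor and the subject-average used for normalisation are all invented for a hypothetical institution.

```python
def income_per_staff(research_income, academic_staff, ppp_factor):
    """Research income per academic, adjusted by an assumed purchasing-power-parity
    conversion factor that puts local-currency income on a common scale."""
    return (research_income * ppp_factor) / academic_staff

def normalised_papers_per_staff(papers, academic_staff, subject_average):
    """Papers per academic, expressed relative to an assumed subject-mix average."""
    return (papers / academic_staff) / subject_average

# Hypothetical institution: figures are illustrative only.
print(income_per_staff(research_income=300_000_000, academic_staff=2_500, ppp_factor=0.8))   # 96000.0
print(normalised_papers_per_staff(papers=6_000, academic_staff=2_500, subject_average=2.0))  # 1.2
```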
Citations: Research influence (30%)
Our research influence indicator is the flagship. Weighted at 30 per cent of the overall score, it is the single most influential of the 13 indicators, and looks at the role of universities in spreading new knowledge and ideas.
We examine research influence by capturing the number of times a university's published work is cited by scholars globally. This year, our data supplier Thomson Reuters examined more than 50 million citations to 6 million journal articles, published over five years. The data are drawn from the 12,000 academic journals indexed by Thomson Reuters' Web of Science database and include all indexed journals published between 2007 and 2011.
Citations to these papers made in the six years from 2007 to 2012 are also collected.
The citations help show us how much each university is contributing to the sum of human knowledge: they tell us whose research has stood out, has been picked up and built on by other scholars and, most importantly, has been shared around the global scholarly community to push further the boundaries of our collective understanding, irrespective of discipline.
The data are fully normalised to reflect variations in citation volume between different subject areas. This means that institutions with high levels of research activity in subjects with traditionally high citation counts do not gain an unfair advantage.
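A minimal sketch of the field-normalisation idea described above: each paper's citation count is compared with a typical citation rate for papers in the same subject, so high-citation fields do not dominate. The subject baselines and paper counts below are invented; this illustrates the principle rather than Thomson Reuters' exact calculation.

```python
# Hypothetical world-average citations per paper, by subject.
WORLD_BASELINE = {"clinical medicine": 12.0, "history": 1.5, "physics": 8.0}

def normalised_citation_impact(papers) -> float:
    """Average of each paper's citations divided by its subject baseline.
    A value of 1.0 means citation impact in line with the world average."""
    ratios = [p["citations"] / WORLD_BASELINE[p["subject"]] for p in papers]
    return sum(ratios) / len(ratios)

papers = [
    {"subject": "clinical medicine", "citations": 24},   # twice the baseline
    {"subject": "history", "citations": 3},              # twice the baseline
    {"subject": "physics", "citations": 4},              # half the baseline
]
print(normalised_citation_impact(papers))  # 1.5
```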
We exclude from the rankings any institution that publishes fewer than 200 papers a year to ensure that we have enough data to make statistically valid comparisons.
Industry income: Innovation (2.5%)
A university's ability to help industry with innovations, inventions and consultancy has become a core mission of the contemporary global academy.
This category seeks to capture such "knowledge transfer" by looking at how much research income an institution earns from industry, scaled against the number of academic staff it employs.
"Industry income: innovation" suggests the extent to which businesses are willing to pay for research and a university's ability to attract funding in the competitive commercial marketplace - useful indicators of institutional quality.
The category is worth 2.5 per cent of the overall ranking score.
Teaching: The learning environment (30%)
This category employs five separate performance indicators designed to provide a clear sense of the teaching and learning environment of each institution from both the student and the academic perspective.
The dominant indicator here uses the results of the world's largest invitation-only academic reputation survey.
Thomson Reuters carried out its latest reputation survey - a worldwide poll of experienced scholars - in spring 2013.
It examined the perceived prestige of institutions in both research and teaching. There were just over 10,000 responses, statistically representative of global higher education's geographical and subject mix.
The results of the survey with regard to teaching make up 15 per cent of the overall rankings score.
The teaching and learning category also employs a staff-to-student ratio - staff numbers measured against an institution's total student numbers - as a simple (and admittedly crude) proxy for teaching quality.
The proxy suggests that where there is a healthy ratio of students to staff, the former will get the personal attention they require from the institution's faculty.
This measure is worth 4.5 per cent of the overall ranking score.
The teaching category also examines the ratio of doctoral to bachelor's degrees awarded by each institution.
We believe that institutions with a high density of research students are more knowledge-intensive and that the presence of an active postgraduate community is a marker of a research-led teaching environment valued by undergraduates and postgraduates alike.
The doctorate-to-bachelor's ratio is worth 2.25 per cent of the overall ranking score.
The teaching category also uses data on the number of doctorates awarded by an institution, scaled against its size as measured by the number of academic staff it employs.
As well as giving a sense of how committed an institution is to nurturing the next generation of academics, a high proportion of postgraduate research students also suggests the provision of teaching at the highest level, which is attractive to graduates and effective at developing them.
Undergraduates also tend to value working in a rich environment that includes postgraduates. This indicator is normalised to take account of a university's unique subject mix, reflecting the different volume of doctoral awards in different disciplines, and makes up 6 per cent of overall scores.
The final indicator in the category is a simple measure of institutional income scaled against academic staff numbers.
This figure, adjusted for purchasing-power parity so that all nations may compete on a level playing field, indicates the general status of an institution and gives a broad sense of the infrastructure and facilities available to students and staff. This measure is worth 2.25 per cent overall.
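Taken together, the five teaching indicators account for the category's 30 per cent share of the overall score; the short sketch below simply records the sub-weightings quoted above and checks that they add up.

```python
# Sub-weightings of the teaching category, as percentages of the overall score.
TEACHING_WEIGHTS = {
    "reputation survey (teaching)": 15.0,
    "staff-to-student ratio": 4.5,
    "doctorate-to-bachelor's ratio": 2.25,
    "doctorates awarded per academic": 6.0,
    "institutional income per academic": 2.25,
}
assert sum(TEACHING_WEIGHTS.values()) == 30.0  # the category's overall share
```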
The subject tables employ the same range of 13 performance indicators used in the overall World University Rankings, brought together with scores provided under five categories:
- Teaching: the learning environment
- Research: volume, income and reputation
- Citations: research influence
- International outlook: staff, students and research
- Industry income: innovation.
Here, the overall methodology is carefully recalibrated for each subject, with the weightings changed to best suit the individual fields. In particular, those given to the research indicators have been altered to fit more closely the research culture in each subject, reflecting different publication habits: in the arts and humanities, for instance, where the range of outputs extends well beyond peer-reviewed journals, we give less weight to paper citations.
Accordingly, the weight given to “citations: research influence” is halved from 30 per cent in the overall rankings to just 15 per cent for the arts and humanities.
More weight is given to other research indicators, including the academic reputation survey.
For social sciences, where there is also less faith in the strength of citations alone as an indicator of research excellence, the measure’s weighting is reduced to 25 per cent.
By the same token, in those subjects where the vast majority of research outputs come through journal articles and where there are high levels of confidence in the strength of citations data, we have increased the weighting given to the research influence measure (up to 35 per cent for the physical and life sciences and for the clinical, pre-clinical and health tables).
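As a compact summary of the recalibration described above, the sketch below records the citations weighting quoted for each of the subject tables mentioned in the text; subjects whose weightings are not stated here are omitted.

```python
# "Citations: research influence" weighting in the subject tables, as quoted above;
# the overall World University Rankings use 30 per cent.
CITATIONS_WEIGHT_BY_SUBJECT = {
    "arts and humanities": 15.0,
    "social sciences": 25.0,
    "physical sciences": 35.0,
    "life sciences": 35.0,
    "clinical, pre-clinical and health": 35.0,
}
for subject, weight in CITATIONS_WEIGHT_BY_SUBJECT.items():
    print(f"{subject}: {weight} per cent")
```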
A breakdown of the methodology for each subject is provided on each subject page.
No institution can be included in the overall World University Rankings unless it has published a minimum of 200 research papers a year over the five years we examine.
But for the six subject tables, the threshold drops to 100 papers a year for subjects that generate a high volume of publications and 50 a year in subjects such as social sciences where the volume tends to be lower.
Although we apply some editorial discretion, we generally expect an institution to have at least 10 per cent of its staff working in the relevant discipline in order to include it in the subject table.
The majority of institutions in Thomson Reuters’ Global Institutional Profiles database, which fuels the rankings, provide detailed subject-level information. In rare cases where such data are not supplied, institutions are either excluded or public sources are used to inform estimates.