16 September 2010
Our six subject tables are now built on a sophisticated range of metrics, rather than opinion
The US dominance of our overall top 200 ranking is reflected in our six subject tables.
An American university sits at the top of every one of our top-50 subject rankings and one institution — Harvard University — tops four of the six tables.
The rankings indicate that Harvard is the best in the world in arts and humanities, social sciences, physical sciences and clinical, pre-clinical and health-related subjects. The California Institute of Technology takes top spot in the engineering and technology table, and the Massachusetts Institute of Technology is best for life sciences.
Steve Woodfield, senior researcher in higher education at the UK's Kingston University, says that the US dominance of the tables is not surprising. In addition to the sector's great wealth, history and extensive international academic links, he says, American institutions have also become very good at understanding their own strengths and developing them.
"Many US universities have sophisticated institutional research capability in which they collect data about their performance in strategic areas for the purpose of institutional improvement," he says. "They have benchmarked their performance on objective criteria and can make the necessary adjustments to improve their ratings."
The 2010-11 subject tables are more comprehensive and sophisticated than anything Times Higher Education has previously published.
The six broad subject areas we use are drawn from the 251 separate subject areas listed by Thomson Reuters' Web of Science citations database.
Under our previous rankings methodology, used between 2004 and 2009, the subject tables were based on just a single indicator — an opinion poll of academics, examining nothing but reputation.
Because the tables were based entirely on the subjective views of those who filled in a survey, they threw up some strange results, placing institutions with big names — but without the research strength to justify their reputations — too high up the tables.
Conversely, less well-known research powerhouses were unfairly penalised.
An important feature of the 2010-11 subject tables is that they use the same range of 13 performance indicators as the overall top 200 list. These indicators are brought together under the same five categories as the overall table: teaching — the learning environment; research — volume, income and reputation; citations — research influence; industry income — innovation; and international mix — staff and students.
But while we have used the same indicators, we have adjusted the weightings where appropriate to better suit the specific profiles of the subjects being judged.
For example, in the arts and humanities, citation counts are recognised as a less robust measure of research performance than they are in disciplines where journal publication is the most common form of research output and citations correlate strongly with quality.
So in the arts and humanities table, we have reduced the weighting given to the citations indicator from 32.5 per cent for the overall tables to just 17.5 per cent. Other indicators, where data are more robust, are given more weight — for example in the arts and humanities, more weight is given to the results of our reputational survey.
Weightings for citations are likewise reduced in the social sciences table and in engineering and technology, two further areas where citations are seen as a less robust measure of research performance.
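The weighting adjustments described above can be illustrated with a minimal sketch. Only the citations weightings (32.5 per cent for the overall tables, 17.5 per cent for arts and humanities) come from this article; the other category weights, the sample scores and the choice to reassign the freed-up weight to the research category are illustrative assumptions, not Times Higher Education's published methodology.

```python
def composite_score(scores, weights):
    """Weighted sum of category scores; weights must total 1.0."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(scores[c] * weights[c] for c in weights)

# Hypothetical category scores (0-100) for one institution.
scores = {
    "teaching": 80.0,
    "research": 75.0,
    "citations": 60.0,
    "industry_income": 50.0,
    "international_mix": 70.0,
}

# Overall-table weights: citations at 32.5 per cent (per the article);
# the four other weights are assumptions for illustration only.
overall = {
    "teaching": 0.30,
    "research": 0.30,
    "citations": 0.325,
    "industry_income": 0.025,
    "international_mix": 0.05,
}

# Arts and humanities: citations cut to 17.5 per cent, with the freed
# 15 percentage points reassigned here to the research category
# (which includes the reputational survey).
arts = dict(overall, citations=0.175, research=0.45)

print(round(composite_score(scores, overall), 2))
print(round(composite_score(scores, arts), 2))
```

With these invented numbers, the same institution scores differently under the two weighting schemes, which is the point of the adjustment: a subject's final ranking depends on how each category is weighted, not just on the raw indicator scores.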
As with the overall top 200 table, we have excluded graduate schools that teach no undergraduates, as well as institutions that published fewer than 50 research papers in 2008.
But additional exclusions from the subject tables have been made where it was felt that institutions had an insufficient volume of activity in the relevant subject area, based on the proportion of their overall activity in the area under review.
Where universities were unable to provide detailed data at the subject level on matters such as staff numbers or research income, we have substituted institution-level data for subject-level data.