Our 2020 World University Rankings are once again our largest yet. This is true both in terms of the number of universities included – just under 1,400 institutions – and the number of countries represented, with institutions from Brunei, Malta, Vietnam, Cuba and Montenegro entering the table for the first time.
As with last year, there are no changes to the underlying methodology of our World University Rankings. The five broad areas that we report on remain the same: teaching environment, research environment, citation impact, industry income and international outlook. This table retains its position as our flagship ranking with a focus on research-intensive universities across the world.
One thing that is not always obvious from the outside is the amount of work that our data team put into confirming figures with universities, and checking data against previous entries and other sources.
As always, we have worked hard with Elsevier to ensure that papers, journal articles, article reviews, conference proceedings, books and book chapters are correctly attributed to universities. These steps, combined with the fact that an increasing volume of research is being produced across the world, mean that more universities than ever are meeting our 1,000-paper threshold. It’s likely that this number will continue to grow over the coming years.
Elsevier continues to curate the Scopus dataset that we use for bibliometric measures. This has resulted in better matching of institutions to their research and in an enhanced list of suspended titles – ones that we no longer include in our calculations. The expanded number of suspended titles has resulted in a few universities dropping below our threshold for inclusion, but we believe that it is important that we reflect the quality as well as the quantity of publications when making our assessments.
On top of this, as part of our commitment to continually improving data quality, we are the only rankings system to have our tables verified by an external, independent and professional organisation. The team at PricewaterhouseCoopers have once more worked with us on this, ensuring that our results are as reliable as possible.
But while little has changed in regard to the creation of our flagship World University Rankings, we have made significant adjustments in other areas of our rankings portfolio.
University Impact Rankings
A major evolution for Times Higher Education this year was the launch of our innovative University Impact Rankings in April. This used the United Nations’ Sustainable Development Goals – the set of aspirations for how we can build a more equitable and sustainable world – as a framework for exploring the wider role of universities in society.
More than 550 universities agreed to participate in the first iteration of the rankings – where we explored 11 of the 17 SDGs.
As well as measuring how universities are making a direct impact on society through research, outreach and stewardship, we are hopeful that this new approach can spur institutions on in their efforts to improve their sustainability and can incentivise best practice. There are some encouraging signs that this may already be happening.
The design of the impact ranking differs significantly from our World University Rankings and our series of teaching-focused rankings. We chose to ask for evidence of behaviour, as well as raw numbers, and this has brought in a wealth of fantastic examples of best practice within institutions.
For next year, we will be expanding the ranking to cover all 17 SDGs and bringing in additional measures to look at the role of teaching in supporting the SDGs.
Our teaching rankings continue to be successful, now covering 19 countries and more than 1,400 institutions.
A core element of these rankings is listening to students, and we are pleased that in Europe we have reached more of them than ever before: over 120,000 students responded to our European Student Survey. In addition, we learned from the experience of our Japan University Rankings and included a metric on outbound student mobility, using data from the excellent Erasmus+ programme. This year’s Japan ranking included metrics on the number of students in various types of international exchange programmes and, for the second time, the number of courses taught in a language other than Japanese.
Meanwhile, in the US, we have listened to the debate on student debt with interest and have adapted our measurement of university outputs to reflect the median debt that students accrue during college.
Our plans for the future development of the World University Rankings, including the launch of a new methodology, have developed over the past year, and we will be releasing a firm framework for consultation with interested bodies this week at our World Academic Summit at ETH Zurich.
Over the coming year, we will listen to our stakeholders, and we plan to reveal the final definition of the new methodology at our World Academic Summit at the University of Toronto in September 2020. Data collection will then begin in the autumn, with the new version of the rankings earmarked for release in the second half of 2021.
Since opening the discussion last year, we have received a lot of positive feedback and we continue to welcome input, both on the methodology itself and our presentation of the rankings.
It is clear that there is a desire for us to minimise the number of changes to the data collection process. We also acknowledge that there will be a tendency for universities to want to move things in a direction that best reflects their own specific strengths.
So how have our thoughts progressed over the year?
We have already announced that we are exploring enhanced bibliometric measures for the future. The citations measure, which represents 30 per cent of the rankings, is currently derived from the so-called “snowball metric” of field-weighted citation impact (FWCI). This measure was developed as a result of collaboration between a number of leading universities and industry with the intention of making it possible to compare citation performance across research fields with very different publication traditions.
Although it has served us well, over the years we have made some adjustments to the way we use it: first to account for differences in the language of papers, and second to accommodate the challenge of papers with many authors (so-called kilo-author papers). Neither change has been ideal, so we have sought out other options.
We are now considering keeping the same basis for calculating the field-weighted citation impact of a paper but, rather than taking the average value across a university’s papers, taking the score at the 75th percentile. We cannot take the median because this is normally zero.
We believe that this will provide a more consistent understanding of citation performance, and will diminish some of the edge effects we see when individual articles have exceptionally high citation performances.
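To see why the mean can be distorted while the median is typically zero, consider a toy sketch in Python. The citation scores below are invented for illustration only – this is not THE’s actual calculation – but they mimic the heavy skew of real citation distributions, where most papers attract few or no citations and a handful attract very many:

```python
import statistics

# Hypothetical field-weighted citation scores for one university's papers.
# Real citation distributions are heavily skewed like this.
scores = [0, 0, 0, 0, 0, 0, 1, 2, 3, 150]

mean = statistics.mean(scores)      # pulled up by the single outlier
median = statistics.median(scores)  # zero, as for most universities


def percentile(values, p):
    """Score at the p-th percentile, using linear interpolation."""
    ordered = sorted(values)
    k = (len(ordered) - 1) * p / 100
    lo, hi = int(k), min(int(k) + 1, len(ordered) - 1)
    return ordered[lo] + (ordered[hi] - ordered[lo]) * (k - lo)


p75 = percentile(scores, 75)

print(mean)    # 15.6 -- dominated by the one 150-citation paper
print(median)  # 0 -- uninformative
print(p75)     # 1.75 -- reflects the bulk of the distribution
```

The 75th-percentile score sits within the main body of the distribution, so a single exceptionally cited paper can no longer move a university’s headline figure on its own.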
We have explored other citation-based metrics, informed extensively by our use of a wider set of measures in the impact rankings, including the proportion of papers in the top 10 per cent of publications and the number of downloads. Of these, I think it is reasonable to look at the proportion of most-cited publications as a candidate metric; if FWCI looks at the typical paper from a university, this examines the strongest.
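A “proportion of most-cited publications” metric could be sketched along the following lines. The function name, the toy data and the exact thresholding rule here are all assumptions for illustration – THE has not published a formal definition:

```python
def top_decile_share(university_scores, world_scores):
    """Fraction of a university's papers whose field-weighted citation
    score falls in the top 10 per cent of all papers worldwide."""
    ordered = sorted(world_scores)
    # Cut-off at the 90th percentile of the global distribution.
    cutoff = ordered[int(0.9 * (len(ordered) - 1))]
    hits = sum(1 for s in university_scores if s > cutoff)
    return hits / len(university_scores)


world = [0, 0, 0, 0, 0, 1, 1, 2, 5, 40]  # global scores (toy data)
uni = [0, 2, 6, 45]                      # one university's papers

share = top_decile_share(uni, world)
print(share)  # 0.5 -- two of the four papers clear the global cut-off
```

Where FWCI describes the typical paper, a share-of-top-decile measure rewards a university only for papers that clear a high global bar, which is why it examines the strongest output rather than the average.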
Another interesting feature of the impact rankings’ bibliometrics is their use of a keyword search to define the fields of publication. This cuts across the conventional, hierarchical definition of subjects and gives additional emphasis to cross-disciplinary research. Given the worldwide focus on sustainability there is a strong case for including this as a way of understanding how research-focused universities are contributing to research on the SDGs within the World University Rankings.
Do you have ideas about how we can improve our rankings? Send suggestions and questions to us at email@example.com
Duncan Ross is chief data officer at Times Higher Education