Redrawing ranking rules for clarity, reliability and sense

The first round table on improving the rankings methodology sets out the problems to be resolved. Rebecca Attwood reports

December 10, 2009

Governments are swayed by them, universities fall out over them and vice-chancellors have even lost their jobs because of them: there is no doubt that Times Higher Education's World University Rankings have a huge global influence.

A key message from the first THE round table on the future of the rankings was that these far-reaching consequences must be treated with the utmost seriousness.

The event, attended by 15 of the sector's most senior figures, including vice-chancellors and policy experts, was held as the magazine starts to develop a more rigorous and transparent methodology with its new rankings partner, Thomson Reuters.

Rule number one is that rankings must always be clear about what they are measuring.

"You know when you are in a world-class university," said one participant in the discussion, held under Chatham House rules, where participants are not identified in order to ensure frank and open debate. "It looks like one, it feels like one and people behave like it's one."

But for those designing league tables, the difficulty is defining what makes a university "world class" and deciding on suitable measures.

Should the focus be on research, or should the rankings try to assess teaching, or excellence "in the round"?

A research-only ranking would alienate many, warned one contributor, proposing that a better aim might be to assess an institution's "intellectual capital".

While it was strongly desirable to include measures of teaching quality, there was widespread concern about how to find reliable indicators in this area - and rule number two was the need for verifiable data.

"If you do focus on research, you lose a lot of the data problems," said one attendee, who argued that the ranking otherwise ran the risk of being seen as "interesting but unreliable".

THE's old methodology used student-staff ratios (SSRs) as a proxy for teaching quality, a measure that, another argued, "tells us nothing". And whereas in the UK the Higher Education Statistics Agency collects and publishes the data, in other countries SSRs are self-reported.

This had led to a "war" between two unnamed overseas universities, the round table heard, with each accusing the other of submitting falsified information.

Surveys of student satisfaction may not always translate well into other cultures, but one participant said it would be a shame to exclude the UK's high satisfaction levels: "It would be terribly sad, just because it is difficult, to lose the fact that UK students tend to feel very positive about what happens."

Another said that excellence in research was as good a proxy for satisfaction with teaching as any.

There were also arguments that, with no agreement on what makes a university world class, it was wrong to attempt a single ranking. Another suggestion was that the rankings should be published alongside more contextual data, which could help shift the emphasis away from ranking towards a "research tool".

It was also crucial to be clear about the type of institutions included. "If you are not careful, you could end up with a conservatoire as the best university in the world," one participant said, suggesting different tables for different specialist institutions.

'Goldilocks' principle

Rule number three was the need for transparency when it came to the data and methodology.

Under the old method, "institutions could not access their raw data" and this meant that universities wanting to track improvements "couldn't see how it happened".

However, too much transparency could lead to "game-playing", one participant warned, pointing to a country that had leapt up the table after introducing subsidies for international students: the old methodology used the proportion of overseas students in an institution as an indicator.

But the "most arbitrary" aspect of any ranking was the weighting given to the different elements, argued another.

"If you think something is a good measure, who is to say whether it should be 5 per cent or 25 per cent?" the speaker asked, stressing that the rationale behind final decisions had to be considered very carefully.

The old methodology was heavily weighted towards an academic peer-review survey that, one participant warned, had a "tiny" response rate. There were concerns that this introduced a geographical bias.

According to one attendee, the problem was that "wherever you are, you like the institutions that are near you".

"How can a good university in Indonesia shine?" asked another.

There were also worries about the number of institutions academics should be asked to rank in any opinion survey.

Ask about too few and there was a risk of the same institutions recurring, calling into question the data for universities further down the table; ask about too many and academics' rankings could become less well informed and "very subjective". The danger that academics' judgments were themselves influenced by existing league tables was also raised.

One area of consensus was on citations. The old methodology took no account of subject mix when calculating the number of citations per member of academic staff, penalising institutions with a focus on the arts, humanities or social sciences, such as the London School of Economics.

In the words of one participant, the need to resolve this was a "no-brainer".
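
A minimal sketch of what such a correction could look like follows (the subject averages, papers and institution are invented, and this is not the method THE or Thomson Reuters has adopted): each paper's citations are compared with a world average for its own field, so an institution strong in low-citation disciplines is not penalised simply because of its subject mix.

    # Invented world-average citations per paper, by subject.
    FIELD_AVERAGE = {"medicine": 20.0, "physics": 12.0, "economics": 6.0, "history": 2.0}

    # Invented output of a hypothetical social-science-focused institution:
    # (subject, citations received) for each paper, plus its staff count.
    papers = [("economics", 9), ("economics", 4), ("history", 3), ("history", 1)]
    staff_count = 2

    # Raw citations per member of staff, ignoring subject mix
    # (the approach the round table criticised).
    raw_per_staff = sum(citations for _, citations in papers) / staff_count

    # Field-normalised impact: each paper's citations relative to the world
    # average in its own subject (1.0 means exactly at world average), averaged.
    normalised = sum(citations / FIELD_AVERAGE[subject]
                     for subject, citations in papers) / len(papers)

    print(f"raw citations per staff member: {raw_per_staff:.1f}")   # 8.5
    print(f"field-normalised impact:        {normalised:.2f}")      # 1.04

On the raw count this institution looks weak next to a medicine-heavy rival, yet its normalised impact is slightly above the world average for its own fields.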

Phil Baty, deputy editor of THE and editor of the World University Rankings, who chaired the meeting, said: "Our concern is to produce the most rigorous and transparent rankings, which properly reflect the importance given to them by academics, institutions and governments. I am delighted that we saw such a high level of engagement with our plans from some of the leading figures in the sector.

"We are listening carefully as we move towards the new and improved methodology."

rebecca.attwood@tsleducation.com
