Nancy Rothwell

May 12, 2006

This country appears to have a strange and growing fascination with "best-of" lists. Barely a week goes by without a television programme describing the top something or other - films, sit-coms, actors, comedians, even advertisements. They are compulsive viewing, if extremely irritating.

You have to wait until the middle of the night to find out what was voted top, then you feel cheated because it was obvious anyway. Even more annoying is to find that the top three are completely different from your own choice, so they obviously got it wrong.

These TV leagues are normally voted for by the viewers. This makes it harder to question their validity than if they were the product of some mysterious formula. Of course, an important caveat is that those who spend their time voting on such matters may constitute a somewhat strange selection of the population. Apparently, I'm on the Good Housekeeping "top 100" list of inspirational women, as voted for by 16-year-old girls. Dawn French came first. But I don't know any 16-year-old girls who read Good Housekeeping, let alone vote on such matters, so the list's validity is a mystery to me (although I still aspire to move up the table).

One of my favourite lists, never made public but discussed with enthusiasm within some academic circles, was the "top ten male scientists in the UK". The criteria for this much esteemed list had nothing to do with citations, discoveries or other such accolades (though they had to be established scientists to make the preliminary round). Rather it was the sexual appeal of the candidates, as judged by a group of female scientists, that determined the table. The views of many female scientists were taken into account, though I, of course, made no contribution. It would be interesting, some years on, to compare the scientific success of those listed (who are, I think, blissfully unaware of their acclaim), with those who never made it, and indeed to review their eligibility. There may well be a similar list (or many such lists) for women, but I doubt if anyone would be brave enough to admit it.

The Times Higher has a particular fondness for lists and league tables, which inevitably leads to great debate and controversy. Everyone complains about some aspect of them unless they are clearly factual.

Vice-chancellors' salaries are generally indisputable, but there are the "fringe benefits" to argue about. In spite of the moaning, we all still look and compare how our own institution has fared, and of course we select and use the most favourable leagues.

Like TV leagues, those that are voted on in academe tend to be less controversial than others, though the interpretation can still be open to question. The student "satisfaction poll" on the best university last September showed that the Open University had a clear lead. Much was made of the fact that some of our premier higher education institutions, which appear at the top of many other league tables, slipped woefully low. What didn't feature in The Times Higher commentary was the fact that the lowest ranking institution attained a score of 3.5 out of 5 - that is a "first class" mark by most standards - and there was little between the very top and the very bottom. This suggests that, within the rather small sample of voters, British universities fared remarkably well. Of course, saying that all did remarkably well is not as interesting as saying that world leaders in other respects weren't in the top ten.

The debate has already begun about the criteria for league tables to describe relative success at the next research assessment exercise, though it will be more than two years before the results are known. A recent article in Research Fortnight argued that an "Olympic medals" table would be favourite, showing the number of gold (4*), silver (3*) and bronze (2*) outputs achieved. It has also been suggested that some (with too much time to spare) will compare the RAE data with the Higher Education Statistics Agency returns to assess the percentage of staff returned, thus achieving a "real" estimate of the overall quality of each higher education institution. It's not at all clear how this is going to be possible because the categories for Hesa vary significantly from the RAE units of assessment, but I am sure someone will try. Bets are on as to how many ways the same data can be recalculated. Interpretation gets even more contentious with international tables that use a variety of different measures. So it should be possible for every university to select the best one for them and use this in their public relations.

Maybe The Times Higher should solicit readers' views on the "top ten league tables"? The UK male scientists one would get my vote. But would it ever get published?

Dame Nancy Rothwell is MRC research professor in the faculty of life sciences at Manchester University.
