How to turn the league tables

March 8, 1996

To celebrate International Women's Day today, The Times published a "league table" listing the 100 most powerful women in the world. While the criteria used for ranking potential candidates (political power, financial power and personal influence) were perhaps contentious, the results were hardly earth-shattering. It proved difficult to find even 100 women who could be judged powerful on a worldwide scale.

Not surprisingly, the list was dominated by politicians and administrators; only two women scientists made it, and no technological high-flyer emerged. Publication of this sort of information may be interesting to the feminist sisterhood and may help sell a few newspapers, but it is unlikely to change matters.

However, the use and impact of other types of league table are more controversial. This is especially true in education. At a recent Society for Research into Higher Education discussion, John O'Leary, education editor of The Times, explained and defended the publication of his analyses of the universities' comparative performance in a guide for students.

The reaction of the mainly academic audience was predictable: sniffy criticisms of the methodology, both the criteria used and the aggregation of the results into over-simplistic and misleading rankings, together with a general rubbishing of the idea that a Times guide might make a difference to actual choices.

The basic issue - how to make information accessible to a variety of different interests - deserves serious consideration. Universities are not good at communicating to those outside what they are doing and how well they are doing it. As one speaker suggested, league tables are useful in a sporting environment, where competition under well-established and stable rules produces a hierarchy of divisions based on relative performance. In higher education the usefulness of The Times's composite league tables, as opposed to the disaggregated versions published by The THES, is questionable. Universities differ substantially in their missions, the size and makeup of their student populations and their resource bases. Producing simple summations of performance against common criteria such as A-level scores and first-class degrees for institutions as diverse as Oxbridge and the universities of Durham and Derby is more likely to mislead and confuse than to enlighten potential punters.

The usefulness of comparative information about universities' performance should not be dismissed, however. At issue here is the method, not the principle of its provision. Universities have a moral obligation to explain themselves to those who pay for and use their services. The challenge lies in the execution. A successful student guide implies recognising the complexity of the student population (growing numbers of students are mature and not mobile), the diversity of their needs and the fact that choices are rarely rational and often constrained.

The best way of ensuring that the information needs of the various interested groups are being met is to ask them. Comparisons are trickier, although the barrier to ranking institutions by "families" is political rather than methodological. As the football league tables demonstrate, such rankings provide a powerful incentive to improve and excel. The thirst for such tables may be a peculiarly British phenomenon, but it is unlikely to be stemmed by an attitude of lofty dismissal: arguably, we should practise the arts of the media manipulators and publish the authorised version.

Diana Green is pro vice-chancellor, University of Central England.
