FEW people in British universities are comfortable with league tables published by newspapers. As they proliferate, anxiety has sharpened over the newspapers' role in compiling such guides and their influence on the choices of would-be students.
Accountability and transparency mean that the data collected by the funding councils and other agencies cannot remain obscure management tools for specialist administrators. Nor can those data now be dismissed as inadequate or unreliable, since the Higher Education Statistics Agency collects information on a uniform basis across the whole sector. Such data can and should be used to inform the public.
Furthermore, while there is suspicion of and distaste for such blunt instruments being wielded on the tender plant of Britain's university system, the increasingly competitive marketers employed by institutions pounce eagerly on any league table that shows their institution in a good light.
Most disliked are the super-leagues that lump a whole range of indicators together to produce a single ranking. In doing this, newspapers are accused of perpetuating an outdated vision of Britain's universities by skewing their super-leagues so that the collegiate universities with strong research roles soar to the top while universities, particularly the former polytechnics, whose strength has always been in teaching, gravitate to the bottom.
Some add a charge of deliberate conspiracy - an attempt to weaken those lower-ranked institutions' claim to public funds. Others despair at the stranglehold of the elite institutions on the country's opinion formers.
As Leslie Wagner, vice-chancellor of Leeds Metropolitan University, argued at a Committee of Vice-Chancellors and Principals' conference on league tables this week, both groups see league tables as a "corrosive" force, failing those universities with a more diverse role and misleading would-be students and their parents as to the value of different institutions.
There is no conspiracy - only movement into a commercial vacuum by newspapers aware of consumers' thirst for accessible pointers. But dislike of these single tables is understandable. As Diana Warwick, chief executive of the CVCP, said, there are objections to the selection of indicators and the weightings attached to them.
In particular the influence of research pervades even those indicators that are supposed to represent diversity. The data act as a mirror to a system that systematically values research over teaching or other worthy activities.
Consolidated league tables do not reflect diversity adequately. There is, for example, a strong case for a value-added table. On its own it would turn existing hierarchies on their head, though it would have little impact on a consolidated table compiled from a dozen criteria.
There is growing acceptance that newspapers will compile rankings whatever universities say or do, and that they will range from the serious (as in this week's THES ranking on women professors) to the frivolous (Redmole's "totty" factor). There is also recognition that tuition fees may alter the criteria by which students select their courses.
Students may or may not value a university's commitment to its local economy more highly than its research ranking or the amount it spends on library and computing facilities or, in a new indicator included for the first time in the 1998 Times/THES tables, on student services. It is their choice. They will be best served by the existence of many rankings.
Universities would do better to work on producing tables that demonstrate their strengths rather than trying to suppress the kind of comparisons to which all public service providers are increasingly subject. Hierarchies of esteem have always been with us. Publicly-available rankings using authenticated data are infinitely preferable to the old informal rankings perpetuated by word of mouth, based on inaccurate and out-of-date information, ignorance and downright prejudice.