Much time and effort has gone into generating the performance indicators published this week (pages i-xii). The result is a compromise between a government eager to extend accountability by league table from schools, social services, hospitals and the police to higher education, and universities and colleges that are all too aware of the dangers of simplistic measurement. The absence so far of any employability indicator, so dear to the Department for Education and Employment's heart, is one indication of this battle.
The outcome is a set of indicators so complicated that they will, as the devisers doubtless hope, make it almost impossible for anyone to use them to create unitary league tables, the bane of a university system that is becoming increasingly diverse.
So what are these performance indicators for?
First, they provide a rich mine of information that will help managers see how their institution's performance stacks up against comparable institutions. The benchmarking system means, for example, that high-prestige universities setting high entry grades are not damned for failing to take in as many students from under-privileged backgrounds as universities with more open entry. And those with more open entry are not expected to graduate so high a proportion.
This, in true 1066 And All That style, is trial by peers who will understand. While it seems to let some of the classier places off the hook in terms of improving participation, it also helps to mitigate the self-cancelling effect of some of these indicators: enrolling more poorly qualified students means a lower graduation benchmark. As with school league tables, publishing the figures is likely to have the effect of levering up the average. It is also likely to trigger similar charges of lowering standards to improve scores.
Second, the indicators provide the interested public with information on what kind of place a university is: whether it is dominated by privately educated students; whether its state school intake is largely from higher social groups; what proportion of its students are older or part-time; what the chances of success are for those who enrol; how efficient its research training is.
Third, and more contentiously, the indicators will provide a basis for selective distribution of money earmarked for such purposes as increasing access for under-privileged students.
But, above all, these indicators, derived from 1996-98 figures, provide a basis for driving forward the government's agenda for universities. This raises the question of whether these are the right indicators by which to judge universities, and that in turn raises the question of what universities are for. The education department, whose indicators these in effect are, seems to see universities mainly as agents for social inclusion. Hence the emphasis on attracting and keeping students from under-represented groups and deprived neighbourhoods.
A government policy of opening up opportunities for those who have suffered earlier disadvantages enjoys widespread support in universities. Indeed there is much concern that the government's abolition of grants is having the opposite effect. But opening access to universities is a different matter from judging their overall performance by their success in delivering what is essentially a social agenda.
Universities are about advanced learning, scholarship and research. They are necessarily selective and meritocratic. The present set of indicators largely ignores these broader academic purposes. The indicators should be treated as what they are: a measure of how well universities are meeting the government's requirements. They should not be regarded as a measure of universities' general performance as centres of academic excellence.