Measured response

June 29, 2007

Yes, there may be problems with the indicators suggested by the Performance Indicators Review Group ("Performance indicators reprieved", June 22). Measures of research activity, widening participation, retention, student employment and so on are limited in value. Their publication and misuse may lead to unintended responses by universities.

But they are the best figures available, so using them singly, and with appropriate caveats, is perfectly sensible. The real problem lies in how others treat these numbers. The practice of creating league tables by aggregating these very different measures with arbitrary weightings is pseudo-quantification of the worst kind. How can one meaningfully aggregate A-level points and the staff-to-student ratio, for example? Mixing input figures (such as student entry qualifications) with process figures (such as spending) and outcome figures (such as subsequent employment) is illogical. Trying to build a value-added model from unmoderated acceptances and degree awards is futile. And applying confidence intervals, which are designed to quantify sampling variation, to figures that already cover the whole population shows a frightening lack of understanding. Of course, the results are largely predictable because, in the end, the tables are dominated by unmeasured factors such as age of institution, benefactions and, ultimately, geography.
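[Editor's illustration] A minimal sketch of the weighting problem described above. The institutions, indicator scores and weighting schemes here are entirely hypothetical, invented only to show that a weighted-sum league table can reverse its ranking when nothing changes except the arbitrary weights:

```python
# Two hypothetical universities scored on three normalised indicators
# (0-1 scale): entry qualifications (input), spend per student (process),
# and graduate employment (outcome). All figures are made up.
universities = {
    "Alpha": {"entry": 0.90, "spend": 0.40, "employment": 0.70},
    "Beta":  {"entry": 0.60, "spend": 0.85, "employment": 0.75},
}

def league_score(indicators, weights):
    """Weighted sum of indicator scores: the usual league-table recipe."""
    return sum(weights[k] * indicators[k] for k in weights)

# Two arbitrary weighting schemes, each summing to 1.
scheme_a = {"entry": 0.6, "spend": 0.2, "employment": 0.2}  # favours inputs
scheme_b = {"entry": 0.2, "spend": 0.5, "employment": 0.3}  # favours process

for name, weights in [("Scheme A", scheme_a), ("Scheme B", scheme_b)]:
    ranking = sorted(universities,
                     key=lambda u: league_score(universities[u], weights),
                     reverse=True)
    print(name, "ranking:", ranking)

# Scheme A ranks Alpha first (0.76 vs 0.68); Scheme B ranks Beta first
# (0.77 vs 0.59). The institutions are unchanged; only the weights moved.
```

Since there is no principled way to choose between such schemes, the published order tells us as much about the compiler's weights as about the universities.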

Stephen Gorard
York University
