Measured response

June 29, 2007

Yes, there may be problems with the indicators suggested by the Performance Indicators Review Group ("Performance indicators reprieved", June 22). Measures of research activity, widening participation, retention, student employment and so on are limited in value. Their publication and misuse may lead to unintended responses by universities.

But they are the best figures available; thus their use in isolation and with appropriate caveats is perfectly sensible. The big problem lies in how others treat these numbers. The practice of creating league tables by aggregating these very different measures using arbitrary weightings is pseudo-quantification of the worst kind. How can one find an aggregate of A-level points and the staff-to-student ratio, for example? Mixing input figures (such as student entry qualifications) with process figures (such as spending) and outcome figures (such as subsequent employment) is illogical. Trying to create a value-added model with unmoderated acceptances and degree awards is futile. Using statistical techniques on these population figures, such as confidence intervals designed to illustrate sampling variation only, shows a frightening lack of understanding. Of course, the results are largely predictable because, in the end, the tables are dominated by unmeasured factors such as age of institution, benefactions and, ultimately, geography.
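To see why arbitrary weightings matter, consider a minimal sketch (the figures, institutions and weighting schemes below are entirely hypothetical, invented for illustration): two equally arbitrary weightings of the same normalised indicators produce opposite "league tables".

```python
# Hypothetical, normalised indicator scores (0-1) for three invented institutions.
institutions = {
    "A": {"entry_points": 0.90, "staff_student": 0.40, "employment": 0.60},
    "B": {"entry_points": 0.55, "staff_student": 0.85, "employment": 0.70},
    "C": {"entry_points": 0.70, "staff_student": 0.60, "employment": 0.80},
}

def rank(weights):
    """Aggregate the indicators with the given weights; sort best-first."""
    scores = {
        name: sum(weights[k] * v for k, v in vals.items())
        for name, vals in institutions.items()
    }
    return sorted(scores, key=scores.get, reverse=True)

# Two equally "defensible" (that is, equally arbitrary) weighting schemes:
print(rank({"entry_points": 0.6, "staff_student": 0.2, "employment": 0.2}))
# -> ['A', 'C', 'B']
print(rank({"entry_points": 0.2, "staff_student": 0.2, "employment": 0.6}))
# -> ['C', 'B', 'A']
```

Institution A moves from first to last without a single underlying figure changing; only the compiler's choice of weights differs.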

Stephen Gorard
York University
