Change rules to end game-play

September 29, 2006

Which of the following two systems is fairer? First, one in which an institution with a department of 100 academics, 99 of whom are worthy of a 1* research rating and one of a 5* rating, chooses to submit just the one outstanding person, and thus ends up with a 5* rating; meanwhile, another department of 50 staff rated 3* and 50 rated 5* submits all its staff, ending up with a 4* rating. Or, second, a system in which all staff must be submitted, so the first department ends up with a 1* (I assume) and the second with a 4*.

The fact that submission strategy alone can cause so much variation in outcome suggests that the research assessment exercise is deeply flawed. How can we trust ratings when universities can choose to play such games? Even if institutions simply wish to maximise future research-based income, how can they make these sorts of judgments when they do not know what the funding criteria are going to be?

The current RAE is said to minimise game-playing, yet anyone who believes this is clearly detached from reality. Or perhaps it's all just a cunning ploy to make us embrace metrics.

Trevor Harley
Dundee University
