Which of the following two systems is fairer? First, one in which an institution with a department of 100 academics, 99 of whom are worthy of a 1* research rating and one of a 5* rating, chooses to submit only that one outstanding person and thus ends up with a 5* rating, while another department with 50 staff rated 3* and 50 rated 5* submits all its staff and ends up with a 4* rating. And second, a system in which all staff must be submitted, so that the first department ends up with a 1* (I assume) and the second with a 4*.
The fact that submission strategy alone can cause so much variation in outcome suggests that the research assessment exercise is deeply flawed. How can we trust ratings when universities can choose to play such games? Even if institutions simply wish to maximise future research-based income, how can they make these sorts of judgments when they do not know what the funding criteria are going to be?
The current RAE is said to minimise game-playing, but anyone who believes that is clearly detached from reality. Or perhaps it is all just a cunning ploy to make us embrace metrics.