IMAGINE two university departments, each with six research-active staff. The first contains three researchers of international excellence and three who are young and full of promise. The second boasts three complete woodentops with no credible research to their name, and three worthy hacks squeezing out a little dull work of national quality. Using the conventional scoring system, the first has three individuals at 7 and three at 3, an average of 5. The second has three at 3 and three at 1, an average of 2.
How does the difference between 5 and 2 show up under RAE criteria? The answer is that it does not. The failing department gains its 2, and the brilliant department scores the same. Maybe the Higher Education Funding Council for England has good political reasons for lumping two departments together when their averages are so far apart. But can the THES have any excuse for multiplying RAE scores by numbers of staff and presenting the results as a measure of "overall performance"? I dare say there are other ways of looking at it, but it seems like dire innumeracy prostrated before cynical manipulation.
JONATHAN REE
Middlesex University, London