John Gill asserts that in their research excellence framework (REF) preparations, “game-players” can “base the number of researchers submitted on the number of solid impact case studies they can detail - rather than, as was intended, the other way around” (Leader, 17 October). This may be how some higher education institutions are interpreting the rules, but if so, they are playing a high-stakes game for little reward.
All institutions have been required to produce for approval a detailed REF code of practice explaining the criteria they would employ in selecting staff for the exercise. The guidance on drafting the codes, from both the REF team at the Higher Education Funding Council for England and the Equality Challenge Unit, made it clear that staff could be excluded only on the grounds of quality, volume and fit with the unit of assessment in which their submission was proposed; Hefce wouldn’t have approved a code that said anything else.
It follows that if staff meet the criteria they cannot be excluded, even where this means the unit of assessment requires an additional case study, as there is no objective way to select one individual over another. Any university excluding submittable staff because they have insufficient impact case studies must be contravening its code and therefore risks being thrown out of the REF by its funding council.
And for what? To avoid an “unclassified” result for the missing case study? Impact accounts for 20 per cent of the overall result, and case studies for 80 per cent of the impact score, so for a unit of assessment with two case studies, each is worth 8 per cent of the overall score; for a unit requiring three, each is worth 5.33 per cent, and so on. It hardly seems worth the risk.
Anglia Ruskin University