A recent analysis of staff selected for inclusion in last year’s research excellence framework made predictably depressing reading.
According to the Higher Education Funding Council for England’s Selection of Staff for Inclusion in REF 2014 report, 51 per cent of the eligible pool of female staff were submitted to REF 2014, 3 percentage points higher than in the 2008 research assessment exercise, but well below the proportion of eligible men selected (67 per cent).
Selection rates among staff with a disability were lower than for staff with none, while black and Asian UK and non-European Union nationals were significantly less likely to be submitted than staff from other ethnic groups.
While these figures doubtless reflect wider failings regarding diversity and inclusion in the sector – which many universities, including my own, are actively attempting to address – they also shine a light on the problematic nature of staff selection for the REF. These problems persist despite efforts to make the 2014 criteria for inclusion fairer. Surely it is time to reconsider the rules that allow universities to be selective about the staff that they submit?
The current system incentivises “game playing” as institutions try to achieve – through what is, in essence, guesswork – an optimal balance between the volume of staff submitted and the quality of the group’s work. And despite universities having to publish “open and transparent” criteria for REF inclusion, the rules on impact appear to have exacerbated game playing. As Times Higher Education reported earlier this year, an analysis of the REF results by Tim Horne of Coventry University revealed that “the number of submissions containing staff numbers just below the threshold for an extra case study was far higher than would be expected statistically”.
Staff selection also adds unnecessary stress and can damage the careers of those who are not selected for submission, or who fear that they may not be. It is thus potentially divisive and corrosive of staff morale within academic units.
The process increases the administrative burden for universities and prevents a fully objective assessment of the quality of research being carried out in each institution by all REF-eligible research-active staff.
Furthermore, the published outcomes run a serious risk of misleading stakeholders and, in doing so, undermining the credibility of REF results. In particular, data about the “research environment” derived from the Higher Education Statistics Agency misrepresent a unit’s genuine degree of vitality. Important figures, such as research income and PhD student numbers, were presented to the REF panels as a Hesa total divided by the number of submitted full-time equivalent staff. However, that Hesa total represented the activity of both submitted and non-submitted staff. This allowed institutions to receive credit for students and grants even when their supervisors and investigators had not been submitted to the REF, and made some rather average units appear exceptional.
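To make the distortion concrete, here is a minimal sketch using invented figures (none of these numbers come from the article or from Hesa): a unit with 40 eligible staff submits only half of them, yet the whole unit’s research income is divided by the submitted headcount.

```python
# Illustrative only: invented figures showing how dividing a whole-unit
# Hesa total by submitted-staff FTE inflates per-capita metrics.
hesa_income = 4_000_000   # research income earned by all 40 eligible staff
eligible_fte = 40
submitted_fte = 20        # only half of the unit is submitted to the REF

reported = hesa_income / submitted_fte   # figure the REF panel would see
actual = hesa_income / eligible_fte      # income per eligible researcher

print(f"Reported per submitted FTE: £{reported:,.0f}")   # £200,000
print(f"Actual per eligible FTE:    £{actual:,.0f}")     # £100,000
```

On these assumed numbers, the panel sees a unit that appears twice as research-intensive as it really is, purely because the non-submitted half of the staff vanish from the denominator while their grants and students remain in the numerator.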
This is compounded by the presentation of the final results as quality profiles, a format that unnecessarily clouds the data available to prospective students. Consider two submissions to the same panel, each judged to have a similar quality profile, but one of which represents the submission of all an institution’s eligible staff, while the other represents only half of its staff. Presenting the research profiles of these two units as “equally excellent” and differing only in scale is simply misleading.
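The same point can be sketched with invented profiles (again, not figures from the article): two units judged to have an identical quality profile can differ sharply in the actual volume of excellent research they contain.

```python
# Illustrative only: invented data showing how identical quality profiles
# can conceal very different volumes of world-leading (4*) research.
profile = {"4*": 0.50, "3*": 0.50}   # same judged profile for both units

unit_a_fte = 40   # submits all 40 of its eligible staff
unit_b_fte = 20   # submits only half of its 40 eligible staff

fourstar_a = profile["4*"] * unit_a_fte   # 20 FTE of 4* work
fourstar_b = profile["4*"] * unit_b_fte   # 10 FTE of 4* work
print(fourstar_a, fourstar_b)   # 20.0 10.0
```

Presented as percentage profiles alone, the two submissions look interchangeable; presented as volumes, unit A contains twice as much world-leading research.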
In 2014, of course, Hesa did publish figures on the total number of REF-eligible full-time equivalent staff at each higher education institution, but it did so separately on the day of the REF results (meaning that THE could not include the figures in its initial print analysis) and apparently with no expectation from Hefce that those presenting the outcomes would consult them. In addition, it seems possible for institutions to indulge in further game playing on this front by moving, for Hesa data purposes, some non-submitted staff into units of assessment that the institution did not submit to the REF. This tactic makes an institution’s submission rate for a specific unit of assessment appear higher than it really is. Thus, even the inclusion of the percentage of research-active staff submitted to a unit of assessment is open to misleading manipulation.
What is the standard Hefce response to the idea of removing staff selection? That the role of the REF is to identify where excellent research is occurring, no matter how much “non-excellent” research there is. If that is the intention, it is unclear why presenting the results as percentage profiles, rather than as staff volumes at each quality level, is appropriate.
One does not need to be a critic of research assessment to feel that staff selection has become a process with conspicuous potential for discrimination – harming many individuals, and also the vitality of the research system as a whole. It is unclear whose interests a selective process serves, but clear that many are put at risk of harm. For simplicity and fairness, let us have a system in which staff submissions to the REF and Hesa data are comparable, and let us stop playing games with data and with careers.
David Price is vice-provost (research) at University College London.