There are two major statistical problems in constructing league tables of research excellence using the research assessment exercise results (THES, December 14).
The first is in assigning numerical values to the ordinal RAE grades so that averages may be calculated for each institution. The pragmatic and near-universal practice of assigning the numerical values 1 to 7 to the RAE grades appears to be largely uncontentious.
The second problem is in allowing for variations between institutions in the proportion of research-active staff. One approach is to assume that those not selected by their institutions are not research active and to assign them an RAE score of zero. Including this category in the calculations gives a measure of overall research excellence for each institution.
The only argument we can see against this approach is that it might be too harsh as some of those not selected for assessment were omitted for strategic reasons and are actually research active. On the other hand, it is hard to justify a score above zero for staff not declared research active and not subjected to the rigorous RAE assessment process.
The approach adopted by The THES is to construct a league table of excellence solely on RAE grades, ignoring the wide variation in the proportion of staff declared research active. The problem with this approach is that the resulting table does not rank institutional research excellence unless one makes the implausible assumption that staff omitted have, on average for each institution, the same level of research excellence as those selected. For example, an institution in which 90 per cent of staff were submitted as research active, all of whom were in grade 5 departments, would be ranked the same as an institution in which just 10 per cent of staff were submitted, all of whom were also in grade 5 departments.
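The arithmetic behind this example can be sketched briefly. The snippet below is illustrative only, using the hypothetical figures from the example above (90 per cent and 10 per cent of staff submitted, all in grade 5 departments); the function name and structure are ours, not The THES's methodology.

```python
def mean_rae_score(submitted_fraction, grade, include_omitted):
    """Average RAE score per member of staff.

    If include_omitted is True, staff not submitted count as zero
    (the 'overall excellence' measure described above); otherwise
    the average is taken over submitted staff only (a submitted-only
    average, as in a table based solely on RAE grades).
    """
    if include_omitted:
        # Omitted staff contribute a score of zero to the average.
        return submitted_fraction * grade
    # All submitted staff share the same grade in this example.
    return grade

for frac in (0.9, 0.1):
    print(frac,
          mean_rae_score(frac, 5, include_omitted=False),
          mean_rae_score(frac, 5, include_omitted=True))
```

On a submitted-only average, both institutions score 5 and so are ranked equal; once omitted staff are counted as zero, they score 4.5 and 0.5 respectively.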
Does The THES really believe that these institutions should have the same rank in a league table of excellence?