The research assessment exercise has taken a few steps forward and a few steps back ("All work to be counted in RAE evaluation", October 12). In evaluating and rewarding the research performance of universities department by department, future RAEs (after 2008) will no longer assess only four selected papers from each of the researchers selected for inclusion. Instead, all papers by all departmental researchers will be assessed. That is a step forward.
Further steps forward are that the assessment will be in terms of objective metrics, not just panel review; that those metrics will be multiple rather than a single measure; and that the new system will apply at least to science, technology and engineering.
What is less welcome is that the new system may apply only to science, technology and engineering. And the proposal to consider only three metrics, picked a priori - prior research income, postgraduate numbers and the "impact factor" (the average number of citations to articles in the journal in which each article is published) - is most certainly a step backward.
Prior research income, if given too much weight, becomes a self-fulfilling prophecy and reduces the RAE to a multiplication factor on competitive research funding. The result would be that, instead of the current two autonomous components of the dual-support system, there would be only one: RCUK funding multiplied by the RAE metric rank, itself dominated by prior funding.
To counterbalance this, a rich spectrum of potential metrics - not only three - needs to be tested in the 2008 RAE.
These include citation metrics for each article itself (rather than just its journal's average), download metrics, citation and download growth curve metrics, co-citation metrics, hub/authority metrics, endogamy/interdisciplinarity metrics, book citation metrics, web link metrics, comment tag metrics, course-pack metrics, and many more.
All these metrics should be tested and validated against the panel rankings in RAE 2008 in a multiple regression equation. The selection and weighting of each metric should be adjusted, discipline by discipline, rationally and empirically rather than a priori as is being proposed now.
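The empirical calibration described above can be sketched as an ordinary least squares fit of candidate metrics against panel rankings. This is a minimal illustration only: the department data, metric names and panel scores below are invented for demonstration, not drawn from any actual RAE exercise.

```python
# Hypothetical sketch: derive metric weights empirically by regressing
# candidate metrics against panel rankings, rather than fixing them a priori.
# All numbers below are invented example data.

def fit_weights(X, y):
    """Ordinary least squares via the normal equations (X^T X) w = X^T y,
    solved with Gaussian elimination. X: list of metric rows, y: panel scores."""
    m, n = len(X), len(X[0])
    # Build the normal equations.
    A = [[sum(X[k][i] * X[k][j] for k in range(m)) for j in range(n)]
         for i in range(n)]
    b = [sum(X[k][i] * y[k] for k in range(m)) for i in range(n)]
    # Gaussian elimination with partial pivoting.
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        b[i], b[p] = b[p], b[i]
        for r in range(i + 1, n):
            f = A[r][i] / A[i][i]
            for c in range(i, n):
                A[r][c] -= f * A[i][c]
            b[r] -= f * b[i]
    # Back-substitution.
    w = [0.0] * n
    for i in range(n - 1, -1, -1):
        w[i] = (b[i] - sum(A[i][j] * w[j] for j in range(i + 1, n))) / A[i][i]
    return w

# Invented example: three candidate metrics per department
# (article citations, downloads, co-citations) and panel scores.
metrics = [
    [120, 3000, 15],
    [80, 2500, 10],
    [200, 5000, 30],
    [50, 1000, 5],
]
panel_scores = [3.0, 2.25, 5.1, 1.1]
weights = fit_weights(metrics, panel_scores)
```

In practice the fit would be run discipline by discipline, so that a metric's weight can differ between, say, physics and history, exactly as the letter argues.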
Stevan Harnad, Professor of cognitive science, Southampton University