RAE reaction: Fair attempt to achieve the impossible

Ray J. Paul, a member of two subpanels in RAE 2008, says funding should be less elitist than it was in 2001 but argues that the use of peer review needs careful consideration if it is to be used in future assessment exercises

December 30, 2008

What does the research assessment exercise 2008 really show us? First, everyone is a winner. Second, there are questions over whether the degree of discrimination is adequate for a policy of selective funding. And third, was that peer review?

The extracts below are from the web pages of higher education institutions.

From the Arts Institute at Bournemouth: “70 per cent of its submission to art and design was classified as being of national significance and a further 20 per cent, of international significance/excellence.”

Brunel University: “Staff deemed to be of international standing has tripled to 82 per cent” and “10 per cent [were] classed as world-leading”.

London School of Economics: “The LSE has the highest percentage of world-leading research.” It is “equal second when ranked using a grade-point average” and “first when universities are ranked according to the percentage of 4* [research]”. “Two thirds of our staff work in departments ranked in the top five in the country, and 36 per cent [work in departments that are ranked] first in the country.”

One of the issues that RAE 2008 was supposed to address was that of selective funding. In 2001, 48,022 staff were submitted, of whom 20 per cent got a 3a or below and were excluded from the funding allocations (except for a small fund set aside for 3as). Of the 38,400 staff who received funding, nearly 9,000 got a 5* rating and 17,260 a 5, meaning 55 per cent of all those submitted achieved a 5 or 5*, the top two grades. The rationale for changing the way the RAE worked in the 2008 exercise was to provide the degree of discrimination required by a policy of selective funding. You may have thought that this was also meant to give more money to the top research institutions.
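As a back-of-envelope check, the 2001 figures quoted above reconcile in a few lines of Python (a sketch using only the numbers given in this article):

    # 2001 RAE figures as quoted in the text above
    submitted = 48022
    funded = 38400                # roughly 80 per cent of those submitted
    five_star = 9000              # "nearly 9,000" at grade 5*
    five = 17260                  # at grade 5
    top_two = five_star + five    # 26,260 staff at the top two grades
    print(top_two / submitted)    # ~0.55, i.e. 55 per cent of those submitted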

Translating the RAE 2008 profiles from percentages of research activity into full-time equivalent staff across all of the submissions shows the proportion at the top two grades, 4* and 3*, to be 55 per cent once again.
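The translation itself is mechanical: each submission’s quality profile gives the percentage of its research activity at each grade, so multiplying by the submission’s full-time equivalent staff and summing across submissions yields sector-wide numbers per grade. A minimal sketch in Python, with a hypothetical profile (the function and figures are illustrative, not RAE data):

    def profile_to_fte(fte_staff, profile):
        """Convert a quality profile (grade -> percentage of research
        activity) into full-time equivalent staff at each grade."""
        return {grade: fte_staff * pct / 100.0 for grade, pct in profile.items()}

    # A hypothetical submission of 40 FTE staff
    print(profile_to_fte(40.0, {"4*": 15, "3*": 40, "2*": 35, "1*": 10, "u": 0}))
    # {'4*': 6.0, '3*': 16.0, '2*': 14.0, '1*': 4.0, 'u': 0.0}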

But there is a difference. In 2001, some 80 per cent (or 38,400) of the staff submitted received quality-related funding. In 2008, clearly any submission with a 4* entry should be funded (how can world-leading research not be supported?). Removing all submissions with no 4*s excludes only about 3,530 staff, leaving nearly 49,000 who will receive funding, compared with 38,400 in 2001. Money will go to more institutions lower down the Table of Excellence than in 2001 because so many submissions have at least 5 per cent of their research achieving a 4* rating.
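The arithmetic behind that comparison is simple (a sketch using the approximate figures above):

    # Approximate figures from the text above
    funded_2001 = 38400
    excluded_2008 = 3530              # staff in submissions with no 4* work
    funded_2008 = 49000               # staff in submissions with some 4* work
    print(funded_2008 / funded_2001)  # ~1.28: about 28 per cent more staff to fund
    print(funded_2008 / (funded_2008 + excluded_2008))  # ~0.93 of 2008 entrants funded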

So, there is no change in the percentage getting the top two ratings and more people to fund with the same money. It looks as though funding will be more equitable than elitist this time.

In 2001, the 48,022 staff entered in the RAE were about two-thirds of all higher education staff. So the top two grades were achieved by 55 per cent of those submitted, not of all staff. Across all staff, the proportion reaching the top two of the seven grades was only 34 per cent, a reasonable division between the grades if you expect quality improvement. So, in that sense, the redesign of the exercise in 2008 was unnecessary.

The issue of the use of peer review is also worth examining. In 2001, research outputs were selectively examined because the quality assessment was seeking a single-point score: the only difficulty was at the boundaries between ratings. In 2008, assessment required the creation of profiles with five levels. It was realised that with a finer mesh and more boundaries, more reading would be required. In the end, many panels, as far as one can tell, read almost everything. This was a monumental task for the larger panels. And at the end of such activity, the output scores would be placed in the appropriate box and the boxes added up. And then – nothing.
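In code terms, the process amounts to tallying independent scores into a histogram (a sketch; the scores below are hypothetical):

    from collections import Counter

    # "Placed in the appropriate box and the boxes added up":
    # one grade per assessed output, tallied into a sub-profile.
    scores = [3, 3, 2, 4, 2, 3, 2, 1, 3, 2]
    boxes = Counter(scores)
    profile = {g: 100.0 * boxes[g] / len(scores) for g in range(4, -1, -1)}
    print(profile)   # percentage of outputs at each grade, 4* down to unclassified

And then, as the text says, nothing: there is no step at which the panel stands back and judges the whole.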

How could you take a macro view of the outputs when each had been independently assessed? This is why the results show that the range of profile averages is in general very narrow, with nearly 80 per cent of the units of assessment spanning a range of less than two points out of four. The overwhelming majority of output scores are clearly 2* or 3*. Where is the overall peer judgment? A world-leading centre deserves a 4*, I would have thought, even if parts of it were assessed at 3* or less.
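For reference, the “profile average” here is the grade-point average used in the league tables: the mean grade weighted by the percentage of activity at each level, on a scale of 0 to 4. A minimal sketch (the profiles below are hypothetical):

    def gpa(profile):
        """Grade-point average of a quality profile: each grade (0-4)
        weighted by its percentage of research activity."""
        return sum(grade * pct for grade, pct in profile.items()) / 100.0

    print(gpa({4: 10, 3: 35, 2: 40, 1: 10, 0: 5}))   # 2.35
    print(gpa({4: 25, 3: 45, 2: 25, 1: 5, 0: 0}))    # 2.90
    # Within most units of assessment, institutions' averages span
    # less than two points of this 0-4 scale.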

It seems to me that this method of assessment is similar to assessing the quality of a lawn by measuring the length of each blade of grass – you get a measure, but of what? Perhaps the use of peer review needs more careful consideration in future.

In conclusion, I should remind readers that I have already argued that measuring research quality cannot be done, for the reasons given in “Measuring research quality: the United Kingdom Government’s research assessment exercise” (2008), European Journal of Information Systems, Vol. 17, pp. 324-329, available free at http://www.palgrave-journals.com/ejis/index.html. This was discussed in part in Times Higher Education on 11 September 2008 (www.timeshighereducation.co.uk/News-and-Analysis/RAE-table-will-be-shaken-by-use-of-journal-rankings/403502.article).
