THES reporters look at the new Roberts report on the assessment of research
Peer review remains at the centre of research assessment under the proposals published this week by UK funding councils.
Sir Gareth Roberts, president of Wolfson College, Oxford, has reviewed how research should be judged, after the 2001 research assessment exercise. His 100-page report contains 16 recommendations, and it marks what he has described as a radical change. Ten recommendations refer to fundamental aspects of the research assessment process and are being consulted on by the Higher Education Funding Council for England and other funding bodies.
The first is that the judgement of experts be used to assess research but that these experts may use performance indicators to inform their judgement.
The new RAE would run every six years, starting in 2007, and would be updated through light-touch monitoring three years after each exercise to pick up cases where, for example, a department has closed.
The exercise would start with institution-level assessments of research, undertaken in 2005, covering research strategy, development of researchers, equal opportunities and how research is disseminated beyond the peer group that created it. An institution failing in any one of the competencies would be allowed to enter the next RAE but would not receive funding until it had demonstrated a satisfactory performance.
The fourth and most contentious recommendation proposes that the intensity of assessment be proportional to its likely benefit. This is taken a stage further by the proposal that "the least research-intensive institutions should be considered separately from the remainder of the higher education sector". How research and scholarship in these institutions would be judged is left open.
The less competitive work in the rest of the institutions would be assessed by proxy measures, such as income from research council grants. This would be called the "research capacity assessment". Only the most competitive work would be assessed using expert review similar to the old RAE, called the "research quality assessment".
Discipline-specific performance indicators would be developed by the funding councils, the research councils and subject committees to inform institutions of their position relative to others. Performance against these indicators would be calculated in 2006.
Those institutions going into the research quality assessment would receive a "quality profile" rather than the present single grade. Experts would assess a department's work and award each piece between one and three stars.
These stars would be added up across the sub-panels and an overall score given. The only way of comparing departments would be to divide this score by the number of staff working in them. Star ratings would not be given to named individuals.
To ensure consistency between different subjects, panels would be given guidelines on expected proportions of one-star, two-star and three-star ratings that were the same for each unit of assessment. If they awarded grades that were more or less generous than anticipated in the guidelines, these grades would have to be confirmed through moderation.
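The scoring and moderation scheme described above can be sketched in code. This is an illustrative sketch only: the report does not specify an algorithm, so the guideline proportions, the tolerance for triggering moderation, and all function names below are invented for the example.

```python
from collections import Counter

# Hypothetical figures: the report proposes guideline proportions for
# star ratings but does not state what they would be.
GUIDELINE = {1: 0.25, 2: 0.50, 3: 0.25}  # expected share of each star rating
TOLERANCE = 0.10                          # assumed deviation before moderation

def quality_score(star_ratings, staff_count):
    """Total stars divided by staff numbers, the report's stated basis
    for comparing departments."""
    return sum(star_ratings) / staff_count

def needs_moderation(star_ratings):
    """Flag a panel whose awarded proportions drift from the guideline
    by more than the assumed tolerance."""
    counts = Counter(star_ratings)
    total = len(star_ratings)
    return any(
        abs(counts.get(star, 0) / total - share) > TOLERANCE
        for star, share in GUIDELINE.items()
    )

dept = [3, 3, 2, 2, 2, 1, 2, 3]  # star ratings awarded to one department's outputs
print(quality_score(dept, staff_count=4))  # 18 stars / 4 staff = 4.5
print(needs_moderation(dept))              # generous on 3-star work, so flagged
```

Dividing by staff numbers rather than outputs is what makes the profile comparable across departments of different sizes, as the report notes.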
The number of units of assessment would be slashed to between 20 and 25, supported by about 60 sub-panels. Each of these panels and sub-panels would be supported by colleges of assessors.
In a further attempt to ensure consistency between subjects, each panel would include a moderator who would sit on each sub-panel.
The moderators of adjacent panels would meet in super-panels, whose role would be to ensure consistency between panels.
These super-panels would be chaired by senior moderators with extensive experience in research. Each panel would include non-UK based researchers with experience of the UK research system.
The rule that each researcher may submit only up to four items of research output would be abolished, although scores would have to be averaged to prevent the unfair accumulation of stars. Research quality assessment panels would define their own limits on research outputs.
Each panel would have measures to guarantee that practice-based and applicable research were assessed according to criteria that reflect excellence.
Rules are proposed to discourage game-playing. For example, where an institution submits to research quality assessment in a sub-unit of assessment, all staff in that sub-unit should become ineligible for the research capacity assessment, even if they are not included in the research quality assessment submission.
Where an institution submits a sub-unit of assessment for the research quality assessment, no fewer than 80 per cent of its qualified staff contracted to undertake research must be submitted. All staff eligible to apply for grants from the research councils should be eligible for submission to the research quality assessment.
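The two anti-game-playing rules above amount to simple eligibility checks. A minimal sketch, assuming invented names and data shapes (the report states only the rules, not how they would be enforced):

```python
def rqa_submission_valid(submitted_staff, eligible_staff):
    """Rule: no fewer than 80 per cent of a sub-unit's qualified
    research-active staff must go into an RQA submission."""
    return len(submitted_staff) >= 0.8 * len(eligible_staff)

def rca_eligible(member, rqa_subunits):
    """Rule: anyone in a sub-unit submitted to the RQA is excluded from
    the research capacity assessment, even if not personally submitted."""
    return member["subunit"] not in rqa_subunits

# Usage: 8 of 10 eligible staff submitted meets the 80 per cent threshold.
print(rqa_submission_valid(["a"] * 8, ["a"] * 10))   # True
# A physicist in an RQA-submitted sub-unit cannot also count for the RCA.
print(rca_eligible({"subunit": "physics"}, {"physics"}))  # False
```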
Interdisciplinary and team work would be recognised through the funding councils establishing a way of submitting group work. In light of the suggestion to remove a third of institutions from research assessment, the report recommends that the funding councils consider measures to make joint submissions more straightforward.
Finally, each panel would consider a research strategy statement outlining the institution's plans for research at unit level.
* Only most competitive work peer reviewed
* New RAE every six years
* Intensity of assessment proportional to likely benefit
* Less competitive work assessed by proxy measures
* Least research-intensive institutions to be considered separately
* Discipline performance indicators to be developed
* Institutions to receive quality profile
* Department profiles to be based on star ratings
* Limits on number and size of research outputs abolished
* Super-panels to ensure consistency between subjects