British Academy report warns against sole use of metrics to assess humanities. Louise Radnofsky reports.
In a report published this week, the British Academy has thrown its weight firmly behind the UK's traditional system of peer review as the best way of controlling research quality.
But the academy has also called for significant reinforcement of the system, including better training and reward for those charged with assessing their colleagues' work, in the face of mounting criticism that peer review is too expensive and time-consuming.
The report, which was based on the findings of a seven-member working group, also sounds a warning against any move to assess research in the humanities and social sciences exclusively on the basis of metrics - measurements of indicators such as the number of times published research is cited by other academics or the number of grants that researchers have won.
"Peer review remains an essential, if imperfect, practice for the humanities and social sciences," the report concludes. The use of metrics poses serious problems for humanities and social science researchers and "should remain an adjunct to the research assessment panel peer-review process rather than a substitute", it says.
The research assessment exercise (RAE) - which uses peer review to assess the quality of research across the UK to determine how billions of pounds in funding should be distributed - will be scrapped after 2008.
The Government confirmed in December that the quality of research in science and technology subjects will be assessed purely on the basis of metrics, while arts, humanities and social science research will be assessed by a combination of metrics and a new form of "light-touch" peer review. The details of how this will operate are expected later this month.
The British Academy says metrics are unreliable in fields such as the humanities and social sciences where academics publish fewer articles and more monographs and where developments in the fields are often recognised and understood over a much longer period.
Academics might also find that high-quality studies published in very specialised journals go largely unrecognised in citation indexes, the report says.
Articles appearing in high-impact journals are not necessarily of superior quality to articles in low-impact journals, said Albert Weale, the panel chair.
"There is no point in using a measure that does not really measure what you intended it to measure," he said.
"The fact that something looks definite doesn't mean that it's reliable," he added.
The report says the peer-review system must be strengthened. Universities and the RAE do little to value the work of peer reviewers, and a new generation of researchers is not receiving adequate training in competent and ethical reviewing, even as the number of journal submissions and grant applications grows, the panel found.
It cited some Economic and Social Research Council reviewers who "did not believe that the most talented researchers always engaged in peer-review activities for the research councils because they are under pressure from their host institutions to focus on publishing their own research".
Professor Weale said: "The concern, as we all know, is that the urgent tends to drive out the important, and the visible tends to drive out the invisible."
He said academics should include their peer-review work in their CVs and that appointments and promotions committees should consider such work a "contribution to the academic public good" to be valued alongside teaching and research work.
The panel also recommends that research councils provide formal peer-review training for postgraduate researchers.
A spokeswoman for the ESRC said that the council was still considering the panel's recommendations.
"Although the peer-review system is by no means perfect, it does provide particular benefits," she added.
THE PANEL'S CONCLUSIONS
- There are no better alternatives to peer review
- 'Decentralised diversity' is one of peer review's strengths
- All models should be timely and transparent
- The RAE and institutions should encourage and reward peer review
- Metrics should accommodate 'the special features' of humanities and social science research and cannot automatically substitute for the RAE panel peer review process
- The editorial independence of journal editors must be protected
- Applicability, relevance and novelty are no substitutes for quality
- The full economic cost of peer review cannot be recovered
- Postgraduates, postdoctoral researchers and non-academics should be trained in peer review
- Funds should be reserved for risky, speculative projects
- Strategic and responsive-mode funding must be balanced