One of the most important factors determining a university department's performance in the research assessment exercise was whether it was represented on the RAE judging panel, a study has found.
The study, which is published in the current issue of the journal Political Studies Review, found that the presence of one of a department's staff among the judges was the second most influential factor in determining the results. The single most important factor was the number of times the work of a department's researchers was cited by their peers.
"Metrics or peer review? Evaluating the 2001 UK research assessment exercise in political science", by Linda Butler and Ian McAllister of the Australian National University (ANU), Canberra, examines the political science panel results from the 2001 RAE - the most recent exercise before the final RAE in December 2008.
It analysed more than 4,400 outputs submitted to the 2001 panel to test whether citations could be used to replace peer review in future exercises.
The study concludes that citation counts could suitably replace peer review, since they were the single most important predictor of a department's standing in the 2001 RAE.
But it also found "substantial ... indirect biases" in the exercise's peer-review process.
"A department that was awarded a 4 ranking in the RAE (which used a seven-point scale in 2001), but did not have a staff member on the panel, could have expected to receive close to a 5 ranking if one of its members had been (a panellist)," the study says.
Its conclusions are adjusted for citations, department size and research culture.
"Inevitably, RAE panel membership gives an intimate knowledge of the process of evaluation, which will convey indirect but tangible benefits to the person's own department," the authors say.
"Committee members help to shape the rules for evaluation, but, more importantly, they understand them and how they can be used to present a department in the most favourable light."
One senior professor of politics, who asked not to be named, said the biases in 2008 were "even more marked" than they had been in the 2001 exercise.
"The politics departments that had a representative on the panel in 2001 but not in 2008 all dropped in the ranking," he said. "The departments with members on the 2008 panel all did well."
He said that although panel members were not allowed to evaluate work submitted by their own departments, it was the "bonding" between judges at meetings and away days that caused problems.
"They (want to) avoid embarrassing colleagues ... leading to inflation of the grades awarded to departments with panel members at the expense of better departments without (representation)."
He added that he had no confidence that the results would be the same if the submissions were judged by other panel members drawn from different universities.
The ANU researchers conclude that peer review is not the most appropriate model for research assessment.
A "metrics-based model" would eliminate such indirect biases and save academics time, they argue.
The RAE, which was largely based on peer review, will be replaced by the research excellence framework, which will base its judgments more heavily on numerical indicators, such as citations.