Proposals for a metrics-based system to replace the research assessment exercise are contradictory and lack evidence, according to the Higher Education Policy Institute, writes Anthea Lipsett.
The institute attacked the Government's consultation, saying that it gave no basis for the policy decisions that it made and did not discuss the likely effects that the proposals would have on the behaviour of individuals and institutions. This was a "serious omission", Hepi said.
Another "major flaw" was the lack of evidence for any of the "savings of time and effort" that Alan Johnson, the Higher Education Minister, mentions in the consultation document. There was also no analysis of the likely cost of a metrics-based system.
Using success rates in securing research council grants as a metric could result in more universities seeking to apply for grants. This in turn, Hepi said, could mean many more carrying out expensive internal peer reviews to improve their chances of success.
It added that a metrics-based system could also force universities to spend money on winning grants rather than on strategic, long-term, unfashionable or speculative research.
Two of the consultation's proposed models, B and D, depend on using metrics to offer quality assessments at subject level, which the consultation document explicitly rules out. The proposals, Hepi said, also conflict with government ambitions to create a more sustainable research base.
Another problem pointed out by Hepi is that a metrics system would increase competitive pressure and make it easier to identify individuals who are successful at winning research funding.
"It would be very surprising if vice-chancellors do not give researchers who bring in a lot of 'metrics points' more research time and resources than those who do not," Hepi said.
Bahram Bekhradnia, director of Hepi, said: "It's a very insubstantial document that doesn't even begin to address the behavioural and incidental effects of metrics." Metrics should inform peer review, not replace it, he said.
Meanwhile, a new Hepi report on the strength of the UK research base says that the majority of UK research papers are below the world average.
Jonathan Adams, director of research company Evidence, sheds light on the UK's average research performance as shown by single measures, such as citations per paper or citation impact.
Dr Adams's report advocates using a wider range of indicators to assess research performance. His methodology shows that although the UK retains its standing, coming second only to the US in terms of citations, performance is patchy and very good research boosts the average score.
"If you unpack (citation data) a bit, the picture looks rather different. It's not that the UK is worse, just that it's different, but it would be the same for everybody using this methodology," he said.
His methodology could be used after the 2008 RAE to track changes in research performance.