Add input to make metrics count

November 9, 2007

Academics must help shape the RAE replacement to ensure that measures, especially bibliometrics, foster trust and quality, says Eric Thomas.

In December 2006, the Government announced that a new framework for assessing and funding university research would be introduced after the completion of the next research assessment exercise in 2008. The sector has broadly welcomed the key features of the announcement, which include the replacement of the peer review-based RAE by a new UK-wide indicator of research quality based on metrics. The change offers the prospect of a smaller administrative burden and less negative impact on universities' planning processes. It will be phased in over time, and the outcomes of the RAE in 2008 are likely to inform funding allocations for a considerable time.

The new framework will produce an overall "rating" or "profile" of research quality for broad subject groups at each higher education institution. It is widely expected that the ratings will initially be derived from bibliometric-based indicators - using counts of journal articles and their citations - in science, technology, engineering and mathematics subjects. This will not apply to the arts, humanities and social sciences, where an RAE-style peer review will still be needed.

The new indicators will need to be linked to other metrics on research funding and on research postgraduate training. In a final stage, the various indices will need to be integrated into an algorithm that drives the allocation of funds to institutions. This is the most important point in the process when the funding councils decide how selective the funding allocations should be and how much they wish to invest in developing new research capacity.

The new quality indicators would ideally be capable of not only informing funding but also providing benchmarking information for higher education institutions and stakeholders. They are also expected to be cost-effective to produce and should reduce the current assessment burden on institutions.

The Higher Education Funding Council for England is working on the development of the new arrangements. Its work includes an assessment of how far bibliometric techniques can be used to produce appropriate indicators of research quality and to evaluate options for a methodology to produce and use these indicators. In preparation for the funding council's consultation, which will be launched later this month, Universities UK has just published a report exploring some of the issues that arise when using metrics in the research assessment process. The report, which has been produced by Evidence Ltd, does not suggest a preferred approach but identifies a number of issues for consideration.

The accuracy and appropriateness of article citation counts are critical factors, and there are no simple or unique answers. It is acknowledged that the world's best database, produced by Thomson Scientific, necessarily represents only a proportion of the global literature. As a result, it captures only part of the citations to and from the catalogued research articles, and its coverage is better in science than in engineering. Applied research and innovation are not well covered.

The problems of obtaining accurate citation counts may be increasing as online publication diversifies. There are also technical issues concerning fractional citation assignment to multiple authors, relative value of citations from different sources and the significance of self-citation. The time frame for assessment and for citation counting relative to the assessment will also affect the outcomes and may need to be adjusted for different subject groups.

The definition of the broad subject groups and the assignment of staff and activity to them will need careful consideration. Although it might seem sensible for the subject groups to follow traditional faculty structures, as the RAE groups did, these are no longer the only framework within which research activity is organised.

Differences between subjects - both in terms of funding levels and publishing practice - mean that no uniform approach to data management is likely to prove acceptable if all subjects are to be treated equitably.

There will need to be adjustments of normalisation and weighting factors and of weighting between bibliometrics and other indicators. There is also a challenge to be addressed in the management of interdisciplinary research, where metrics may have less to offer.
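One widely used approach to the normalisation problem described above divides a unit's citations per paper by the world average for its field, so that subjects with very different publishing practices can be compared on a common scale. The sketch below illustrates the idea only; it is not the funding councils' method, and the baseline figures are hypothetical.

```python
# Hypothetical world-average citations per paper, by field
field_baseline = {"mathematics": 4.0, "molecular_biology": 25.0}

def normalised_impact(citations, papers, field):
    """Citations per paper relative to the field's world average;
    a value of 1.0 means exactly at the world average for that field."""
    return (citations / papers) / field_baseline[field]

# 10 citations per paper in mathematics outranks 30 per paper in
# molecular biology once field differences are taken into account
print(normalised_impact(100, 10, "mathematics"))        # 2.5
print(normalised_impact(300, 10, "molecular_biology"))  # 1.2
```

The difficulty noted in the article follows directly: the result depends entirely on which baseline a piece of work is compared against, which is exactly what is unclear for interdisciplinary research.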

The report by Evidence Ltd will help universities to contribute to the development of the new framework in the context of their support for replacing the RAE after 2008. The objective of any change in the assessment method should be to sustain recent improvements in UK research performance. It is essential that the sector fully engage with this important consultation with the aim of ensuring that the new arrangements maintain the excellence of the UK research base. To do this, the metrics system will need not only to be technically correct but also to be acceptable to and inspire confidence among the researchers whose performance is assessed. The sector's constructive input will help the benefits of a new system to be fully realised.

Eric Thomas is chair of the Universities UK research policy committee and vice-chancellor of Bristol University.
