One danger with any measurement is that the activity of measurement influences the activity being measured. One potential source of such influence in the proposed Higher Education Funding Council for England research excellence framework ("Tough new hurdle for top researchers", November 23) is the proposal that "all papers from the selected academic will be included".
The danger is that an attempt will be made to measure quality as some kind of (normalised) average of the number of citations for an academic, department or subject area. The consequent influence on behaviour is that academics are discouraged from writing minor or risky papers that might bring this average down. In particular, this could have an impact on research students, who may be discouraged from writing papers with their supervisors about minor developments, and on early-career academics, who might be advised to hold off from writing papers until they have generated enough material for a small number of big-hitting papers with potential for high citation counts.
While pressure against overpublication and "salami-slicing" of research results may be valuable, there is a danger that a badly designed research-quality metric could go too far the other way and produce too much anti-publication pressure. A potential solution is for Hefce to adopt a method of converting citation lists into quality measures that rewards high-citation papers yet is not brought down by low-citation papers. An example of such a measure would be the largest mean citation count for any subset of that academic's papers. This would reward peer-acknowledged impact while still encouraging adventurous and minor-but-worthy advances to be put into the public domain.
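The measure proposed above is straightforward to compute: because the mean is maximised by choosing the most-cited papers first, only the top-k prefixes of the citation list, sorted in descending order, need to be checked. The sketch below (function name and list representation are illustrative, not taken from any Hefce methodology) demonstrates the key property that adding low-citation papers can never lower the score.

```python
def best_subset_mean(citations):
    """Largest mean citation count over any non-empty subset of papers.

    The best subset is always the top-k most-cited papers for some k,
    so it suffices to examine prefix means of the descending-sorted list.
    """
    if not citations:
        return 0.0
    ranked = sorted(citations, reverse=True)
    best = 0.0
    total = 0
    for k, count in enumerate(ranked, start=1):
        total += count
        best = max(best, total / k)  # mean of the top-k papers
    return best

# Illustrative record: two well-cited papers plus two minor ones.
# The plain average is 19.75, but the subset measure stays at 40.0,
# so the minor papers do not drag the score down.
print(best_subset_mean([40, 35, 3, 1]))
```

Under an averaging metric, publishing the two minor papers would nearly halve the academic's score; under this measure it leaves the score unchanged, which is exactly the incentive the letter argues for.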
Colin Johnson, Senior lecturer, Computing Laboratory, University of Kent at Canterbury.