Academics’ desire to be judged on the basis of their publication in high-impact journals indicates a lack of faith in peer-review panels’ ability to recognise genuine scientific excellence, a report suggests.
The report, Evidence for Excellence: Has the Signal Overtaken the Substance?, written by Jonathan Adams, chief scientist at Digital Science, and Karen Gurney, a consultant to the firm, analyses trends in the outputs submitted to the past three research assessment exercises.
It finds that the proportion of submitted outputs that were journal articles increased significantly, from 62 per cent in 1996 to 75 per cent in 2008. This reflects marked declines in the number of monographs submitted by social scientists and in the number of conference papers submitted by engineers – although books and chapters remain popular in the arts and humanities. Rather than a “massive cultural shift”, the report says this “looks much more like a change in behaviour, not in what was being written but in what was being offered for assessment”.
The report suggests the reason is that journal impact factors give academics a simple and widely used way to signal the quality of the papers they have published in those journals, whereas “similar databases are still only superficial for conference proceedings and books”.
“Perhaps the change in behaviour…is evidence that numbers inexorably overcome real cultural preferences,” the report, which was published on 9 June, says.
An analysis of the 2008 RAE reveals that 14 high-impact journals each accounted for more than 500 of the nearly 81,000 articles submitted. Three journals with particularly high impact factors – Nature, The Lancet and Science – accounted for many more RAE submissions than the number of eligible outputs they contained, indicating that many co-authored papers had been submitted by more than one institution. One Nature paper was submitted by all 12 UK institutions whose academics had contributed to it.
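A rough sketch of how this double counting inflates submission totals is below; the paper IDs, institution names and counts are invented for illustration and are not drawn from the report:

```python
from collections import Counter

# Toy submission records: (paper_id, submitting_institution).
# All IDs and names are invented for illustration; the counts are
# not taken from the report.
submissions = [
    ("nature-001", "Univ A"), ("nature-001", "Univ B"), ("nature-001", "Univ C"),
    ("nature-002", "Univ A"),
    ("nature-003", "Univ B"), ("nature-003", "Univ D"),
]

total_submissions = len(submissions)                       # 6
unique_papers = len({paper for paper, _ in submissions})   # 3

# How many institutions submitted each paper
per_paper = Counter(paper for paper, _ in submissions)

print(f"{total_submissions} submissions cover only {unique_papers} unique papers")
print("most duplicated:", per_paper.most_common(1))        # [('nature-001', 3)]
```

In this toy tally, six submissions rest on only three distinct papers, with one co-authored paper counted three times – the same mechanism by which a single Nature paper could appear in 12 institutions’ RAE returns.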
Although 500 eligible UK-authored Nature papers were not submitted to the RAE, 418 of the 1,510 Nature outputs that were submitted were not UK-authored research papers. These were either papers by academics recruited from abroad or “ephemera”, such as letters and editorials, which often attracted no citations at all.
“It might seem to make obvious sense for researchers to choose to submit papers from journals that had particularly high impact,” the report says. But it notes that the review panels – including those for the 2014 research excellence framework, whose results will be known in December – are barred from considering journal impact factors and are supposed to base their assessments on reading the outputs themselves.
A journal’s impact factor is the average number of citations that its papers receive over a set period, typically the preceding two years. But the report points out that this average is skewed by a small number of highly cited papers, meaning that most papers in a journal garner fewer citations than its impact factor implies. Yet some academics submitted little-cited papers from high-impact journals in preference to more highly cited papers published in lower-ranking journals.
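A minimal sketch of that skew, using an invented citation distribution (the numbers are illustrative, not taken from the report), shows how a single blockbuster paper can pull the journal mean far above what a typical paper achieves:

```python
import statistics

# Hypothetical citation counts for ten papers in one journal over the
# impact-factor window. Illustrative numbers only, not from the report.
citations = [0, 1, 1, 2, 2, 3, 3, 4, 5, 79]

impact_factor = sum(citations) / len(citations)            # the mean: 10.0
median = statistics.median(citations)                      # 2.5
below_mean = sum(1 for c in citations if c < impact_factor)

print(f"impact factor (mean): {impact_factor:.1f}")        # 10.0
print(f"median citations:     {median}")                   # 2.5
print(f"papers below the mean: {below_mean} of {len(citations)}")  # 9 of 10
```

Here one paper with 79 citations lifts the journal average to 10, while nine of the ten papers sit below that figure: precisely the gap between the signal of the journal and the substance of the individual paper that the report describes.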
“The real substance of what academics thought was the best marker of research excellence was displaced for review purposes by outputs that gave the simplest signal of achievement,” the report says.
“The kudos of the well-cited journal was a marketing signal outweighing the individual item.”