You report on a study that found no correlation between manuscript rejection rates and a journal’s impact factor (“High rejection rates by journals ‘pointless’”, News, 28 January). But why would one expect there to be a positive correlation?
An impact factor is a simple arithmetic mean: a journal’s total citation count divided by the number of papers it published. That numerator is hugely sensitive to a small number of very highly cited papers, yet for most journals such papers are difficult to predict in advance. So rather than trying to shrink the denominator by rejecting lots of papers, a better strategy for maximising an impact factor is to increase the chances of publishing one of those rare hits, which argues for low rejection rates.
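The arithmetic can be sketched with a short Python example using invented citation counts (the numbers and journal names are purely illustrative, not real data): a single blockbuster paper dominates the mean, while the proposed rarely-cited share tells a different story.

```python
# Hypothetical citation counts for two journals' annual output
# (illustrative numbers only, not real data).
journal_a = [0, 1, 1, 2, 2, 3, 500]   # one blockbuster paper among barely cited ones
journal_b = [5, 6, 6, 7, 7, 8, 9]     # steadily cited papers, no outlier

def impact_factor(citations):
    """Arithmetic mean: total citations divided by number of papers."""
    return sum(citations) / len(citations)

def rarely_cited_share(citations, threshold=1):
    """Fraction of papers cited `threshold` times or fewer --
    the alternative index suggested in the letter."""
    return sum(c <= threshold for c in citations) / len(citations)

print(impact_factor(journal_a))       # about 72.7, driven by the single outlier
print(impact_factor(journal_b))       # about 6.9
print(rarely_cited_share(journal_a))  # about 0.43: many barely cited papers
print(rarely_cited_share(journal_b))  # 0.0
```

Journal A wins on impact factor despite most of its papers being cited once or never; the rarely-cited share exposes exactly that weakness.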
A better index of the quality of a journal’s content would be the percentage of its published papers that, after say three or five years, have been cited only once or never. That would focus editorial attention on weeding out poor, erroneous or irrelevant papers, rather than on second-guessing which submissions will become the superstar papers that disproportionately inflate an impact factor.
Professor of climate and culture
King’s College London