Two in five research-intensive universities in North America examined by a study leaned on the journal impact factor of the journals in which academics had published when making decisions on promotion and tenure, and the true proportion may be much higher.
The study, believed to be the first to examine the use of the journal impact factor in academic performance reviews, warns that there is an “undue reliance” on the controversial metric, which reflects the average number of citations recently received by articles in a given journal.
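For readers unfamiliar with the metric, the standard two-year journal impact factor divides the citations a journal's recent articles received in a given year by the number of citable items it published in the previous two years. The sketch below illustrates the arithmetic with made-up figures for a hypothetical journal; it is not drawn from the study itself.

```python
def impact_factor(citations_to_prior_two_years: int,
                  citable_items_prior_two_years: int) -> float:
    """Simplified two-year journal impact factor.

    E.g. a 2021 JIF is the number of 2021 citations to items the journal
    published in 2019-2020, divided by the count of citable items it
    published in those two years.
    """
    return citations_to_prior_two_years / citable_items_prior_two_years

# Hypothetical journal: 150 citable items in the prior two years,
# cited 600 times in the census year.
print(impact_factor(600, 150))  # 4.0
```

Because the figure is an average over a whole journal, it says nothing about how often any individual article is cited, which is the core of the criticism the study's authors raise.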
Journal impact factors are being used too often “to evaluate the quality and significance of research, despite the numerous warnings against such use”, says the study, published on PeerJ Preprints.
For the study, researchers from four countries collected and analysed review, promotion and tenure policies from 129 universities in the US and Canada.
Among the documents from 57 research-intensive institutions considered by the study, 23 (40 per cent) referred to journal impact factors, with 19 of these mentions (83 per cent of the subtotal) being supportive. Only three of the mentions expressed caution about use of journal impact factors.
Of the documents that did refer to journal impact factors, 14 associated the metric with research quality, while eight tied it to impact and a further five referred to prestige or reputation.
Across the full sample, which included many universities that offer few doctoral degrees, 23 per cent of review, promotion and tenure policies mentioned the journal impact factor, with 87 per cent of these mentions being supportive.
Juan Pablo Alperin, assistant professor in publishing studies at Canada’s Simon Fraser University, and one of the authors of the study, said that the “overall number [of mentions] was lower than we expected”.
However, since the researchers only counted explicit mentions of journal impact factors, “we think we might be underestimating its presence”, Dr Alperin said.
References to “high-ranking” and “top-tier” journals in some of the documents mean that, even if policies do not reference impact factors, the JIF might still be used “in the evaluations at [these] institutions in a less formal way”, he said.
Dr Alperin said that impact factors were “an imperfect measure of anything happening at the article level” – an assertion that has been backed up by several academic studies.
“An article will have the same quality regardless of where it is published,” Dr Alperin said.
However, impact factors remain influential.
Stephen Curry, professor of structural biology at Imperial College London, said that it was “worrying” that “so many universities are looking at this metric as part of their assessment processes”.
“Some seem to claim [impact factors are] a measure of quality. That is a very dubious contention and one I would like to see picked apart,” said Professor Curry. “It suggests to me there is still an undue reliance on metrics and a lack of will to do research assessments in a fully robust manner.”
Professor Curry is chair of the Declaration on Research Assessment project, which urges universities to focus on the scientific content of academics’ output in decisions on hiring, promotion and tenure, not where it is published.
Dr Alperin said citation metrics “only capture one aspect of the work we do in academia”.
“We actually want faculty to do a wide range of work, including work that is shared publicly, and that engages with the public, but citations only reward the circulation of research among ourselves,” he said.