Research impact metrics still exert an “undue influence” on hiring and grant approval panels, says a new study, which found that most scientists rely on journal prestige when assessing an applicant’s work.
Academics have long criticised the use of journal impact factors (JIFs), which measure a journal’s historical citation rates, when assessing the relative strengths of individual researchers.
But a survey of almost 500 biology researchers who have served on either grant review committees or university hiring and promotion panels in the past two years found that most respondents still use this and other “extrinsic proxies” to assess not just the strength of an applicant’s publication record but also the trustworthiness of their outputs.
In the study, published by the open access title PeerJ, 57 per cent of respondents say they normally use at least one of three indicators – journal reputation, lab reputation or JIF – to evaluate whether or not “research is credible”.
Journal reputation was the most trusted “extrinsic proxy” for evaluating research credibility, used by 48 per cent of respondents, while 43 per cent used it to decide whether research was trustworthy or reliable, says the study, written by PLOS editors.
Some 19 per cent of scientists use JIFs to evaluate whether a research paper is credible, while 15 per cent use the metric to decide if the research is trustworthy or reliable, says the report, which noted that the use of JIFs has been “found to be [a] poor predictor for the quality of peer review of an individual manuscript”.
Banning the use of JIFs in funding, appointment and promotion considerations is one of the key recommendations of the 2012 San Francisco Declaration on Research Assessment (Dora), which has been signed by hundreds of institutions around the world.
With 90 per cent of respondents stating that evaluation of research outputs was important for panel decisions, but fewer than half saying they were satisfied with the range of metrics at their disposal, the study claims “there is a large area of opportunity to provide [new] signals of credibility and trustworthiness”.
These might cover “specific aspects of rigor, integrity, and transparency”, it says.
For transparency, panellists might be encouraged to consider whether open science practices have been followed, such as whether datasets, code or protocols have been shared alongside a published paper or preprint.
Improving “signals” to demonstrate these qualities could find favour among scientists because the study’s participants “consider credibility and trustworthiness very important and are dissatisfied with the current means of assessing these qualities at their disposal”, suggests the paper.
Author Iain Hrynaszkiewicz, director of open research solutions at PLOS, said the use of JIFs was “understandable given the lack of alternative metrics to assess the intrinsic qualities of research…despite these being imperfect indicators of credibility for individual research outputs”.
“There was a clear and welcome desire among research assessors for information about the intrinsic qualities of research. Researchers were especially troubled by difficulties in assessing research integrity, for example in detecting signs of fabrication, falsification, or plagiarism,” he told Times Higher Education.
“We think this means that there are lots of opportunities to develop approaches – whether enhanced guidance and expectations, or better signals, metrics, and tools – to improve the assessment process.”