Journals’ statistics rules ‘help tackle reproducibility crisis’

Study outlines how publishers’ guidelines led to adoption of positive research practices

May 29, 2017

Academic journals may be able to help tackle science’s reproducibility crisis by issuing guidelines on the use of statistics in papers submitted for publication, researchers say.

A study found that the introduction of recommendations on the responsible use of statistics in psychology journals was followed by an increase in the proportion of authors who reported their findings using certain good research practices.

Such improvements could be key to tackling the crisis of confidence in several fields of science where researchers are struggling to replicate the findings of other academics. Many believe that the use of questionable research practices is to blame for this problem, which is seen most prominently in psychology and the life sciences.

Scientists can bias their evidence towards a particular conclusion by, for example, omitting unfavourable data from an analysis or reporting a summary of the data rather than the original measures. But in recent years there has been a growing movement to counter these bad habits.
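
As a toy illustration of the first of these habits (invented data, not taken from the study), the sketch below shows how quietly excluding the observations that count against a hypothesis can manufacture an apparent effect where none exists:

```python
# Toy illustration, not from the study: how omitting unfavourable data
# biases a result. The data are random noise with a true effect of zero.
import numpy as np

rng = np.random.default_rng(0)
scores = rng.normal(loc=0.0, scale=1.0, size=30)  # true mean effect: 0

print(f"Mean of all data:             {scores.mean():+.2f}")

# "Questionable" analysis: quietly drop the lowest scores as supposed
# outliers, shifting the estimate towards the desired positive effect.
trimmed = np.sort(scores)[5:]  # exclude the five most unfavourable points
print(f"Mean after selective removal: {trimmed.mean():+.2f}")
```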

In 2014, the journal Psychological Science introduced specific guidelines for researchers, asking them not to describe their results merely as “statistically significant” or not, but instead to report a statistical measure, such as a confidence interval, that describes how much confidence can be placed in the findings. The guidelines also ask authors to explain why certain data were included in or excluded from an analysis and, among other things, award badges to papers that make the full datasets and materials used in the research openly available.
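
For readers unfamiliar with this style of reporting, the following is a minimal sketch of the kind of estimate such guidelines favour: a mean difference with a 95 per cent confidence interval rather than a bare significant/not-significant verdict. The groups, sample sizes and effect are invented for illustration.

```python
# Minimal sketch (invented data): report an effect estimate with a 95%
# confidence interval instead of a binary "significant or not" verdict.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
group_a = rng.normal(loc=100, scale=15, size=40)  # hypothetical control scores
group_b = rng.normal(loc=108, scale=15, size=40)  # hypothetical treatment scores

diff = group_b.mean() - group_a.mean()

# Standard error and Welch-Satterthwaite degrees of freedom for the
# difference between two independent means with unequal variances.
va = group_a.var(ddof=1) / len(group_a)
vb = group_b.var(ddof=1) / len(group_b)
se = np.sqrt(va + vb)
df = (va + vb) ** 2 / (va**2 / (len(group_a) - 1) + vb**2 / (len(group_b) - 1))

t_crit = stats.t.ppf(0.975, df)  # critical value for a two-sided 95% CI
low, high = diff - t_crit * se, diff + t_crit * se

print(f"Mean difference = {diff:.1f}, 95% CI [{low:.1f}, {high:.1f}]")
```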

The study’s authors, led by David Giofrè, senior lecturer in the School of Natural Sciences and Psychology at Liverpool John Moores University, compared the effect of these guidelines on the reporting of statistics in Psychological Science articles with practice at an equivalent periodical, the Journal of Experimental Psychology: General, which refers authors to recommendations on statistics published by the American Psychological Association rather than issuing guidelines of its own.

“Our findings suggest that journal-specific submission guidelines may encourage desirable changes in authors’ practices,” write Dr Giofrè and his colleagues in their paper, published in PLOS ONE.

Over the three years studied, the researchers found that the proportion of Psychological Science papers using confidence intervals increased by 41 percentage points. The proportion describing their criteria for excluding data from analysis increased by 53 percentage points, and the proportion making full datasets available increased by 36 percentage points.

The use of these practices also increased in the Journal of Experimental Psychology: General, but not to the same extent. “[T]here were larger and broader changes in [Psychological Science],” the authors say.

Although the authors concede that they cannot say for certain whether the guidelines themselves caused researchers to change how they reported their results, they argue that such guidelines may be useful in changing behaviour.

Brian Nosek, professor of psychology at the University of Virginia, said that most researchers want their work to be transparent and reproducible.

“The problem is that the culture does not provide incentives for it,” he said. “If transparency and reproducibility are not part of the journals’ requirements for publication, then there is little reason for researchers to do it.”

However, Professor Nosek added that funders and institutions also have a role to play in fostering good research practices.

Daniele Fanelli, a senior research scientist at Stanford University, agreed that journals have an “important role” to play in guaranteeing the quality of publications, but cautioned that the effect of such regulations should not be “overestimated”.

holly.else@timeshighereducation.com
