Medical research papers based on animal studies should include independent confirmation of the authors’ main hypothesis, according to two neuroscientists who argue that the confirmatory experiments currently conducted are too lax.
Jeffrey S. Mogil, director of the Alan Edwards Centre for Research on Pain at McGill University, and Malcolm Macleod, professor of neurology and translational neuroscience at the University of Edinburgh, argue that this “big shift” in how scientists produce papers would be “more formal and rigorous than the typical preclinical testing conducted in academic labs” and would “adopt many practices of a clinical trial”.
In a comment article published in Nature, which focuses on animal studies that propose new ways to treat disease, the academics claim that their proposal would mean fewer people “waste resources following up on weak papers”, which would get drugs to market more quickly.
Last month, Times Higher Education reported that two out of five experiments testing the reliability of key cancer research papers failed to support the original findings.
Professor Mogil and Professor Macleod say that current reforms to improve reproducibility tend to apply uniformly across a long series of experiments, which makes early hypothesis-generating experiments too rigid and later hypothesis-confirming experiments too lax.
The result is that a large fraction of published preclinical work is unreliable, according to the authors, who added that “the integrity of biomedical research could benefit from such radical thinking”.
“Instead of striving to convince reviewers and editors to publish a paper in prestigious outlets, [researchers] would be questioning whether their hypotheses could stand up in a large, confirmatory animal study,” they write. “Such a trial would allow much more flexibility in earlier hypothesis-generating experiments, which would be published in the same paper as the confirmatory study.
“If the idea catches on, there will be fewer high-profile papers hailing new therapeutic strategies, but much more confidence in their conclusions.”
Professor Mogil and Professor Macleod propose three features of the confirmatory study: it would adhere to the “highest levels of rigour” in design, analysis and reporting; it would be held to a “higher threshold of statistical significance”; and it would be performed by an independent laboratory or consortium. Sample sizes for such confirmatory studies would need to increase “around sixfold”, so that “a positive statistical test means the hypothesis is very likely to be correct”, they say.
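The arithmetic behind that sixfold figure can be illustrated with a standard power calculation for a two-sample comparison. The specific numbers below (a significance threshold of 0.005 and 95 per cent power for the confirmatory study, versus the conventional 0.05 and 80 per cent, at a medium effect size) are illustrative assumptions, not parameters taken from the article:

```python
# Illustrative sketch: per-group sample size for a two-sided, two-sample
# t-test using the normal approximation, showing how a stricter
# significance threshold and higher power inflate the number of animals
# required. The thresholds and effect size are assumptions chosen for
# illustration, not the authors' exact figures.
from math import ceil
from statistics import NormalDist


def per_group_n(effect_size: float, alpha: float, power: float) -> int:
    """Approximate animals per group: n = 2 * ((z_alpha + z_beta) / d)^2."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided critical value
    z_beta = NormalDist().inv_cdf(power)           # quantile for desired power
    return ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)


# Medium effect size (Cohen's d = 0.5), a common benchmark in power analysis
exploratory = per_group_n(0.5, alpha=0.05, power=0.80)    # conventional test
confirmatory = per_group_n(0.5, alpha=0.005, power=0.95)  # stricter test
print(exploratory, confirmatory)  # 63 vs 159 animals per group
```

Even this simplified sketch shows the direction of the trade-off: tightening the significance threshold and raising the power multiplies the required sample size severalfold, which is the cost the authors argue is worth paying so that “a positive statistical test means the hypothesis is very likely to be correct”.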
They admit that it is “not practical” to expect the academic community to “change direction in step and as one” and call for journals to “make space” for papers that include confirmatory experiments along with exploratory work, and eventually prioritise such studies or even make confirmatory experiments a requirement.
“Tenure and faculty assessment committees should find ways to credit such work. Funders could develop schemes to pilot this approach, and those who run clinical trials should demand greater confidence in the premise underlying human studies,” they add.