The scientific method works on the principle that an incorrect or fraudulent paper will eventually be weeded out when the shaky or rotten foundations it has provided for future research are exposed.
But with strong pressures to publish and little to no incentive to repeat another research group's experiments, one initiative is aiming to sharpen up the process by providing an easy route for researchers to validate their results.
Science Exchange, a Silicon Valley company that calls itself a "marketplace for scientific services", has launched the Reproducibility Initiative in collaboration with the open-access journal PLoS One.
The firm estimates that 70 per cent of published biomedical research cannot be reproduced successfully, a factor it says has helped to retard progress in the development of new drugs in recent years.
The initiative works by connecting researchers with anonymous research teams, either at core facilities within universities that neither write grants nor publish themselves, or contract research organisations. These then conduct independent, blind validations of the researchers' work on a fee-for-service basis.
Costs are estimated at around 10 per cent of that of the original research, with Science Exchange charging a 5 per cent service fee.
According to Elizabeth Iorns, Science Exchange's co-founder and a former research scientist, the service gives original researchers both the opportunity for further publication when the replicated results appear in a dedicated journal, PLoS Reproducibility Collection, and a certificate of validation that bolsters their reputation.
"At the moment there is a problem of reproducibility of academic research, for a variety of reasons, but there isn't really much of a quality control system in place," she said. "Most people would say that repeating and building upon scientific work is essential, and yet there's no mechanism for doing that, there's no funding for it and it's very difficult to publish."
Learning from mistakes
Under the scheme, if a lab fails to validate results, a second lab will try to replicate the work. If that lab also fails, the initiative will give the authors the chance to publish in the PLoS Reproducibility Collection and explain the failures.
"What I'm trying to get to is a culture where people publish and talk about (research that can't be reproduced), instead of hiding it away or retracting it," Dr Iorns said. "It becomes not punitive, but about the open progress of science. People want the story that science is perfect and everything directly matches the data, but it's not like that."
The project comes on the back of what appears to be a growing number of bioscience companies failing to reproduce research that they are seeking to translate into commercial products.
A 2011 correspondence paper published in Nature Reviews Drug Discovery, for example, found that in-house experiments at Bayer HealthCare in Germany could not repeat "exciting published data" in 65 per cent of 67 projects, which later led to the termination of those programmes.
Writing in a Nature editorial earlier this year, Glenn Begley, former vice-president of haematology and oncology research at the US biopharmaceutical firm Amgen, said that over a decade the company had been unable to reproduce findings in 47 of 53 so-called "landmark publications" in cancer research, findings that had been cited an average of 200 times.
While the value of Science Exchange's services for scientists hoping to sell their research to pharmaceutical firms is clear, Dr Iorns also hopes that the scheme will be taken up across academic research. Although reproducing research for a fraction of its original cost is not possible in many fields outside biomedicine, nor in complex work that uses non-traditional techniques, these are not the areas where the bulk of the spending goes, she said.
"I would love to see funding agencies really start to address this problem. So much money is spent on academic research that if 70 per cent of that is not reproducible, that's an enormous waste of publicly funded money."
According to James Parry, chief executive of the UK Research Integrity Office, anything that prompts discussion about the pressures that induce some to create false results is beneficial.
"This initiative could make a valuable contribution, but I would be wary if companies or funders ever made it mandatory. This could have unintended consequences," he said.
The scheme's merits would also need to be judged against its potential to add an additional layer of bureaucracy to research, he added.
"The important thing is looking at why we get research that can't be reproduced, whether it lies in problems of design, execution or analysis - as well as the problems with peer review and the question of whether journals are filtering out negative papers, for example."
According to Brian Nosek, associate professor in the department of psychology at the University of Virginia and a member of the Reproducibility Initiative's advisory board, the system aims to address exactly those issues by shifting incentives away from "getting published" to "getting it right".
"Scientists do want to get it right, but they don't have good incentives to do so," he said.
"It has to become just part of what a scientist thinks she should do because that's what science is. Funders can have a very soft push by providing money, opportunity and reasons one might do that. Journals can do the same by providing openings (for reproducing research)."
Dr Iorns hopes the system will work by allowing researchers who are willing to use it to stand out, rather than focusing on punishment via exposure on websites such as Retraction Watch.
This carrot-rather-than-stick method of encouragement, together with the element of self-selection, means that the rate of failure would probably be low, she added.
But whether academics will be keen to take part remains to be seen.
"Cost and speed are of course going to be issues," said Stephen Caddick, Vernon professor of organic chemistry and chemical biology and vice-provost for enterprise at University College London.
"It really depends on different disciplines, but I can see that in my own area it could be quite useful if you discover something unexpected. You could almost imagine getting it checked before doing further work on it - that would be valuable."
However, the reputation of the organisation would be vital, he added.
"You would have to have a huge amount of trust in the organisation that's doing it. You're asking people to tell you that this exciting thing you've done is not right, and that would be hugely disappointing."
Professor Caddick also noted that the lack of reproducibility, certainly in his own discipline, could as easily be attributable to difficulties in replicating complex research methodologies as to inaccuracies in the original results, and that a process of repeating experiments should not become the rule.
"If it becomes the standard protocol that it is expected that all experiments have to be independently validated, there could be huge implications for the costs of experimental science - they would be significant," he said.
Despite the scheme's wide press coverage and its 10-member advisory board of established researchers, many publishers, funders and scientists remain sceptical about the Reproducibility Initiative.
Nature Publishing Group and Rockefeller University Press have said that if the results of papers they publish are checked through the initiative, they will link to the repeated work alongside the originals. However, the latter has also stated that this does not constitute an endorsement owing to the scheme's "various shortcomings".
Whether and to what degree the initiative takes off is likely to depend on its reaching a critical mass of users. According to Dr Iorns, in the three weeks since it was launched, it has had submissions from a handful of research teams. The company is also in discussions with three funders, mainly disease-specific research foundations, interested in covering costs for any of their researchers who want their work replicated.
But Dr Nosek said that the initiative may spur on other projects with similar goals simply by making scientists and funders think about the issue of reproducibility.
"I hope that a number of labs will try it and that it will inspire new ideas. A real outcome for me would be that this spawns other initiatives like this and eventually shifts some of the norms in science."