If you write an article showing that error-ridden research is more likely to be retracted, it is probably a good idea to double-check it for mistakes.
Unfortunately, a group of researchers at Imperial College London did not proofread their paper quite thoroughly enough.
Almost as soon as the article, “Frequency of discrepancies in retracted clinical trial reports versus unretracted reports: blinded case-control study”, was published, three mistakes were discovered by a keen-eyed academic, who wondered whether they might have been inserted deliberately to test readers’ alertness.
Graham Cole, a British Heart Foundation clinical research fellow at Imperial, admitted that being alerted to the rounding, labelling and graph errors was “a bit embarrassing”.
“When I was writing this paper, I was conscious of being in a glasshouse throwing stones,” he said.
But Dr Cole added: “In some ways it was almost fortuitous because it allowed us to demonstrate what we think…is correct scientific practice.” He said that the authors had immediately published their raw data alongside the paper and corrected the errors.
The study selected 50 retracted articles on clinical trial results and, as a control, another 50 that appeared in the same journals. Three scientists, blinded to which had been retracted, scoured the papers for mathematical and logical contradictions.
In total, they found 479 discrepancies: 348 appeared in the retracted reports, and 131 in those that had not been retracted.
Dr Cole stressed that the results did not mean that papers with discrepancies were necessarily more deeply flawed.
But, referring to the unreleased data on which the experiments were based, he said: “If the things you can check aren’t right, should you trust the things you can’t check?”
The discovery that papers with lots of mistakes are more likely to be retracted might not sound surprising, but fewer than one in five was withdrawn for actual errors. Nearly half were retracted for misconduct, and 14 per cent for plagiarism.
Raw data should be published alongside articles, Dr Cole said. In the past, when he had contacted other authors to ask for their data, he had found them “very unwelcoming”, even over basic questions such as how many patients a study had included. “When that happens, that doesn’t enthuse trust,” he added.
This reluctance may come from fears that full disclosure will allow other scientists to use the data to reach different conclusions or to find something more significant, he suggested. Open release of data also “greatly” increases the risk that problems will be discovered in the findings, he added.
The paper, published at the end of last month in the British Medical Journal (BMJ), suggests that journals should provide an online forum where readers can quickly raise discrepancies.