Fresh concerns have been raised about weaknesses in the peer review system that underpins the publication process for academic research across the world.
The head of the UK's research integrity office this week suggested that "more and better education and training" was needed for peer reviewers after the editor of a biology journal conceded that a "misstep" in the peer review process led him to publish an article containing both creationist argument and plagiarism.
The withdrawal of the article by the respected journal Proteomics also came as a new study suggested that reviewers may be overstretched and unable to give proper scrutiny to the work they review.
The controversial paper published in Proteomics, "Mitochondria, the missing link between body and soul: proteomic perspective evidence", referred to "a single common fingerprint initiated by a mighty creator". As Times Higher Education reported last week, its publication caused a furore among scientists. The publisher, Wiley Interscience, has issued a statement saying that the article, by two scientists at Korea's Inje University, has been withdrawn on the grounds that "it contains apparently plagiarised passages from several previously published articles".
Michael Dunn, the Proteomics editor-in-chief, added that he was aware of the interest and controversy the article had caused "both with respect to the issue of plagiarism, as well as the controversial viewpoints expressed by the authors".
He said: "Clearly human error has caused a misstep in the normally rigorous peer review process that is standard practice for Proteomics and should prevent such issues arising."
The journal has not revealed the name of the reviewer or reviewers responsible for approving the paper.
Bloggers have called the publication of the paper an "obscene failure of peer review" and suggested that the Proteomics editors were "asleep at the wheel".
Andy Stainthorpe, head of the UK Research Integrity Office, said: "Peer review is not a perfect system and may not identify all occurrences of plagiarism, fabrication, falsification or inappropriate manipulation of data. The system might be improved through more and better education and training in both the methods for carrying out, and writing for, research, and in the practice and management of peer review."
Harvey Marcovitch, chair of the Committee on Publication Ethics, said peer reviewers could not be expected to spot plagiarism without software, but that the publication of unscientific "gobbledegook" represented a more serious failure.
He said: "It's a failure of the peer reviewers and of the editor - you don't publish something just because the peer reviewers approve of it."
Dr Marcovitch added that the peer-review system was "by no means foolproof, but is the best we've got".
"What matters in the end is that [a paper is] retracted if it is incorrect. Unfortunately, editors over the years have been very reluctant to retract, partly through fear of litigation and partly because of a false belief that you should only do so if all the authors agree."
An international survey of 3,040 academics, carried out by Mark Ware Consulting on behalf of the Publishing Research Consortium, found that 79 per cent of reviews are done by just 44 per cent of reviewers.
This group said they were carrying out an average of 14 reviews a year, compared with their preferred maximum of 13. Authors reported that the peer review process took 80 days on average.
The survey also scrutinised peer review methods. Single-blind review, in which reviewers know the identity of authors but not vice versa, is the most commonly used method, but only a quarter of respondents backed this approach.
More than half the respondents favoured double-blind review, where the names of reviewers and authors are hidden from each other.