Slay peer review ‘sacred cow’, says former BMJ chief

Peer review is a sacred cow that is ready to be slain, a former editor-in-chief of the British Medical Journal has said

April 21, 2015


Richard Smith, who edited the BMJ between 1991 and 2004, told the Royal Society’s Future of Scholarly Scientific Communication conference on 20 April that there was no evidence that pre-publication peer review improved papers or detected errors or fraud.

Referring to John Ioannidis’ famous 2005 paper “Why most published research findings are false”, Dr Smith said “most of what is published in journals is just plain wrong or nonsense”. He added that an experiment carried out during his time at the BMJ had seen eight errors introduced into a 600-word paper that was sent out to 300 reviewers.

“No one found more than five [errors]; the median was two and 20 per cent didn’t spot any,” he said. “If peer review was a drug it would never get on the market because we have lots of evidence of its adverse effects and don’t have evidence of its benefit.”

He added that peer review was too slow, expensive and burdensome on reviewers’ time. It was also biased against innovative papers and was open to abuse by the unscrupulous. He said science would be better off if it abandoned pre-publication peer review entirely and left it to online readers to determine “what matters and what doesn’t”.

“That is the real peer review: not all these silly processes that go on before and immediately after publication,” he said.

Opposing him, Georgina Mace, professor of biodiversity and ecosystems at University College London, conceded that peer review was “under pressure” due to constraints on reviewers’ time and the use of publications to assess researchers and funding proposals. But she said there was no evidence that peer review lacked efficacy, because there was no “counterfactual against which to tension” it.

“It is no good just finding particular instances where peer review has failed because I can point you to specific instances where peer review has been very successful,” she said.

She feared that abandoning peer review would make scientific literature no more reliable than the blogosphere, consisting of an unnavigable mass of articles, most of which were “wrong or misleading”.

It seemed to her that the “limiting factor” on effective peer review was the availability of good reviewers, and that more attention needed to be paid to increasing the supply. She suggested that the problem of the non-reproducibility of many papers was much more common in biomedicine than in other fields.

But Dr Smith said biomedical researchers were only more outspoken about the problems because “we are the people who have gathered the evidence” of them. He said peer review persisted because of “huge vested interests”, and admitted that scrapping it was “just too bold a step” for a journal editor currently to take.

“But that doesn’t mean [doing so would be] wrong… It is time to slaughter the sacred cow,” he said.

Meanwhile, science publisher Jan Velterop said peer review should be carried out entirely by the academy, with publishers limited to producing technically perfect, machine-readable papers for a “much, much lower” fee than typical open access charges.

He said that for many papers, it would be most appropriate for authors to seek the endorsement of a number of experts – on the basis of which the papers would be submitted to journals.

When he had approached publishers about the idea they had typically accused him of “asking us to find the quickest way to the slaughterhouse”. But the ScienceOpen platform had just agreed to offer publication by endorsement as an option, for a fee to be determined following consultation. 

paul.jump@tesglobal.com


Readers' comments (4)

Thanks Paul, as co-founder of ScienceOpen (there seems to be a typo in your post) and as a researcher who also worked for more than a decade in the publishing industry, I am fascinated by Jan Velterop's concept. We are happy to let it fly!
In my own specialist field, I've had some very good suggestions from reviewers, especially from top-end specialist journals (like J Gen Physiol and J Physiol in my area). Glamour journals have not done as well. Nevertheless, in general, I have to agree with Richard Smith. It's become quite obvious that any paper, however bad, can now be published in a journal that claims to be peer-reviewed. As a badge of respectability, "peer-reviewed" now means nothing whatsoever. There will never be enough competent reviewers for the vast number of papers that are being published now. Georgina Mace says "abandoning peer review would make scientific literature no more reliable than the blogosphere". But that is already the case. You have to read the paper to find out if it's any good. All papers should first appear on archive sites, where feedback can be gathered before eventual publication. And when published, all papers should have open comments at the end. It's already happening with a rapidly increasing number of journals (like eLife and Royal Society Open Science). That would mean it would be essential for people judging you for jobs and promotion to read the papers rather than relying on near-useless surrogates like impact factors and citations. Of course the amount of rubbish would be large, but no larger than it already is. And above all, it would make publishing very much cheaper. There would be no more huge charges for open access.
Peer review is as good as our peerage system is :) As it is the dominant filter that precedes scientific publication at this time, most of us who have tried to communicate our findings and research outcomes will know from experience that reviews can be insightful, helpful, right, or the opposite. For a few years I have seen the process both ways, serving as an Editor at PLOS ONE - each time I receive a good review, I rejoice and use it to communicate with the authors effectively; on other occasions I struggle to find a balance between being fair to the authors, my duty as an Editor to the Journal and the scientific community, etc. My first point - and PLOS ONE exemplifies this very well - is that who the Editor is and who the Peer Reviewers are matters: generalising for the sake of debate is dangerous. My second point follows from the above and relates to alternatives. One issue that I feel we see more of, and which is corrupting science, is the venues where you can pay and publish (with a varying degree of "peer review"). Scrapping the requirement for peer review is likely to make things worse in this respect. The need to read the papers should never go away (in this regard I note the intriguing suggestion by Douglas Kell that computers may do the reading and provide useful summaries for us in the future). However, equally relevant is the question of where one browses for titles and abstracts from which to choose further reading - and how one chooses among competing calls. I would suggest that good publication venues offer quality to their readership. It is a sad matter that so much of the publishing industry is dominated by financial issues.
Reviewers are human too and are sometimes susceptible to prejudice and ignorance. Would it be possible for the wiki model to be replicated in academic publishing? Research outputs would then be open to a wider audience and scrutinised in a more democratic way, whilst maintaining academic rigour.
