Slay peer review ‘sacred cow’, says former BMJ chief

Peer review is a sacred cow that is ready to be slain, a former editor-in-chief of the British Medical Journal has said

April 21, 2015


Richard Smith, who edited the BMJ between 1991 and 2004, told the Royal Society’s Future of Scholarly Scientific Communication conference on 20 April that there was no evidence that pre-publication peer review improved papers or detected errors or fraud.

Referring to John Ioannidis’ famous 2005 paper “Why most published research findings are false”, Dr Smith said “most of what is published in journals is just plain wrong or nonsense”. He added that an experiment carried out during his time at the BMJ had seen eight errors introduced into a 600-word paper that was sent out to 300 reviewers.

“No one found more than five [errors]; the median was two and 20 per cent didn’t spot any,” he said. “If peer review was a drug it would never get on the market because we have lots of evidence of its adverse effects and don’t have evidence of its benefit.”

He added that peer review was too slow, expensive and burdensome on reviewers’ time. It was also biased against innovative papers and was open to abuse by the unscrupulous. He said science would be better off if it abandoned pre-publication peer review entirely and left it to online readers to determine “what matters and what doesn’t”.

“That is the real peer review: not all these silly processes that go on before and immediately after publication,” he said.

Opposing him, Georgina Mace, professor of biodiversity and ecosystems at University College London, conceded that peer review was “under pressure” due to constraints on reviewers’ time and the use of publications to assess researchers and funding proposals. But she said there was no evidence about the lack of efficacy of peer review because there was no “counterfactual against which to tension” it.

“It is no good just finding particular instances where peer review has failed because I can point you to specific instances where peer review has been very successful,” she said.

She feared that abandoning peer review would make scientific literature no more reliable than the blogosphere, consisting of an unnavigable mass of articles, most of which were “wrong or misleading”.

It seemed to her that the “limiting factor” on effective peer review was the availability of good reviewers, and more attention needed to be paid to increasing the supply. She suggested that the problem of the non-reproducibility of many papers was much more common in biomedicine.

But Dr Smith said biomedical researchers were only more outspoken about the problems because “we are the people who have gathered the evidence” of it. He said peer review persists due to “huge vested interests”, and admitted that scrapping peer review was “just too bold a step” for a journal editor currently to take.

“But that doesn’t mean [doing so would be] wrong… It is time to slaughter the sacred cow,” he said.

Meanwhile, science publisher Jan Velterop said peer review should be carried out entirely by the academy, with publishers limited to producing technically perfect, machine-readable papers for a “much, much lower” fee than typical open access charges.

He said that for many papers, it would be most appropriate for authors to seek the endorsement of a number of experts – on the basis of which the papers would be submitted to journals.

When he had approached publishers about the idea they had typically accused him of “asking us to find the quickest way to the slaughterhouse”. But the ScienceOpen platform had just agreed to offer publication by endorsement as an option, for a fee to be determined following consultation.

Readers' comments (5)

Thanks Paul, as co-founder of ScienceOpen (there seems to be a typo in your post) and as a researcher who also worked for more than a decade in the publishing industry, I am fascinated by Jan Velterop's concept. We are happy to let it fly!
In my own specialist field, I've had some very good suggestions from reviewers, especially from top-end specialist journals (like J Gen Physiol and J Physiol in my area). Glamour journals have not done as well. Nevertheless, in general, I have to agree with Richard Smith. It's become quite obvious that any paper, however bad, can now be published in a journal that claims to be peer-reviewed. As a badge of respectability, "peer-reviewed" now means nothing whatsoever. There will never be enough competent reviewers for the vast number of papers being published now. Georgina Mace says "abandoning peer review would make scientific literature no more reliable than the blogosphere". But that is already the case. You have to read the paper to find out if it's any good. All papers should first appear on archive sites, where feedback can be gathered before eventual publication. And when published, all papers should have open comments at the end. It's already happening at a rapidly increasing number of journals (like eLife and Royal Society Open Science). That would mean it would be essential for people judging you for jobs and promotion to read the papers themselves rather than relying on near-useless surrogates like impact factors and citations. Of course the amount of rubbish would be large, but no larger than it already is. And above all, it would make publishing very much cheaper. There would be no more huge charges for open access.
I was fortunate enough to be at the Royal Society meeting where this debate took place. There was no debate. Richard Smith unleashed a torrent of Actual Evidence on the costliness and ineffectiveness of traditional peer review -- which we scientists are supposed to like. In response, Georgina Mace had nothing but reiterated assertions that peer review is crucial to science. I transcribed Smith's opening salvo. It was this: "Peer review is faith-, not evidence-based; ineffective; a lottery; slow; expensive; wasteful; ineffective; easily abused; biased; doesn’t detect fraud; irrelevant. Apart from that, it’s perfect." It was a funny opening; but he also justified every one of those claims from scientific studies. For those who are sceptical, I recommend Smith's 2010 article "Classical peer review: an empty gun" in Breast Cancer Research 12(Suppl 4):S13, doi:10.1186/bcr2742 -- it can be found via that DOI. Is Smith right? I share the reluctance of most of the audience to accept the idea that much of the effort we've expended over the years was misspent. My heart says that peer review HAS to be central to science. Yet my head is persuaded by the evidence. Against my own wishes, I find myself coming to the conclusion that he is probably right. At the very least, Smith has shifted the burden of proof. If we are to continue spending £2 billion per year on pre-publication peer review (I think that was the estimate cited at the meeting, though I'm not sure), then it is on those who wish to do so to produce evidence that peer review IS effective, and at a level that justifies its cost.
Peer review is as good as our peerage system is :) As it is the dominant filter that precedes scientific publication at this time, most of us who have tried to communicate our findings and research outcomes will know by experience that reviews can be insightful, helpful, right, or the opposite. For a few years I have seen the process both ways, serving as an Editor at PLOS ONE - each time I receive a good review, I rejoice and use it to communicate with the authors effectively; on other occasions I struggle to find a balance between being fair to the authors, my duty as Editor to the Journal and the scientific community, etc. My first point - and PLOS ONE exemplifies this very well - is that who the Editor is and who the Peer Reviewers are matters: generalising for the sake of debate is dangerous. My second point follows from the above and relates to alternatives. One issue that I feel we see more of, and which is corrupting science, is the venues where you can pay-and-publish (with a varying degree of "peer review"). Scrapping the requirement for peer review is likely to make things worse in this respect. The need to read the papers should never go away (here I am intrigued, and a little concerned, by Douglas Kell's suggestion that computers may do the reading and provide useful summaries for us in the future). However, equally relevant is the question of where one browses for titles and abstracts from which to choose further reading - and how one chooses among competing calls on one's attention. I would suggest that good publication venues offer quality to their readership. It is a sad matter that so much of the publication industry is dominated by financial issues.
Reviewers are human too and sometimes susceptible to prejudice and ignorance. Would it be possible for the model that Wikipedia uses to be replicated in academic publishing? This would mean that research outputs were open to a wider audience and scrutinised in a more democratic way, while maintaining academic rigour.

