Peer review is frustrating and flawed – here’s how we can fix it
What would peer review 2.0 look like? Mark Humphries offers ways to optimise the process for better efficiency and research outcomes
Fancy a torrent of war stories? Just ask an experienced researcher about having their papers reviewed. Sucking air through their teeth, they’ll tell you about that time the reviewer said their paper was “simply, manure” or complained that “the writing and data presentation are so bad that I had to leave work and go home early and then spend time to wonder what life is about”.
In the cold light of day, peer review can seem an absurd way to decide whether a paper is worth publishing. The sample size is tiny, just two or three reviewers, each bringing their own ideas of what questions are important and what a good study looks like. Consistency among reviewers can be poor, as seen in the scoring of conference papers and grants. And cloaked in anonymity, reviewers can lapse into unprofessionalism that is both fodder for endless memes and evidence of bias against minority groups.
For many journals, reviewers are also gatekeepers, asked to judge the novelty and importance of the work, when novelty is arbitrary, and importance is often clear only in retrospect.
Peer review is also a thankless task. It’s done pro bono, and there’s an expectation that you’ll do your share of reviewing when the time comes. Invitations to review a paper are increasing year-on-year, but the proportion of acceptances is falling, and editors frequently complain of having to send 10 or more invitations to find one or two reviewers. Squeezed in around other commitments, reviews often betray the speed at which they are written, and can be incoherent in their argument and unclear on advice for revisions.
Here are a few proposals for how to fix this mess, and get to version 2.0:
Journals should publish all reviews
Knowing that your words will appear in public, attributed or not, can only serve to focus your attention more closely on what you say and how you say it. Publishing reviews requires little overhead, yet few journals publish them all (and voluntary publication only provides an escape clause for critical reviewers). Sure, for some reviewers, the promise – or threat – of their words being published will make no difference. But every little helps.
Reach a consensus
Journals could send to authors a single consensus review, based on the editor’s and reviewers’ individual reviews, that also states clearly what the authors must do for the paper to be accepted. Done right, this would force consistency, temper extreme or unprofessional comments, and improve the chances of producing a readable, coherent text.
That it might also reduce the likelihood of being asked for more pointless experiments is a bonus.
Doing the consensus right, though, relies on editors committing to it. The process at eLife, for example, has been exemplary. But I've also received, from another journal, a "consensus" letter that was simply two reviews copied and pasted together.
One objection is that a consensus review slows things down a little – but it’s hard to see why that’s a bad thing. Indeed, research fields with slower review processes rate their quality more highly.
Review only after publication
With post-publication review, you get the expert opinion and a chance to improve the paper, but not the gatekeeping or judgements on novelty or importance.
But it raises questions: who will do the review? Without the need to review before publication, the motivation to review goes away.
And what about prestige? Some journals, notably F1000Research, have been doing post-publication review for years, yet they have gained little prestige and had little impact on mainstream publishing. Arguably, the prestige of a journal stems from the fierceness of its pre-publication peer review; we often assume that a paper that has survived the editors and reviewers at a highly selective journal is worth reading.
There is a halfway house: the overlay journal. These act as portals to preprints hosted on a well-established platform, typically arXiv. The idea is simple: you publish the preprint, submit a link to an overlay journal, and the journal arranges reviews. You then revise the preprint and, once it is accepted, the journal formally publishes it, assigning a DOI and listing it in subject indexes. The whole time, anyone can read and cite the preprint and make up their own mind about its worth, without the gatekeeping of peer review.
Decoupling reviewing from journals
What if a paper were reviewed once for a group of journals, and the authors then revised and submitted it to the journal of their choice?
The ambitious Review Commons platform is testing this idea with 17 well-respected biology journals, including those from EMBO and PLoS. In this process, authors submit to the platform, and Review Commons handles the reviews and journal submission. This solves many problems at a stroke: gatekeeping goes away, because the reviewers aren't assessing fit or quality for a specific journal, and time is saved by not duplicating the review independently at journal after journal.
Ethics, checklists and other nudges
Small tweaks can improve the process, such as prompting reviewers to think ethically about their review and its language. The Committee on Publication Ethics (COPE) has long had a code of ethics for peer reviewers. Requiring reviewers to confirm that they’ve read that code, or something similar, before submitting their review would be a start.
Submission checklists could work harder to help catch simple errors. One simple addition: have you finished the damn paper before submitting it? I've reviewed papers with unfinished sentences, missing paragraphs, missing figures and figure panels, and figure contents that didn't match the caption or the text. This carelessness erodes trust.
This is important because peer review depends on trust between reviewers and authors – that the experiments were done as described, that the data are real, that the analyses were done and accurately reported. The less the reviewers trust, the more they will question a paper, the more they will seek to find fault – and the more adversarial and flawed the process becomes.
Peer review 2.0 can solve many problems, but not the basic one: do unto others as you would have them do unto you.
Mark D Humphries is a professor of computational neuroscience at the University of Nottingham.
If you found this interesting and want advice and insight from academics and university staff delivered direct to your inbox each week, sign up for the THE Campus newsletter.