Peer review: how to be a good referee

Peer review is lauded in principle as the guarantor of quality in academic publishing and grant distribution. But its practice is often loathed by those on the receiving end. Here, seven academics offer their tips on good refereeing, and reflect on how it may change in the years to come

December 6, 2018

Every academic wants their papers and grant applications to be reviewed fairly, competently, promptly and courteously. So why is it that, when asked to take their own turn to review, so many academics turn into the infamous reviewer #2 (or, in the sciences, reviewer #3): the tardy, abusive and self-serving misanthrope hiding behind the cloak of anonymity to put a rival down regardless of merit?

Here, scholars from a range of disciplines and countries set down their thoughts on the dos and don’ts of peer reviewing. Issues addressed include how many reviewing requests to accept, when to recuse yourself, and whether it is ever appropriate to reveal yourself to the author, or to request citations to your own papers.

But whatever the failings of individuals, several contributors believe that, above all, it is the system itself that needs to up its game, with the profit motive, the restriction of reviewing to before publication and the lack of institutional rewards for undertaking it all coming in for scrutiny. Peer review may be the gold standard, but it clearly needs some attention if its currency is not to be devalued.


‘Be a true “peer”, who is helpful but firm’

We all know the usual reasoning: we peer review to be good citizens, to support dialogue in our fields, and so on. We want high-quality peer reviews ourselves, so we need to provide them for others. But in the rapidly changing climate surrounding research publication, new questions are arising: why provide this free labour, particularly for commercially run entities? Given that reviewing typically doesn’t “count” for anything within our institutions, should we cut back on our commitments? And which requests should we accept?

As editor-in-chief of a major journal in my field, I depend utterly on excellent referees. From this point of view, what makes a good referee is very clear: a timely response to the original invitation, continued contact as the reviewing progresses (especially if any problems arise) and a detailed report that provides truly constructive criticism. The last is perhaps the trickiest point.

Constructive criticism aims to make the proposed paper stronger and more compelling, not to try to get the authors to write a different paper altogether. Proofreading is not the point of refereeing: this should be left to authors, editors and production staff. But it is essential to assess whether the manuscript is participating in ongoing conversations in the field: an extremely high-quality article that is read by no one is not a benefit to anyone, including the authors.

Papers should also be reviewed to make certain that they adequately and accurately cite the relevant previous literature, particularly that by underrepresented groups, such as women; empirical evidence shows that their publications tend to be neglected. In many senses, there is nothing truly original left to be written. The best papers position their claims and arguments against the existing literature and enter a dialogue with it. This should be a central consideration for referees, who should point the authors specifically towards the literature – by, for instance, providing examples of relevant citations.

Reviewing well entails not accepting too many requests. Academics use different strategies, such as only doing a certain number per year, or only accepting a new request after they have completed the last. It is also essential to be in the right frame of mind when you sit down to review; if done at the last minute, at the end of a long day, what you produce is likely to be unhelpful.

How you present your review is vital: a report that is extremely blunt or aggressive to the point of rudeness will probably not be deemed usable by many editors, so producing such a report wastes your time and theirs. Avoid being (the perhaps apocryphal) reviewer #2 and instead be a true “peer”, who is helpful but firm. Consider how you would feel if you were to receive the report that you have written and strive to be accurate without being downright mean. Provide the authors with a way forward without doing the work for them: this might mean trying a different journal, reconceptualising the paper’s framing, or simply making some minor clarifications.

Part of your job as a referee is to give evidence-based advice to the editor about how to proceed, as in most cases referees’ reports are advisory. Do consider whether your judgement in this regard may be compromised by a conflict of interest, and if in doubt, consult the editor before proceeding. While reviewing papers by authors well known to you is generally frowned upon, it may be important to do so in smaller subfields if you are one of the few people with the relevant expertise and you believe that you can remain impartial.

Indeed, academics should focus their reviewing efforts on papers in their own subfields, where their expertise is strongest and where they benefit most from the window it gives on to emerging issues. Try not to accept review requests that are lateral to your expertise unless there are genuinely exceptional circumstances.

Reviewing also allows you to see models of best practice – and of poor practice – which can improve your own publication habits. Being required to critique someone else’s work makes you more reflective about your own research, particularly in terms of framing, argumentation and evidence.

Reviewing should be regarded not as a chance to lecture others but as an opportunity to learn, and to use your expertise to contribute to the growth of your field. Viewed in this way, it is also a much more pleasant and productive experience for everyone involved.

Rachel A. Ankeny is professor of history at the University of Adelaide, where she is also associate dean of research and deputy executive dean in the Faculty of Arts.



‘Be constructive but unvaryingly succinct’

Before writing this piece, I consulted widely with colleagues about why they undertake peer review. The answers were pretty much as I expected. Senior colleagues feel that it is an essential requirement of being a scientist to engage in the process – “a moral obligation”, as one put it. Many spend a significant amount of time at the weekend or in the evenings reviewing, since it is “a duty above and beyond the day job”.

They are certainly aware of the flaws of peer review – unconscious bias, conflicts of interest, personal loathing, lack of specialist knowledge, ignorance, not being fully up to date on the topic, and so on. Despite all this, my colleagues still see peer review as the “gold standard of science”. One – and he is by no means the first to do so – paraphrased Churchill’s thoughts on democracy: “Peer review is the worst form of assessment except for all those other forms that have been tried from time to time.”

Interestingly, my junior colleagues are more pragmatic. I quote directly from one succinct colleague: “I review because it is the only way to prevent crap from being published that I would then have to wade through before I [can] get my own work published.” Early career scientists are also much more selective in what they will review, choosing manuscripts from the top journals and only in fields of direct relevance, with the primary aim of “finding out what is going on”. Another difference between senior and junior colleagues is that the latter are much more likely to take rejection personally, and to get upset. Whether this is because age results in the accumulation of wisdom, the denudation of the emotions or the acquisition of alternative forms of gratification remains unclear to me.

Henry Oldenburg, the founding editor of The Philosophical Transactions of the Royal Society of London, is credited with inventing pre-publication peer review in 1665. By the 1830s, all Royal Society publications were subject to some form of external peer review. However, it was not until the mid-20th century that refereeing became the norm for most journals; indeed, Nature instituted formal peer review only in 1967; previously, editors had relied on their own expertise, supplemented with internal discussion.

But editors retain immense power. They decide whether a manuscript should be sent for peer review, to whom the manuscript will be sent, and whether the reviewers’ advice is acted upon or ignored. A senior colleague makes the point that “the key thing about getting a paper published these days is to make the abstract sufficiently idiot proof to get the [section] editor to send it out for peer review”. And while my colleagues express frustration with referees, there is a grudging trust that, on balance, they do a good job. By contrast, there is little faith in editors’ ability to act as qualified “gatekeepers” of published science.

In my own case, I am sometimes frustrated by the manuscripts I am asked to review. Lack of novelty, unclear language, unspecified methodological details, weak experimental design, muddled statistics and obscure data presentation can all elicit this response. If the manuscript’s conclusions are faith-based and not data-based, I also get very disappointed, resenting that my evening’s work has not taught me something new. And I get especially irritated when important data are deliberately buried in the supplementary data section, or when there are obvious technical mistakes. I once reviewed a mouse paper that reported levels of a hormone that a mouse is genetically incapable of making – that experience required a very large, calming whisky.

My responses in such cases are, hopefully, always constructive but are unvaryingly succinct – perhaps even blunt. And I don’t really discriminate between “top” and “mainstream” journals. Good science is good science.

But on the rare occasions when the work is truly novel and brilliantly designed, I am transported to a new plane of happiness, and I gush shamelessly in my written response, even if a few more experiments or tweaks are required. I might even admit coyly, in conversation with the author, that I was a referee!

Shortly after my election to one of the UK’s national academies, I joined its election committee. I witnessed for the first time the multiple layers of peer review that eventually lead to the appointment of a new fellow. In view of the intense competition, scrutiny and detailed discussion, I remarked to another committee member that I wondered how the bloody hell I had ever been elected. The individual turned his head, and with a complete absence of humour, answered: “Well, in my case, I wondered why it had taken so long”.

On balance, peer review is an invaluable force for good, but it clearly does not select for humility!

Russell Foster is chair of circadian neuroscience at the University of Oxford.



‘I wish more editors would delete unhelpfully negative comments’

The academic peer review process is both a blessing and a curse. As a journal editor, I rely on colleagues’ willingness to review submissions, recognise promising articles and recommend improvements when required. Peer reviewing can be a generous act of sharing our wisdom and experience – typically without remuneration – with fellow scholars whom we may not even know. It involves more than simply stating whether or not a piece of work is good enough for publication; it also allows us to validate authors’ scholarly efforts and offer them constructive feedback, thus sustaining and enriching our own research field. If done well, it can foster collegiality, encouraging fellow academics and students to produce and publish their best work.

What makes a good peer review? Some of the best I have received (as both an author and journal editor) have been honest yet kind in their feedback, laying out the manuscript’s strengths and weaknesses in generous detail. They have also been constructive in their criticism, offering suggestions for how to make the argument stronger: what additional sources could be useful and which elements of the discussion should be highlighted. More than anything, good peer reviews encourage the author to keep working on their article, to make it as good as it can be, and to see it as a piece of work with real academic value and merit.

Yet being a peer reviewer rarely reaps any professional rewards, as institutions seldom recognise it as a measure of scholarly esteem, or as a worthy contribution to the research environment. This is especially ironic given the massive pressure put on academics to ensure that their own publications are properly peer reviewed.

Moreover, peer review is a system that is open to abuse. The thrill of anonymity allows some reviewers to vent their frustrations and pet peeves on fellow academics’ work. I have received my own share of reviews that left a dent in my confidence as a writer and researcher. Most frustrating were those that focused less on the strengths of my argument or the quality of my writing than on whether or not the reviewer shared my ideological stance. One particularly crushing reviewer from a few years ago recommended my article be rejected because they disagreed with my support for LGBT equality. Thankfully, the journal editor ignored their recommendation and offered to publish my article.

I have also spoken with more than one disenchanted graduate student whose experiences of receiving negative reviews left them questioning their abilities as researchers, and even their right to a place in the academy. A few months ago, a doctoral student told me about the reviews she had received from a highly esteemed journal in her field. One of the reviewers had suggested that her argument was “preposterous” and lacking in any academic merit. “Why are academics so unkind to each other?” she asked me. I had no answer. But I have gotten into the habit of carefully checking reviews sent to me as a journal editor before I pass them on to the author, deleting unhelpfully negative comments or rephrasing them into more constructive feedback. I wish more journal editors would do the same.

Peer reviewing in the humanities is typically double-blind, but a colleague recently suggested to me that while authors should remain anonymous, they should be allowed to know the identity of their reviewers. Would that make for better and more constructive peer reviews? If your reputation as a learned academic and a decent human being is on the line, might you be less tempted to offer a snarky or unhelpful response? Or might such a move make academics even less willing to perform this vital task, for fear that negative reviews could come back to bite them?

Perhaps we could start by making open peer reviews an option for reviewers. That way, authors could properly acknowledge the assistance they get from their reviewers. And reviewers, in turn, might learn some important lessons about collegiality and kindness – virtues that are all too rare in academia but that we academics should value beyond rubies.

Caroline Blyth is a senior lecturer in theological and religious studies at the University of Auckland.



‘Think twice before seeking to insert a reference to your own paper’

Accepting the role of reviewer allows you to take part in the communal effort to ensure the validity of what is published, on topics you know and cherish. You help to maintain the standards of your favourite journals, where your own stuff is often published. You get to know about the latest findings before everybody else. And, as an early career researcher, you learn about how peer review works – which can be useful when you enter the game as a corresponding author.

Personally, I also enjoy reviewing because it forces me to spend some time on a paper instead of just screening it quickly and running to the next meeting or lecture. This sometimes gives me inspiration for my own scientific programme.

But there are also bad reasons to engage in peer review. These typically revolve around using the power it confers to gratify your ego and promote yourself. One example is to force the author to include gratuitous citations to your own papers. It is not that requests to cite your own papers are always wrong. After all, you have been selected as a reviewer because the editor thinks you are an expert on the topic. You know the literature very well, and you have probably published on the topic (that’s often how editors find you!). So you must fight your impostor syndrome and trust your judgement on whether the submitted paper builds on and cites the appropriate existing literature. If you believe that it doesn’t, you must say so.

However, think twice before seeking to insert a reference to your own paper. Remember that authors are not obliged to cite every paper in the field, nor the most recent one. Keep two questions in mind, and only ask for an additional citation if the answer to one of them is a loud yes.

  • Is the description of the state of the art in the introduction incomplete without this citation?
  • Would the citation enrich the discussion by confirming or contradicting some of the interpretations?

Above all, keep in mind that the mission you have accepted as a reviewer is to help ensure the quality of the scientific discussion. Nothing more (but nothing less!). While it may be upsetting to see a colleague ignore your work, it is also upsetting for colleagues to receive unjustified requests for extra citations, and good editors blacklist reviewers who do this. Better to ask yourself why the authors missed your paper. Perhaps you should publish in more relevant journals? Perhaps you should be more prominent at conferences or on social media?

On a more cheerful note, I want to share a personal anecdote. I was once asked to review a paper for one of the “shiniest” journals there is. It was on the same topic as my PhD, from which I had graduated a year before, but the authors did not cite any of our contributions on the topic. Self-esteem aside, I truly thought this was a problem.

After much hesitation, I decided to include a timid suggestion to cite one of our papers, camouflaged in the middle of my detailed, five-page report. One month later, I came across the four other anonymous reviewers’ reports. Three of them openly criticised the authors for having completely overlooked our body of work, and asked them to correct that.

That day, I felt proud and happy about my work as never before. Peer recognition is sweeter when it is not forced.

Damien Debecker is a professor in the Institute of Condensed Matter and Nanosciences at Université Catholique de Louvain, Belgium.



‘Determining the fate of a paper via the judgement of a small, select bunch of peers is well past its sell-by date’

Arrgghhh. A second reminder about that paper I agreed to review has just appeared in my inbox. Remind me of just why I agreed to do this? I’ve got a stack of undergrad lab reports to grade, a grant application to write, tutorials and “one-to-one” meetings to organise, undergrad and PhD projects to supervise, and the documentation for an equipment tender process to sort out within the next couple of days. And let’s not even mention that paper of our own that’s been gestating for over a year now and that stubbornly refuses to move off my “to do” list...

So why do I agree to review manuscripts when I have more than enough on my plate? After all, I’m not going to get paid for doing the review – despite the eye-wateringly high profit margins of many academic publishers. Nor will my efforts contribute to the research profile of my department or university. Even the undergraduates with whom I recently discussed and dissected peer reviewing (as part of my university’s “politics, perception and philosophy of physics” module) were rather taken aback to be told that it generally relies on unpaid, unrecognised volunteers.

On average, I review about six manuscripts a year (although I turn down or pass on about double that number of requests). I feel duty bound to do so; the entire scientific process rests on peer review and I am acutely aware that my colleagues have very often invested considerable time and effort in reviewing papers that my group has submitted to journals. In the vast majority of cases, moreover, their efforts have resulted in improvements. Sometimes the improvements are dramatic, in one case transforming the manuscript beyond recognition.

That said, peer reviewing also has deep flaws. Reviewing a paper well often takes a considerable amount of time; I’d estimate about six hours, on average. On occasion, however, it’s absolutely clear from even the most cursory glance at the data that the authors are “over-reaching”, and a review can be written and submitted in well under an hour.

It is frustrating, however, to invest time and effort in reviewing only to find that some colleagues are asleep on the job; there have been a number of examples where blatant, “poke you in the eye” data manipulation has passed through the net in my field of nanoscience. The most egregious in recent years was the infamous “nano chopsticks” paper published in the prestigious American Chemical Society journal Nano Letters in 2013. (Google it.) This was “cut-and-paste” science; outlandish fraud of a type that even a primary school child could identify, but it was picked up not by the reviewers or the editor but by a blog, and subsequent social media traffic.

On balance, my view is that determining the fate of a paper via the judgement of a small, select bunch of peers – or, worse, a single, solitary reviewer – is a practice well past its sell-by date. Sites such as PubPeer open up the reviewing process to a much wider audience, “crowd-sourcing” the assessment of a paper. There is particular scope to combine post-publication peer review of this type with open access publishing, and it is clear that academic science publishing is evolving in this direction (to the consternation of many traditional publishers).

One key issue remains with open peer review, however: anonymity. Of course, anonymity has a key role to play in protecting the source of criticism, who may well be an early career researcher whose future in academia (and beyond) may otherwise be badly affected by their critique of a world-leading group’s research. In the worst-case scenario, anonymity is essential to protect the identity of whistleblowers who highlight fraud. But, as I have previously argued in Times Higher Education (“Should post-publication peer review be anonymous?”, Features, 10 December 2015), it can be counter-productive in less extreme cases. The fact that the authors of traditional anonymous reviews are known to the editors may not always restrain those reviewers from being discourteous and making unreasonable and unethical demands, but there remains a universe of difference between moderated anonymous feedback and the free-for-all that unfolds in a contentious, or even not-so-contentious, PubPeer thread, where full anonymity is guaranteed.

Whether I would make more or less time for reviewing in such a post-publication future is open to question, and I am sure that many others would be similarly queasy. Leaving reviewing to scientific trolls, however, would be worse than the current state of affairs. Peer reviewing is a dirty job. But if someone has to do it, better that the burden be shared by those who would rather the mud wrestling be kept to a minimum.

So, either way, those undergrad lab reports are just going to have to wait.

Philip Moriarty is professor of physics at the University of Nottingham.



‘If there is no evidence of revenue being used to support academic activities, I am more likely to refuse to give my labour freely’

Peer reviewing for journals is one of a number of collegial activities for which academics are not recompensed, but which we do as an aspect of maintaining the academic ecosystem.

However, increasing concerns are being raised about how that sits in a world of ever-escalating subscription costs for journals – stakes in many of which have been sold to a handful of highly profitable transnational corporations by the individuals, departments and professional associations that were previously their sole owners.

For the most part, those academic collectives plough their share of the revenue back into the academic ecosystem. For example, the Sociological Review Foundation, for which I am a trustee, uses it to fund activities for early career scholars, including a postdoctoral fellowship, as well as investing in the sociological community more broadly. Other journals, however, direct a significant share of their income into the pockets of their owner-editors.

When asked to peer review, I try to find out which of these categories the journal falls into. The opacity of such arrangements makes it increasingly difficult to ascertain, but it is usually possible to discover if the journal is owned by a collective of some sort. If the journal appears to be part-owned by an individual and there is no evidence of the revenue being used to support broader academic activities, I am more likely to refuse any requests to give my labour freely.

Another way in which the collegiality of peer review is being undermined is the intensification of publication requirements by university management, combined with a failure to recognise the need for reciprocity in this process. It is telling that few universities provide workload points for peer review even as they encourage academics to publish in peer-reviewed journals with high impact factors.

A simple calculation can be made. If a senior academic publishes two articles per year, then at a minimum, each will be considered by one editor and two or three peer reviewers. That represents at least six peer review “donations”. On the assumption that the academics will also receive some rejections, and some recommendations to revise and resubmit, we might conclude that each senior academic involved in this system should be reviewing about 10 articles per year, in order to “pay back” those donations. That rationale is only reinforced by the consideration that not all submitting authors are in a position to act as peer reviewers – yet it is not clear how many senior academics currently fulfil such expectations.
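
For those who like to see such arithmetic made explicit, here is a minimal sketch of that reciprocity calculation in Python. The function name and all parameter values are illustrative assumptions drawn from the estimate above (two papers a year, one editor plus two reviewers per submission at a minimum, and a rough multiplier to allow for rejections and revise-and-resubmits); they are not empirical data.

```python
# Back-of-envelope model of the peer review "debt" described above.
# Every number here is an illustrative assumption taken from the text,
# not an empirical finding.

def annual_review_debt(papers_per_year: int = 2,
                       assessors_per_submission: int = 3,
                       resubmission_multiplier: float = 1.7) -> float:
    """Estimate how many reviews a senior academic 'owes' per year.

    papers_per_year:           articles published annually (text assumes 2)
    assessors_per_submission:  one editor plus two reviewers, at minimum
    resubmission_multiplier:   rough allowance for rejections and
                               revise-and-resubmit rounds, each of which
                               consumes further reviewing labour
    """
    minimum_donations = papers_per_year * assessors_per_submission  # >= 6
    return minimum_donations * resubmission_multiplier


if __name__ == "__main__":
    # With these default assumptions the estimate comes out at roughly
    # 10 reviews per year, the figure suggested in the text.
    print(f"Estimated reviews owed per year: {annual_review_debt():.0f}")
```

Raising the multiplier, as would be appropriate in fields with high rejection rates, only pushes the “pay back” figure further above the minimum six donations.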

Now the publishing system faces the prospect of Plan S, which would mandate that all research funded by a range of global funding bodies, including UK Research and Innovation, the Wellcome Trust and the Bill and Melinda Gates Foundation, be made open access immediately on publication. This would probably be financially possible only through the introduction of author payment charges for publication. If, as expected, that shift still results in lower revenues for journal publishers, one question is where it leaves the professional associations that derive close to 75 per cent of their income from journal revenues.

Moreover, in the context of mandated open access and author payment charges, journals will seek to maximise their revenue by maximising the number of articles they publish. Rigorous peer review could be an obstacle to that, and it is not clear what its place will be.

Gurminder K. Bhambra is professor of postcolonial and decolonial studies at the University of Sussex.



‘Whether journals in a world of preprints will continue to carry out pre-publication review is open to question’

As scientists, many of us have a love/hate relationship with peer review. Most of us believe it is necessary to maximise the integrity of the scientific literature, yet we chafe when our own manuscripts are criticised. And we complain when reviewers take too long to review our manuscripts even as we procrastinate about completing that review request sitting in our own inboxes.

There are good reasons to say yes to those requests, such as building a reputation with the editors of a journal in which you might want to publish, and previewing cool new science. However, the increasing volume of manuscripts, coupled with ever greater pressure on scientists’ time, means that it is necessary to turn most of them down if you want to get any of your own work done. Finding the right balance isn’t easy.

For their part, journals continue to struggle to secure appropriate reviewers (especially during the dreaded summer and holiday periods of high submission and low availability) even as initiatives such as Publons have been established in the hope of giving more recognition for a task that is usually performed anonymously.

Further challenges to peer review are presented by the accelerating use of preprint servers in biomedical sciences, replicating the well-established practice in physics. Preprints are posted on bioRxiv after minimal oversight by “affiliates” (of which I’m one of many), who check only that they are scientific.

Journals’ lucrative existing role as science’s middlemen will not be readily surrendered, and it is plausible that the top journals at least will be able to continue leveraging their brands to select and curate the best preprints, conferring on them a badge of quality that, notwithstanding pressure to stop judging scientists on the basis of where they have published, is likely to remain of importance to recruitment and promotion committees for years to come.

But whether journals in such a world will continue to carry out pre-publication peer review is open to question, not least because, while fewer journals would mean less demand for reviewers, there would also be less incentive for reviewers to take part. After all, if a manuscript can be accessed online immediately after submission, reviewers will no longer enjoy the opportunity to get a sneak preview of the latest research. Payment of reviewers could compensate, but it would bring with it other issues, such as conflicts of interest.

Journals may decide, instead, to rely on post-publication peer review by archive users. Most preprint servers allow comments (and subsequent corrections of the preprint) to be added. There may well be further developments in this regard, including the incorporation of new data into a review as a means to extend, clarify or rebut findings; it is possible to imagine a journal approaching the author of a preprint and offering to publish it on condition that such points are addressed.

One issue would be reviewer identification. The fact that post-publication reviewers are typically identified may attract some reviewers in search of greater exposure. But it probably puts off far more. Given the option, most reviewers choose not to make their identity known for several reasons, including potential backlash from disgruntled authors and fellow reviewers – even though most recognise that science would benefit if this largely hidden but substantive and valuable literature were exposed to daylight. Early career academics in particular are probably wise to be wary in this regard.

This problem could be solved by allowing online reviews to be anonymous. However, it is unclear how much time busy scientists would be prepared to put into post-publication review when the benefits of previewing research or building a relationship with a particular journal are no longer in place. While it seems unlikely that many people would be motivated to submit the kind of comprehensive review typical of today’s pre-publication process, the hope might be that enough people would be willing to comment briefly on aspects of the paper that particularly strike them, so that the whole approximates to something substantive. Previous experiments with post-publication peer review are not encouraging, but they were competing with established pre-publication review.

While the demise of expert pre-publication review by peers might be greatly lamented by the dreaded reviewer #3, the damage it would inflict on true scientific progress is open to question.

Indeed, perhaps it is time to ask whether peer review itself warrants reviewer #3’s unique brand of attention.

Jim Woodgett is director of research and a senior investigator at the Lunenfeld-Tanenbaum Research Institute, Toronto.

POSTSCRIPT:

Print headline: Blowing the whistle on bad refereeing
