Earlier this year, the former British Medical Journal editor Richard Smith called, in these pages, for pre-publication peer review to be abolished.
“Peer review”, he wrote, “is supposed to be the quality assurance system for science, weeding out the scientifically unreliable and reassuring readers of journals that they can trust what they are reading. In reality, however, it is ineffective, largely a lottery, anti-innovatory, slow, expensive, wasteful of scientific time, inefficient, easily abused, prone to bias, unable to detect fraud and irrelevant.”
Far better, he said, to just publish all papers online and let “the world…decide what’s important and what isn’t” (“Ineffective at any dose? Why peer review simply doesn’t work”, Opinion, 28 May).
In an effort to broaden the debate, we asked six academics from across the disciplines whether their own experience bears out Smith’s observations – and whether they share his conclusion.
No one is short of horror stories. Some contributors have received comments that are outright abusive, while our two scientists are very concerned about peer reviewers’ habit of expecting the moon and asking for legions of additional experiments that would take enormous amounts of time and money to carry out.
But a common theme seems to be that peer review should not be seen as a monolith to be either worshipped or cast into the abyss. Rather, it is a patchwork of millions of judgements by individuals who vary widely in their conscientiousness, realism, constructiveness, manners and ethics. And none of our contributors – who, of course, are also reviewers themselves – is willing to conclude that the weakest links are entirely beyond strengthening.
Perhaps Andrew Oswald, professor of economics at the University of Warwick, best sums up the prevailing mood. Paraphrasing Samuel Johnson’s famous remark about London, he concludes that “the scholar who is tired of referees is tired of life”.
But Johnson is also said to have noted that the “true measure of a man” – and presumably of a woman, too – “is how he treats someone who can do him absolutely no good”. Perhaps that is the quotation on which anonymous peer reviewers should reflect most earnestly.
All referees will tell you that they are open‑minded, write gently and are on the lookout for work of fabulous creativity. But they aren’t
I can remember only two referee reports that were extraordinarily unpleasant.
The first was sent to me around 1980, when I was finishing my PhD and submitting papers to journals. It opened with: “Reading this manuscript makes me question Oswald’s intellect.” The second happened a few weeks ago, when a referee’s report on a manuscript of mine began with: “The title of this paper is pretentious.” After a start like that, it no doubt went steadily downhill, but I have not bothered to read the rest, and may never do so.
This sparse number of horror stories is not, I should explain, because in those decades only two referees have been unpleasant: I have had hundreds of rejection letters. It is just that, as a deliberate strategy that I would recommend to any young scholar, I do not commit nasty comments to memory.
If the majority of referees like your research, you can be certain that you are doing boring work. To push forward ideas that will matter to the world, you and I may as well accept that we are going to have to upset people and crawl through the trenches of muddy carping and explosive criticism. All referees – and I suppose that must include me – are subconsciously looking for manuscripts that play back to them ideas they already find familiar and palatable, and ones that lend support to their own prior research. That is bad and sad. However, it also happens to be human.
Of course, all referees will tell you that they are open-minded, write gently and are on the lookout for work of fabulous creativity. But they aren’t. It is hard for a human being to absorb ideas that are of first-order originality; such ideas, by definition, barely compute. Moreover, it is emotionally difficult for reviewers to be charitable about others’ manuscripts. Schadenfreude survives, if quietly.
Was Richard Smith sensible to argue that we should jettison peer review in publishing? I thought his article was a terrific one, and the future may evolve to make him right. However, on balance, I do not see peer review disappearing during this century. First, in economics alone, there are 20,000 new journal articles every year. If there were no journal system to give early warning of quality, it would be even harder – and highly inefficient – to spot the stars among the torrent.
Second, perhaps we should spare a thought for the brilliant individual who is stuck, through some early stroke of fate, in a tiny group in Nowheresville, Antarctica. Without a journal system, the ideas of that person might have no chance at all. Although I have sometimes despaired of the ability of academic journals to accept papers that are truly novel, the journal system does work to a limited degree. Some important stuff makes it past the grey-minded reviewers. If there were no peer-reviewed journals, I bet researchers would pay even more attention than they currently do to status symbols such as a university’s name and nation.
The simple truth is that the scholar who is tired of referees is tired of life. Journal reviewers are going to keep on being incoherent, making fundamental mistakes, being childlike and gratuitous, forcing us to cite their irrelevant articles, thinking that after 20 minutes they understand our work better than we do after 20 months, leaving out the crucial references they claim have already shown the finding, and all the rest.
Between you and me, the only good referee reports I ever saw were written by me and by – oh, wait, no: there isn’t anyone else. This is how we all feel, and in some ways it is surprising how well a voluntary system of refereeing, based on a form of honour system, actually works.
Andrew Oswald is professor of economics at the University of Warwick.
Peer review is a good way of seeing what the next generation is doing, but I am becoming increasingly selective in which invitations I accept
The worst comment I received on anything I have ever written was in the form of a question: “What is this muck?” It was an essay for a journal that claimed to be pioneering new research in cultural and media studies. Doubtless writing about a feminist rock star was just too much for someone!
In principle, it seems like a fine idea for work submitted to a journal, publisher or funding body to be assessed anonymously by independent experts. Surely it is more equitable and transparent than individual editors and research managers taking subjective decisions, as was the norm in my early career.
But, as with all the ventures by academics into quantifiable systems of assessment, it has grown into a monster. Academics – and those academics transformed into “managers” – have learned how to play around with peer reviewing and turn it to their advantage in determining who wins and loses in the local promotion stakes. And peer review is invoked by open access publishers to excuse the venality of asking authors for money up front (something that used to be derided as vanity publishing).
That everyone now needs to confirm that they have been peer-reviewed, even if the piece in question was published by the smallest online journal, means that the task of reviewing is now gigantic. In the first six months of this year, I was asked to peer-review more than 20 articles, half a dozen book proposals, and so many bids for funding that my memory fails me. Not all come through anonymously, and if you work in a specialist area, you can often identify the writer.
Of course I don’t agree to peer-review everything I am asked to, but I am told by editors and publishers that such is the demand to publish that they are becoming desperate to find people willing to do the assessments. Because it isn’t just a matter of reading 20-odd pages of typescript and passing an opinion. No, you have to log on to complicated websites and answer a host of questions. This can take ages, especially if you are asked to provide information that can be relayed back to the writer as distinct from a box for non-disclosable comments. The complexity of some of the processes may explain the logjams and long delays in getting feedback to writers, some of whom are kept waiting for months.
Attempts have been made to raise the status of peer-reviewing to something that can be put on a CV. An example of this is the fanfared creation a few years ago of the Arts and Humanities Research Council’s “Peer Review College”: a publicly identified body of about 1,500 individuals who referee applications for AHRC funding and from whom the research council’s assessment panels are largely drawn.
I went along to one of their training days, during which we were divided into groups and asked to assess previously submitted applications. In my group we failed two and agreed that two were excellent. It then turned out that our failures had already received some £750,000, while our excellents had failed to win anything. So much for expert opinion, I thought. After receiving a string of half-baked proposals and struggling with the problems of the website, I resigned and wrote to the AHRC explaining why. I could also have mentioned that some other funding bodies pay their peer reviewers – which is some small compensation for their time.
I shall continue to peer review because it’s a good way of seeing what the next generation is doing, but I am becoming increasingly selective about which invitations I accept, and I know I am not alone in this. After all, which peer reviewer wants to be reduced to asking: “What is this muck?”
Susan Bassnett is professor of comparative literature at the University of Warwick.
Transparency would make it possible for people to take more credit for good reviews
In theory, peer review is a great idea, but as it generally operates, it sucks. The principal problem is the typical anonymity of reviewers. This can provide a cover for suppression of data that conflicts with the reviewer’s own results, sabotage of competitors’ funding applications, filching of research ideas or even the pursuit of petty vendettas. I think each of us probably knows of at least one fellow academic we would not trust to review our work with full integrity under cover of secrecy. I certainly do.
The lack of professional credit (or financial reward) for this vital activity also means that peer review is too often carried out in a hasty, superficial way. I’ve seen some seriously shoddy reviews, some so cursory that they show no evidence whatsoever of in-depth consideration – or, indeed, of appreciation of the subject at all. I also know of several cases where the task has been delegated to a junior researcher but submitted in the name of a senior academic. Unwilling to do this, I’ve had to guiltily refuse several reviewing requests because I did not have time to do the job well.
People might carry out more meticulous evaluations and be more careful of what they wrote if they had to stand up and be counted. One flaky review can scupper a publication, even when other reviewers are positive – which brings me to what is if not the worst then certainly the most infuriating review I have ever received. We submitted a paper addressing a particular question that was amenable to the analytical approaches, samples and dataset that we had available. Two reviewers described the work as well-designed, well-performed and well-written. However, a third reviewer dealt a killer blow: “I have a fundamental problem with this paper.” It turned out that they were not reviewing the work that we had actually done but were pining for something entirely different.
“The 1 million dollar question on this research topic is: ‘does lower [measure X] cause metabolic diseases or is [measure X] decrease a consequence of metabolic disorders?’” It made me want to shake the reviewer in question and say: “Thanks, mate – we already knew that this is a vitally important question – but, along with the rest of the world to date, we are not currently in a position to address it. If answering ‘the 1 million dollar question’ had been possible from the dataset we had, we certainly would have done it – but then it would have been a completely different paper, and we would have submitted it to Nature or Science!”
Almost as frustrating is when you are asked by a reviewer to carry out some follow-up experimentation that is completely impractical in the timescale given, or is so very expensive it would require a new grant to pay for it.
Then there are the suspicions of corruption: the sense, for instance, that some authors get an easy ride from self-chosen reviewers, while some academic editors get an even easier ride from other academic editors – to whom they return the favour.
The secrecy of the process also makes it difficult to investigate the suspected biases in peer review, such as the misogyny exemplified by the recent example where it was suggested that adding a male co-author to a paper by two female researchers would improve its quality (“‘Sexist’ peer review causes storm online”, 30 April). If reviews (along with author responses) were routinely published alongside papers, we could determine whether this was an extreme, isolated incident, or was representative of something more systematic.
Transparency would also make it possible for people to take more credit for good reviews – potentially attracting more competent people to commit to doing diligent and honest reviews within a reasonable time period. Anyone who has acted as an editor or worked on a funding application board knows how difficult it currently is to find such people. They are just too busy with more urgent priorities.
Alex Blakemore is a professor of human molecular genetics at Imperial College London.
I favour peer review over metrics. Better to be judged by a panel of one’s peers than by citations
I once had an anonymous reader’s report so damning that the journal editor chose to paraphrase rather than attach it, saying simply: “It suggests that this article depends on deconstructive ‘turns’ to disguise the fact that it isn’t actually about very much.”
The editor’s conclusion was that “the tone of [this journal] is rather more purposeful than will suit you”. I went on to publish the essay elsewhere, to favourable reviews, and am now on the editorial board of the journal that rejected it. I tell my postgraduates that persistence pays.
As a recent panel member for the research excellence framework, I favour peer review over metrics. Better to be judged by a panel of one’s peers than by citations. With the REF, however, we were assessing research already in the public domain. What about work snuffed out prior to publication – or earlier, at conception? Peer review of grant applications determines whether research reaches pre-publication.
Three things make me open-minded as a peer reviewer. First, teaching creative writing made me aware that emerging voices need to be listened to with care. When peer review fails, it is often through a narrow sense of what is permissible.
Second, literary theory taught me that dissenting voices risk being stifled if too rigid a view is taken of what constitutes a valid critical approach. Barbara Christian, in her contribution to The Norton Anthology of Theory and Criticism, complains about those philosophers who indulge in “the kind of writing for which composition teachers would give a freshman a resounding F”. Ironically, as a student, Jacques Derrida submitted a paper on the topic of “time” to his director of studies, Louis Althusser, who passed the essay to his colleague, Michel Foucault, saying, “I can’t grade this.” Foucault’s response was, “Well, it’s either an F or an A+.”
Third, extensive experience of reviewing published work – about 250 reviews to date – confirms for me the need for a flexible response. I have written reviews that sparked controversy and correspondence, and in one case a “right to reply” exchange. But however demanding peer review is, it rarely takes as long as the work under consideration.
The Oxford English Dictionary defines “peer review” as: “The review of commercial, professional, or academic efficiency, competence, etc., by others in the same occupation”. Examples include “the evaluation, by experts in the relevant field, of a scientific research project for which a grant is sought”, and “the process by which an academic journal passes a paper submitted for publication to independent experts for comments on its suitability and worth”. Already we see a blurring between previewing, reviewing and refereeing. Interestingly, the OED cites a comment from the journal Nature in 1977 that “publishing a book is a way of avoiding peer review”, as though monograph writers were draft dodgers.
Can peer review become a blockage rather than a filter, encouraging uniformity of expression, discouraging diversity? As a young editor, I was forced by a reader’s report to drop a contributor from a collection on what I felt were political rather than scholarly grounds. If peer review is open to abuse, then anonymity is part of the problem. Pre-publication peer review raises issues not found in the reviewing of published work.
Richard Smith suggests that “‘top journals’ select studies that are new and sexy rather than reliable”, before declaring that “peer review is anti-innovatory because it […] depends on approval by exponents of the current orthodoxy”. By contrast, a letter in The Spectator in 1996 declared that “this process of peer review is designed to weed out glitchy papers and it generally works rather well”. But what if Smith is right? What if the weeding goes too far? What if it pulls up some of the flowers, too, so that instead of a hundred of them blooming, only a handful can?
Willy Maley is a professor of Renaissance studies at the University of Glasgow.
One set of reviews included a requirement that we repeat the study using a distinct line of mice. This would have required two years of work
Ask a scientist whether they have ever received a terrible review of their submitted work, then stand back and prepare for a deluge of anecdotes. Turn the question around and ask them how good they think they are at reviewing the science of others and the dichotomy is striking.
The most egregious examples of poor peer review I have come across relate to grant applications. In those instances, one often sees personal bias and efforts to shape the world in the reviewer’s image, which, fortunately, are rarely seen in reviews of papers.
To falsely paraphrase Churchill, “peer review is the worst form of review, except for all those other forms that have been tried at other times.”
Yet those “other times” did not benefit from the immediacy and accessibility of the internet, and there are emerging experiments in post-publication review (such as F1000 Reports) and preprint servers (such as arXiv and bioRxiv) that are picking up steam.
But before traditional peer review is written off, it may be informative to understand its weaknesses and attempt to mitigate them. In my view, the biggest problem is a lack of clarity of editorial decisions. Peer reviewers naturally vary in their rigour and style, but too many approach the process as a challenge to pick as many holes and to suggest as many new experiments as possible. This applies in grant application review as well. Unless one has a strong editor or chair able to discern what is reasonable and constructive, as opposed to what is “make-work” or simply designed to retard the process, the submitter is often left with a laundry list of expected revisions.
One recent set of reviews my lab received for a project that described three years of work included a requirement that we repeat the entire study using a distinct line of mice. This would have required a further two years of work and, given that the resubmission window was six months, was clearly not feasible. Moreover, the original experiments would have needed to have been repeated at the same time as the new ones. So I “self-rejected” the manuscript and took it elsewhere. This isn’t to say that the reviewers’ opinions were without merit, but if the editor agreed with the reviewer, the recommendation should have been “reject”, not “revise”.
I also experience the other side of the coin as a receiving editor for a cancer-focused journal. Quite often reviewers disagree about a manuscript, and this is where the editor must be clear in their instructions and identify what parts of a critique need to be addressed. If the study can be improved with a reasonably focused amount of work, then revision is fine. However, many papers are sent back for yet more revision. Sometimes multiple rounds of review can take almost a year – at the end of which the paper can still be rejected.
We also fall prey to the cascade of submissions where a higher “impact” journal is initially attempted, followed by a descending series of journals in the hierarchy of the field. The pressures to publish in higher profile journals are intense and fuel serial submissions even though journal impact factor does not reflect the quality or impact of a given paper.
I also think impact factor chasing is the principal reason – more than advances in technology – for the trend in cell biology publications towards ever-larger amounts of data. The top journals control the market and have the reviewers do their “dirty work”. We, the researchers, are complicit.
But all is not lost. The EMBO Journal touts that 95 per cent of its invited revised manuscripts are published, ensuring that authors are not being led on a wild goose chase. Transparency is being increased by the likes of EMBO Press and eLife, which post reviews along with accepted papers. Obsession with novelty is being addressed by mega-journals such as PLOS ONE that ask reviewers to assess only the soundness of a study. And the advent of preprint servers removes the delay to accessibility.
Just as drivers are also pedestrians, authors and reviewers need to see the other’s perspective and offer constructive criticism. Together with increased editorial clarity and realistic expectations, traditional peer review can be saved.
Jim Woodgett is director of research and senior investigator at the Lunenfeld-Tanenbaum Research Institute in Toronto.
I even foresee some academics using Freedom of Information laws to flush out the identities of abusive reviewers
“This article has a potentially interesting basic premise which could have become something with a considerable amount of work and thought. Sadly the execution is woeful. The article lacks coherence, proper length, full awareness of the literature, any kind of discernable [sic] argument, the case studies are exceptionally poor and seem to add nothing to the main thrust of the article. There is little conceptual awareness here, the writing style is terrible, at times the grammar is abysmal, at others the writing is sub-tabloid. In sum, then, the article reads like a semi-decent undergraduate essay but it has no originality, rigour or significance and is not suitable for publication in an academic journal.”
This is apparently what constitutes peer review in some academic journals. Never mind what it says about the psychology of the person writing it, let us just examine it as a piece of criticism. What is it for? What fruitful knowledge does it provide? Where does it point to for improvement?
In most areas of professional life, particularly in the university sector, bureaucracy holds sway. This fills academics with dread: the endless form-filling, the peer observation of teaching, the strenuous efforts required to achieve good internal and student feedback. The question of whether all this really improves teaching is moot, but at least academics are now more accountable to students than at any time previously.
Yet blind academic peer review trundles on in essentially the same way it did decades ago. The above review, which I was unfortunate enough to receive recently, was so negative that the journal that commissioned it told me that it would be unlikely to accept my article even if the other review was positive. This in effect gave a veto to one poisonous review that lacked any intellectual content.
A further problem is the effect such a choleric rant might have on a younger academic. Imagine this was the first review you ever received. You might be discouraged from writing another article for years.
Blind peer review should be fundamentally reformed. The lack of accountability is out of place when even job applicants can ask to see their references (and this was instituted to stop character assassination). As things stand, I even foresee some academics using Freedom of Information laws to flush out the identities of abusive reviewers. If peer review is to be kept secret, journal editors should apply some basic standards and establish benchmarks for a quality review. One of them should be to simply ignore demented referees’ reports.
By the way, if you are wondering, my paper was eventually published elsewhere, being commended for originality and writing style.
The writer, who wishes to remain anonymous, is establishing a website for particularly bad examples of peer review. Send your worst anonymously to email@example.com