The ranking of individuals, departments and entire universities has created an incessant drive for more, more, more and faster, faster, faster. More grant acquisition, more papers and faster turnaround. You have only to look at the recent, desperate (and inappropriate) measures taken by certain research councils to try to manage (or stem) the relentless flow of grant applications. But grant applications aren't what I want to discuss here; my subject is the craving for faster publication. There have been several recent journal editorials berating their readers for not reviewing manuscripts in a timely manner: for not being better citizens.
As these editors point out and as we all know, the peer-review system is fundamental to academia and relies on a reciprocal arrangement between authors and reviewers.
But these editorials tell only half the story, and it is as if these editors have never been mere mortal referees themselves. The urge for greater processing speed is a direct consequence of the sheer volume of material being submitted, especially to the better-quality journals. Scientists (and soon it will be academics in the humanities as well) are under pressure to get their work published in the highest-ranking journals. The result is that it is always worth authors trying these journals first, since there is an element of stochasticity about refereeing; you may just be lucky and have your manuscript sent to poor referees or to your mates. The consequence is a huge volume of material to be refereed and rejected, and with it a massive reviewing burden. Many senior academics I know get several requests to review manuscripts each day. It is impossible to do them all; if one did, one would do nothing else.
Saying "no" to a reviewing request results in an (often automated) irritated response. "Well, suggest someone else then!"
Automation! Editors love the web-based automated review allocation system. It minimises their workload (fair enough), but it also allows them to score referees in terms of response rate and speed of returning reviews. Many reviewers hate these automated systems. One of the most widely used systems has such a poorly designed front end that it is far from clear what the would-be referee is expected to click next to get access to the paper. The people who designed these systems can never have been busy reviewers themselves. As soon as academics become journal editors, they seem to flip quickly into not-understanding mode. "The automated web-based system seems very simple to me," they say, not realising that playing Little Wing seemed very simple to Jimi Hendrix, who composed it and played it on a regular basis.
Once a reviewer has agreed to referee a manuscript, he or she wants access to it as quickly and as simply as possible: one click - that's it. Rarely is this the case. There are often several stages. The other thing reviewers definitely don't want - and this is an insult - is a demand for personal details before they are given access to the paper.
In fact, the best systems are those where: (i) you get access to the paper via one click in the invitation email; or (ii) at a few, albeit minor, journals, you get a personal email from the editor. I'm much more likely to respond to a personal, friendly message, especially when the manuscript arrives as an attachment - minimum bureaucracy.
Automated systems are often so inefficient that many of my colleagues agree to review a paper only if the editor circumvents the automation and sends it as an attachment. This mild protest seems to be the only way referees can encourage journals to reassess how poorly their automated systems are working. Once, when I had failed - after 30 minutes of fiddling around - to access the manuscript via a journal's automated system, I emailed the editor explaining the difficulty and asking for the manuscript to be sent by email. The editor did so, but inadvertently copied me in on the message he had sent his sub-editor: "Another b*stard that won't use our web-based system," he said. Charming! They clearly had a problem but, rather like Basil Fawlty thrashing his broken-down car, failed to respond appropriately. Within 12 months, however, that journal had switched to a better system - in fact, one of the best.
The single most disappointing aspect of journal editors berating the inefficiency of their referees is their complete failure to understand that academics are overloaded too, and often suffer from request fatigue. My policy is to be selective and to do my share, but once I have agreed to referee a paper, to do it within 24 hours. The problem with that is that on the journals' automated scoring systems I'm seen as a good referee (quick turnaround), so I get sent more. A better strategy from my point of view would be to referee the manuscript quickly but run my own automated system that sent the report off the day before the due date.
Pleading by editors for potential referees to move faster won't make the slightest difference. Concrete suggestions are needed. One journal that complained about its referees' response rate asked for reports within 21 days. Returns peaked at 22 days (after an electronic reminder on day 21). Why not simply ask for reports to be returned in 10 days?