Concerns about the effectiveness of double-blind peer review are unfounded, according to the authors of a study in which as many as nine out of 10 reviewers were unable to correctly guess an author’s identity.
The experiment analysed papers submitted to three international software and programming conferences in 2016, whose authors were told to omit information about themselves and to obscure, to the best of their ability, any identifying information within their papers.
Reviewers were asked, optionally, whether they thought they knew the identity of at least one author and, if so, to make a guess.
The results, published as “Effectiveness of anonymization in double-blind review” in Communications of the Association for Computing Machinery, found that 70 to 86 per cent of reviews were submitted without guesses, suggesting that reviewers did not believe they knew who wrote the papers.
Of those who did take a guess, one conference group was correct 75 per cent of the time, another 50 per cent of the time, and the third 44 per cent of the time.
Reviewers who self-reported expertise in the paper’s field were significantly more likely to guess its author correctly, but overall figures showed that 74 to 90 per cent of reviewers did not make any correct guesses. “While anonymisation is imperfect, it is fairly effective,” the authors conclude.
Claire Le Goues, an assistant professor of computer science at Carnegie Mellon University and co-author of the study, told Times Higher Education that the research was motivated by a common objection to double-blind review: that reviewers can often guess author identities anyway.
The group’s evidence suggests that this objection is unfounded, she said. Furthermore, when reviewers did guess correctly, one likely cause was “poor anonymisation”: the data showed a higher concentration of correct guesses among poorly anonymised papers.
“Double-blind review is still relatively new in our community and has not yet been implemented in all venues,” she noted. “As a result, many authors aren’t accustomed to writing anonymised papers, and our conference management systems aren’t universally well-equipped to support it. With time, these issues should be resolved.”
A previous experiment on the effectiveness of single- versus double-blind review, published in PNAS, found that reviewers with access to author information were 1.76 times more likely to recommend acceptance of papers from famous authors, and 1.67 times more likely to recommend acceptance of papers from top institutions.
After completing the latest experiment, the programme committee chairs of all three conferences said they felt that double-blind review “mitigated effects of subconscious bias”, the report said, “which is the primary goal of using double-blind review”.