‘TripAdvisor for peer review’ targets publishing bias

Text scans by artificial intelligence will flag inconsistent or unusual patterns in reviewer behaviour, says Wolverhampton professor

January 2, 2020
TripAdvisor owl mascot
Source: Reuters

The scornful comments of “reviewer 2” have become a running joke in academia. But a new artificial intelligence system – dubbed a “TripAdvisor for peer review” – may soon be able to test whether the scathing remarks of anonymous referees are being handed out fairly or not.

Amid concerns that peer reviewers are harsher in their criticisms of female researchers or those from less prestigious institutions, PeerJudge scans peer review reports for keywords – either positive or negative – to see whether reviewers are unduly tough when assessing certain types of researchers.
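In outline, a keyword scan of the kind described above might look like the following Python sketch. The word lists, the scoring rule and the function name are illustrative assumptions for this article, not PeerJudge's actual lexicon or algorithm:

```python
# Illustrative keyword-based tone scan (assumed, not PeerJudge's real method).
# Word lists here are invented examples of "positive" and "negative" keywords.
POSITIVE = {"excellent", "clear", "rigorous", "novel", "convincing"}
NEGATIVE = {"flawed", "weak", "unclear", "trivial", "unconvincing"}

def review_tone(report: str) -> float:
    """Return a crude tone score in [-1, 1]: positive minus negative
    keyword hits, normalised by the total number of hits."""
    words = report.lower().split()
    pos = sum(w.strip(".,;:!?") in POSITIVE for w in words)
    neg = sum(w.strip(".,;:!?") in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total
```

Scores from many reports could then be aggregated by author group to check whether one group systematically receives harsher wording.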

The program, which was created by technology company F1000 and researchers at the University of Wolverhampton, also checks whether reviewer comments correspond with the final recommendation to accept or reject the paper – an area of grievance for researchers when broadly positive comments on their manuscripts are followed by a call to reject.
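The consistency check could, in outline, compare a report's overall tone with the reviewer's formal recommendation. The function, threshold and labels below are hypothetical illustrations, not the program's real logic:

```python
# Hypothetical consistency check (assumed, not PeerJudge's real logic):
# flag reviews whose written tone clashes with the formal recommendation.
# `tone` is taken to be a score in [-1, 1], e.g. from a keyword scan.
def flag_mismatch(tone: float, recommendation: str,
                  threshold: float = 0.3) -> bool:
    """Flag broadly positive comments (tone above the threshold) that
    end in a rejection, or broadly negative ones that end in acceptance."""
    if recommendation == "reject" and tone > threshold:
        return True
    if recommendation == "accept" and tone < -threshold:
        return True
    return False
```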

PeerJudge – which uses text scanning algorithms similar to those used by travel site TripAdvisor – may help to address concerns that reviewers are more positive about authors based in their own country – a phenomenon confirmed by a paper published in November.

The paper’s author, Mike Thelwall, professor of data science at Wolverhampton and one of PeerJudge’s creators, told Times Higher Education that the “slight tendency” for peer reviewers to favour those based in their own country could be explained by the fact that they are “more likely to meet them” and “are perhaps more cautious in their comments”.

More widely, PeerJudge could be used by publishers and journal editors to check whether their reviewers were exhibiting any signs of bias against certain groups, said Professor Thelwall.

“I do worry that we have systematic biases in peer review that go unchecked,” he said, citing studies showing that reviewers were more critical of authors from poorer countries or less prestigious institutions.

“I am sure editors read the reports from reviewers, but I hope this system will give them a better idea of who or what they are accepting or rejecting,” said Professor Thelwall, who said that PeerJudge would help publishers improve their quality control, as well as highlight “fake journals” where peer review processes are often shoddy, opaque or non-existent.

“Peer review is deemed the gold standard of academic research and we are all judged by it, so we should be doing it well, but I am not sure we always are,” he said.

The research – which used the full open peer review platform F1000Research, in which the identities of all authors and reviewers are publicly available, as are manuscripts, reports and comments – also examined whether reviewers altered their comments if they could read the reports of other reviewers before submitting their comments.

There was, however, “little evidence” to suggest that they were influenced by other reviewers’ judgements when this type of open peer review was used, researchers concluded.



Print headline: ‘TripAdvisor for peer review’ aims to fight publishing bias

