Peer review as a way of validating research is bunk

The current review process has many holes, says Apostolos Koutropoulos

August 28, 2015

It seems like forever ago that a friend sent me a link to a Times Higher Education article where academics outlined the worst piece of peer review they had ever received.

As I was reading it, my own thoughts about peer review surfaced anew.

I’ve done peer review for articles, and what leaves me unhappy (well, “unconvinced” would be a better word) is methodological issues, logical fallacies, or an author who hasn’t done a good enough review of the literature. In my role as a peer reviewer, or even a journal editor, my main goal isn’t to dis someone’s work. My goal is geared more towards understanding. 

For instance, if an article I review has logical fallacies in it, or is hard to follow, then what hope is there for the broader journal audience? I see the role of the editor and the reviewer NOT as a gatekeeper but as a counsellor: someone who can help you get a better “performance” (for lack of a better word). I’ve put my thoughts into several categories.

Peer review as quality assurance
This concept, to me, is complete bunk. It assumes, to some extent, that all knowledge is known and therefore you can have reasonable quality assurance. What we “know” to be “true” today may be invalidated in the future by other researchers. Peer review is about due diligence, and making sure that the logic followed in the article is sound.

Peer reviewers are experts
I guess this depends on how you define expertise. These days I am asked to peer review papers on Moocs (massive open online courses) because I am an expert. However, I feel a bit like a fraud at times. Because I’ve been working on various projects and have been pursuing my doctorate, my extracurricular reading on Moocs has drastically declined. I have read a lot on Moocs, but I still have a drawer full of research articles that I have only read the abstracts of.

The question that I have then is this: how current should an expert be? Does the expert need to be at the leading edge of research, or can they lag behind by 18 months?

Validity of peer review
Peer review is seen as a way of validating research. I think that this, too, is bunk. Unless I am working with the team that did the research, or try to replicate it, I can’t validate it.

The best I can do is to ask questions and try to get clarifications. Most articles are 6,000-9,000 words. That is often a very small window through which we look to see what people have discovered. This encompasses not only the literature review, and the methods, but also the findings and the further research section. That’s a lot! 

I also think that the peer reviewer’s axiology plays a crucial role in whether your research is viewed as valid or not. For example, if your sources are not peer-reviewed articles but rather researched blog posts from experts in the field, all that some peer reviewers will see is blog posts, and those may be of no value to them. Conversely, if the work cited is in a peer-reviewed journal, we can be lazier and assume that the work passes muster.

Anonymous peer review
I think anonymity is an issue. Peer review should never be anonymous. 

I don’t think that we can ever reach a point of impartial objectivity, and as such we can never be non-biased. I think that we need to be aware of our own biases and work towards having them not influence our decisions. I also think that anonymous peer review, instead of encouraging open discussion, creates a wall behind which potentially bad reviewers can hide. It’s the job of editors to weed them out.

Peer review systems suck
This was something that was brought up in the THE article as well. My dream peer review system would provide me with something like a Google Docs interface where I could easily go and highlight areas, add commentary in the margins, and provide people with additional readings that could help them. 

The way systems work now, while I can upload some documents, I can’t easily work in a word processor to add comments. What I often get are PDFs, and those aren’t easy to annotate. Even if I annotate them, extracting those comments is a pain for the authors. The systems seem built for an approve/deny framework, and not for a mentoring and review framework.

Time to publication is insane
I hate to bring this up, but I have to. In my own ideal world, I would accept an article for review, have people review it, and if it passes muster (either right away or eventually) it would go up on a website ready to be viewed by the readers. 

The reality is that articles come in, and I get to them when I have free time. Getting peer reviewers is also time consuming because not everyone responds right away, so there is some lag there. If there are enough article candidates for an issue of the journal, I get to these sooner. If there are only one or two submissions, I get to them later. 

I would love to be able to get to them right away, but the semiotics of academic journals mean that a certain number of articles need to be included in every issue. It would feel odd to put out an issue one or two articles at a time.

So I, and other researchers, will work hard to put together something, only to have it waiting in a review queue for months. It’s a balancing of duties. I do the journal editing on top of the job that pays the bills, so journal editing is not my priority at the moment. I also want to work on my own exploration of ideas, so that also eats up my time.

I would hazard a guess that other journal editors, who do editing for free, also have similar issues. So, do we opt for paid editors or do we re-envision what it means to research and publish academic pieces? I’ll end this post here and ask you: what are your thoughts on this process?  How can we fix it?

Apostolos Koutropoulos is an instructor of instructional design at the  University of Massachusetts Boston. This is an edited version of a post that appeared on his own blog.


Reader's comments (2)

In my opinion, one important thing is missing: we (also) need peer reviews out in the open. With the prevalent practice of keeping peer review behind closed doors, we effectively throw away important information. Every reader starts from scratch in their own assessment of a given paper and cannot see what others have said about it before. There are overlay solutions for this, but these are for post-publication review. We also need to be able to see what was originally said about the paper as part of the decision to publish it in the first place.
Despite its flaws, the scholarly community relies heavily on the peer review system because it is considered the only possible way of weeding out bad science. Peer reviewers are just “peers,” so I agree that there is a danger in calling them “experts” because it hints at over-reliance on this system. Moreover, if it is a myth that reviewers are experts, it calls into question the increasing instances of reviewers calling for additional experiments and information. I wonder if dedicated training processes that qualify researchers as peer reviewers would improve the understanding of who exactly a peer reviewer is and what his/her role is in scholarly publishing.