Too complex for the jury?

Some think that traditional peer review, the guardian of sound science, is not up to the task of assessing large-scale multidisciplinary research. Paul Jump puts the question to the experts

December 2, 2010

When an interdisciplinary paper purporting to describe a way to identify all enzyme activity in a cell was published in the journal Science in October last year, critics were quick to cry foul.

Chemists in particular raced to the internet in numbers to point out the mistakes, with some even asserting that the technique, known as a reactome array, was impossible. The furore provoked Science to raise its own concerns. A subsequent investigation by the ethics committee of the Spanish National Research Council, which funds the institute employing one of the paper's corresponding authors, Manuel Ferrer, found "clear indications of deviation from good scientific practices". It recommended that the paper be retracted: last month, it was officially withdrawn.

The research involved 18 collaborators from eight institutions in four countries - and spanned two disciplines, chemistry and biology. It was peer reviewed according to Science's normal procedures (although the oversight team did not include a chemist). But is the standard peer-review process adequate for multidisciplinary papers?

By its very nature, such work requires a broader range of reviewers than single-discipline papers. Is there a danger of its being misunderstood by reviewers who have to grapple with data and interpretations outside their own fields of expertise?

Pere Puigdomènech, the leader of the Spanish National Research Council's ethics committee, welcomes interdisciplinary approaches in science, describing them as a "useful way to make new discoveries". But he believes the reactome incident raises concerns about journals' procedures for peer reviewing work that spans several disciplines.

An organic chemist, he says, would "easily" have picked up Ferrer's mistakes, "but we understand that it is difficult to appoint referees in all the disciplines involved when reviewing a complex, multidisciplinary article".

Monica Bradford, executive editor of Science, declined to give further details of the review process that the reactome paper underwent, but she agrees that papers are becoming more interdisciplinary and are incorporating more data as a result.

"Even a diverse group of reviewers may miss what is obvious to the specialist. It is clear that increased vigilance is required in such cases, but no realistic set of procedures can be completely immune from error," she says.

Timothy Mitchison, Hasib Sabbagh professor of systems biology at Harvard University, was one of the first to raise concerns about the reactome paper. Describing its passage through peer review as "remarkable", he says he "seriously wondered if it was some kind of April Fool's paper".

He does not, however, believe that multidisciplinary research poses a particular challenge to peer review, except in terms of the large amount of data it can generate.

"My sense is that, in general, biologists, chemists and physicists can communicate just fine," he says. "There is an issue with any paper that generates so much data that it requires expert analysts - people who crunch numbers and do statistics but do not necessarily understand all the (researchers') techniques or questions."

Speaking to Times Higher Education after the ethics committee delivered its verdict, Ferrer, a researcher in the department of biocatalysis at the Institute of Catalysis and Petrochemistry in Madrid, admits that the reactome project had produced such a huge amount of both chemical and biological data that he had struggled to handle it by himself, even though he has expertise in both fields.

He adds that his focus in writing the paper was on the biological aspects of the research because his priority was to show the potential application of the technology in the field.

Sir Richard Roberts, chief scientific officer at New England Biolabs in Ipswich, Massachusetts, and joint winner of the 1993 Nobel Prize in Physiology or Medicine, believes that this was the correct approach.

He thinks that Ferrer's reactome array works and that the scientist has been badly treated by Science, pointing out that there is not enough space in the journal's article format to include all the technical details demanded by the paper's critics.

"Those details could certainly have gone as supplementary material, but that was not requested until after publication," he says. "Besides, it is getting to be ridiculous in many cases because authors are sending in reams of supplementary material to the point where it cannot even be reviewed properly. No one has the time."

Maxine Clarke, publishing executive editor of Nature, concurs. There is a limit to the amount of information that can be included in a discrete paper if it is to remain readable and reach a clear conclusion, she says.

Given that, peer review can never hope to pick up on every single error.

Since top journals such as Nature and Science strive to publish papers whose conclusions are of interest beyond their immediate fields, multidisciplinary papers represent potentially rich pickings. But while referees may do a disservice to a journal by allowing their excitement over a paper's novelty to distract them from its technical failings, they may do an equal disservice to authors by focusing too narrowly on the elements in which they are expert, failing to consider the broader significance.

Cameron Neylon, a senior scientist at the Science and Technology Facilities Council's Rutherford Appleton Laboratory in Harwell, Oxfordshire, recalls the problems he had in placing one of his papers that described a new method of measuring the melting temperature of DNA based on a physical model.

"You could see how the method could be expanded to other things," he says, "but the DNA people weren't excited because it was no better than the existing method and the physics people weren't interested because the way we were using the model was not staggeringly new."

Nor would a higher number of reviewers necessarily help in such a case because, according to Neylon, there may be only two or three people in the world with broad enough expertise to see the full significance of an interdisciplinary paper.

But might more reviewers help assess a paper's rigour and avoid another reactome situation? Can the traditional three-strong peer-review team really possess all the expertise necessary to examine every aspect of a multidisciplinary paper?

Although the lack of a chemist among Science's reactome reviewers may seem glaring, Clarke insists that Nature editors already use "as many referees as it takes to properly assess a paper", while Stuart Taylor, head of publishing at the Royal Society, says his titles regularly use up to seven reviewers if a paper warrants it.

The UK research councils state that they, too, routinely recruit extra reviewers when a grant application straddles different councils' remits.

But although there is no upper limit on the number of reviewers, a spokeswoman for Research Councils UK says its members "recognise the need not to overburden the peer-review system".

This point is echoed by Sir Mark Walport, director of the Wellcome Trust, the biomedical funder. He says the global rise in the number of papers being published has led to top scientists being "overloaded" by requests to carry out peer review.

"It is important that the best people do the reviewing because the quality of the decision is only as good as the people who make it," he says. "But you have to strike a balance between managing the workload and having enough reviewers to make a valid decision."

Clarke says that Nature generally calls on referees only once a year - even though scientists are usually happy to oblige "because they are likely to receive an interesting manuscript".

Are there ways to get more out of peer review without placing additional strain on the system? Diana Garnham, chief executive of the Science Council, the umbrella body for subject-specific bodies such as the Institute of Physics, says journals perhaps could learn lessons from grant-awarding bodies, which not only commission written reviews of applications, but also assemble expert committees to discuss them before reaching their final decisions.

She suggests that if journals were to bring peer reviewers together to discuss manuscripts, particularly those of multidisciplinary papers, they might be able to come to a better-informed collegiate view of their rigour and significance.

"It is common to have eight or 10 reviewers on committees for charity grants," she says. "If a reviewer comments outside his area of expertise, the others know. In journals, the burden falls on editors to adjudicate on that."

Liz Philpots, head of research practice at the Association of Medical Research Charities, regards dialogue between reviewers as vital because it allows them to raise concerns that others in the group may be able to allay, or that may lead the panel collectively to identify flaws in applications.

"For a journal, a teleconference of referees or an editorial panel would be able to reach an opinion based on all the reviews received and come up with a synthesis of comments that authors would need to address," she says.

Although Walport agrees that, where possible, "several minds are probably better than one", he adds: "The corollary of that is you can end up making decisions on the basis of the least imaginative member of the group."

According to Clarke, Nature journals are looking at providing, on a pilot basis, just such a "confidential space" for their referees to discuss manuscripts in real time. But she points out that reviewers already see each other's comments as well as authors' responses after each round of reviewing, while editors regularly discuss manuscripts with "one or more" of the referees via telephone or email. Most Nature editors think this is sufficient scrutiny.

Clarke believes that the pressure on journals to make quick decisions about publication also militates against calls to expand the peer-review process further - as does feedback suggesting that some reviewers would not welcome the additional burden.

Roberts is certainly among those who have enough on their plate: "I can't imagine reviewers spending the time to do the review and then giving up more time to discuss it. Most of us are already far too short on time anyway."

Another possible way to increase the number of reviewers without crushing individual researchers under the burden is so-called open peer review, in which papers are posted online for anyone to rate. The pioneer of this approach is the Public Library of Science (PLoS), a non-profit publisher. All its journals are open access and have a facility for readers to rate and comment on articles.

The idea is pushed furthest in PLoS One. Its pre-publication peer review checks papers only for technical failings; significance is left entirely to online users to determine.

Mark Patterson, director of publishing at PLoS, says the ultimate goal is to provide "metrics and indicators at the article level that will allow users to judge articles on their own merits, rather than on the basis of the journal in which they happen to be published". Such a mechanism, he believes, would add as much value to single-disciplinary papers as it would to multidisciplinary ones.

Although he admits that only a minority of PLoS articles have attracted comments so far, he is convinced that activity will increase when the mechanisms are perfected.

Nature launched a similar service earlier this year. Clarke says: "Although we have had a few interesting comments, it isn't something that scientists seem to want to do unprompted."

Scientists' willingness to get involved in online peer review may be boosted by a feature envisaged in the most radical version of open review: allowing users to rate reviewers' comments, so that the most prolific and incisive reviewers can build and demonstrate their prestige.

One supporter of this system is Pandelis Perakakis, postdoctoral researcher in the department of psychology at the University of Granada, Spain. He advocates a system of "natural selection" for academic papers based on the reader comments they elicit.

He thinks the argument for open peer review is particularly strong in the case of multidisciplinary papers because of their inherent need for more reviewers.

Open review would also provide a strong incentive for authors to solicit reviews from researchers in different scientific areas to help boost their papers' credibility, he adds.

"We can also imagine the reviewers or readers themselves suggesting that the authors invite experts from other fields to evaluate points and clarify issues that they are not familiar with," he says.

The identification of failings would not result in a paper's retraction; the work would remain available but its problems would be clearly highlighted and scientists would treat it with appropriate caution.

But some are sceptical about open peer review. Puigdomènech sees little value in a system that requires scientists to wade through a potentially large number of comments of variable quality before deciding how much credence to give a paper.

"In my opinion, there is no alternative to (traditional) peer review," he says. "Non-refereed open articles would lead to an impossible amount of non-verified information available to scientists who already have difficulties keeping up with the pace of new results in their own field."

For him, the reactome-array case demonstrates nothing more than the need for journals to exercise more care and attention when choosing referees - particularly for multidisciplinary papers. As a partial solution to the strain on the system, he suggests drafting in more young scientists, who may have more time on their hands and more open minds than their older colleagues - even if their expertise is narrower.

Ben Davis, professor of chemistry and Fellow and tutor in organic chemistry at Pembroke College, Oxford, was one of the critical voices who questioned the reactome-array paper. Although he presented his concerns in an internet forum, he insists that online mechanisms are not an alternative to peer review, claiming that the most celebrated example of an online information source policed by peers, Wikipedia, contains lots of mistakes.

He also objects to the tendency for open peer review to turn an assessment of scientific merit into an inappropriately democratic judgement or a verdict decided by "who can shout the loudest".

It shouldn't be about who has more time to post or who is more articulate, says Davis. "We should not react to mistakes with traditional peer review by throwing out the baby with the bathwater."

Nor, he adds, are time pressures on referees an excuse for cutting corners on peer review. The most important thing, he believes, is for everyone involved - editors, reviewers, authors - to act responsibly and to seek help where their expertise is lacking.

Clarke says that editors handling a Nature manuscript will always consult others "where a paper bridges disciplines or uses a range of techniques or analysis", while Taylor insists that Royal Society journals already encourage reviewers to suggest colleagues if they feel a paper is outside their expertise. For his part, Davis would be "surprised" if referees did not do so - although he is also confident that there is a sufficient number of individuals in modern science with the breadth of expertise necessary to review multidisciplinary papers.

By common consent, the biggest responsibility for ensuring a paper's accuracy remains with its authors. But Puigdomènech and Roberts both point out that in multidisciplinary, multi-author work, many authors may well lack complete knowledge or even full comprehension of the results in toto and are therefore reliant on the integrity and expertise of their colleagues when signing off papers.

"The usual approach to dealing with that has been to know your collaborators well enough to establish a bond of trust," says Roberts. "If I thought that I had to understand my collaborators' work well enough to spot the defects that might accrue through fraud, I would never get any work done."

Davis says that corresponding authors - the lead researchers who write the papers - should not be too proud to draft in others "to act as additional sources of confidence" when necessary. Nor should they hesitate to retract a paper if mistakes come to light after publication - and others in the scientific community should respond by congratulating them on their integrity rather than scorning them for their sloppiness.

He regrets that, at the moment, the negative publicity of a retraction and the potentially detrimental effect on scientists' careers mean that authors often adopt a bunker mentality when doubts arise. To his mind, the accuracy of the scientific record should be every researcher's overriding concern.

"The individuals aren't important," he says. "We will all be dead in 100 years, but hopefully people will still look at our science. I would hate to die having left massive clangers that would waste hundreds of people's time."

A LITTLE BIT OF EVERYTHING: The Rise of Multidisciplinary Work

Interdisciplinary and multidisciplinary approaches to scientific problems are as old as the discipline categories they transcend.

A classic example is the application of X-ray diffraction, a process developed in physics, to investigations of the structure of DNA, which was the hottest issue in molecular biology in the 1950s.

Over time, as novel approaches become commonly adopted, they acquire their own names - such as biophysics - and become disciplines in their own right.

"As science evolves, the disciplinary boundaries constantly grow and disappear," says Ben Davis, professor of chemistry and Fellow and tutor in organic chemistry at Pembroke College, Oxford. "I work between chemistry and biology, but I don't see it as being across a divide: it is more about where we are interested in doing things."

Still, it is fair to say that the past decade has seen considerable growth in the demand for explicitly cross-disciplinary research both from journals and funders. Earlier this year, for instance, Nature Publishing Group introduced Nature Communications, its first completely multidisciplinary journal since Nature made its debut in 1869. At Royal Society Publishing, the number of multidisciplinary papers printed has risen fivefold since 2004, when it launched its dedicated journal, Interface.

Funders are just as eager to foster multidisciplinary approaches.

In the US, the National Institutes of Health (NIH) has made an effort over the past decade to remove barriers to such work by, for example, permitting more than one lead researcher to be named in grant applications, thus allowing them to share the prestige and benefits, such as opportunities for promotion.

According to Patricia Grady, co-chair of the NIH's Interdisciplinary Research Working Group, as the statistical and technological complexity of science continues to expand, researchers increasingly will be obliged to pool their brainpower by forming larger teams.

Once these units become established, she adds, multidisciplinary work acquires its own momentum.

In the UK, the research councils have always supported multidisciplinary work, but in recent years they have strived to simplify their mechanisms.

Many of the cross-council funding priorities adopted in 2007 also lend themselves to multidisciplinary approaches - although this was not the primary reason for their adoption, says a spokeswoman for Research Councils UK.

The rationale for these priorities is more that the challenges facing society, such as ageing, food security and living with environmental change, are interdisciplinary ones rather than deriving from a push on RCUK's part to fund interdisciplinary research, she says.
