When it comes to misconduct in research, can science ever be self-policing, or does the sector need to rethink its strategy? asks Nicholas Steneck.
From the Second World War through the discovery of the structure of DNA, the conquest of polio, the space race and the cold war, the term "science" was commonly assumed to be synonymous with truth and progress. Consequently, reports of misconduct in scientific research in the late 1970s touched a sensitive nerve. Scientists, after all, do not lie.
Yet, doubts about the integrity of scientific research were not new in the 1970s. Science and technology had been linked to environmental pollution and unpopular military technologies during the Vietnam war era. To this was added the inability of scientists to solve the worldwide energy crisis and, in the United States, concern about abuses in human experimentation (the Tuskegee syphilis study and radiation-dose studies). However, if scientists faked results, manufactured data and stole publications from colleagues, what was left of the assumptions about truth and progress?
In response to concerns about misconduct, scientists took comfort in the fact that it seemingly occurred infrequently and was detected. Peer review and the confirmation of results through replication, it was argued, ensured that science could police itself. Colleagues discovered William Summerlin's deceptive skin grafts, John Darsee's manufactured evidence and Robert Slutsky's 60 plagiarised publications. In addition, the National Science Foundation, the National Institutes of Health and major research universities established rules for investigating misconduct, following directives from the US Congress. If one adds to these initiatives the recently adopted, government-wide definition of misconduct formulated by the Office of Science and Technology Policy (OSTP), the problem of misconduct in scientific research would appear to be solved.
Before heading back to the laboratory, however, two shortcomings of this solution deserve attention. First, mounting evidence suggests that research misconduct may be more common than estimated and that the self-policing mechanisms relied on to correct the research record have weaknesses. In surveys of research behaviour, more than one in ten respondents consistently reports knowing of misconduct in research. Errors make their way into the scientific literature and are not set straight through replication, corrections or retractions. One should not jump to the conclusion that the level of misconduct is out of control. Nonetheless, researchers and research administrators should be aware that the level of misconduct in their areas of specialisation is for the most part unknown.
Second, the current solution to the "misconduct problem" favours the interests of researchers over those of the general public. Researchers have urged that the definition of misconduct be limited to three behaviours: falsification, fabrication and plagiarism. These are unacceptable to researchers because they undermine the reliability of the research record and rob them of credit for their work. Researchers have also been instrumental in limiting the definition of misconduct in research to intentional deception. The new OSTP definition stipulates that for misconduct to be confirmed, it must:
- Represent a significant departure from accepted practices of the relevant research community
- Be committed intentionally, knowingly or recklessly
- Be proven by a preponderance of evidence.
These measures are designed to protect the integrity of the scientific record without exposing researchers to the risk of being accused of misconduct for inadvertent errors or intellectual disagreements.
The problem with this approach is that it ignores a host of research behaviours that undermine the integrity of research and that have consequences for the public. These behaviours, which were recognised but left unaddressed in the 1992 National Academy of Sciences report, Responsible Science, include:
- Conflict of interest
- Bias in peer review
- Duplicate and wasteful publication
- Inappropriate authorship attributions
- Inappropriate use of statistics
- Failure to cooperate with colleagues
- Poor mentoring practices
- Sloppiness and inattention to detail.
These behaviours waste public funds invested in research. More importantly, publications that use faulty statistical analyses, reviews that reflect unacceptable bias or decisions based on evidence distorted by unrecognised duplicate publication can have serious implications for public health and safety.
Efforts are under way to gather more information about the level of misconduct in research and to assess the level of integrity. Researchers are also gathering evidence that will help to explain what encourages researchers to set high standards for integrity or to push the bounds of acceptable behaviour to the point of misconduct. Particular attention is being paid to the relationship between research integrity and research climates, including factors such as the commercialisation of research and the demand for greater productivity.
The findings could pose a significant problem for research institutions. Studies suggest that environmental factors can adversely influence researchers' attitudes toward integrity. If these studies are confirmed, serious consideration may have to be given to the way research is funded and to the pressures put on researchers to bring in more funds and publish more papers. The two values underlying public support for research, truth and progress, are not either/or propositions. Progress cannot be purchased at the expense of truth. Research that lacks truth and integrity has no value for society and will no longer merit public support, which is vital to the maintenance of strong research programmes.
Nicholas Steneck is professor of history and ethics at the University of Michigan, United States.