First, do no harm: this fundamental precept of bioethics proscribes careless treatment of research subjects – people or animals.
But careless treatment of numbers flies under the radar, even though its effects can be catastrophic.
Such “unexpected” ethical challenges are among the topics that Georgetown University biostatistician and cognitive scientist Rochelle Tractenberg is tackling. She said that, at its root, ethics means commitment to good science.
“It’s about how we have obligations to make meaningful contributions to the scientific record,” said Dr Tractenberg, who is chair of the American Statistical Association’s Committee on Professional Ethics. “If you publish whatever comes to mind or whatever you finish, someone may find that paper and – in the rush of clinical practice – use it without vetting it properly.
“They may make a decision based on your work, which wasn’t reproducible, and harm a patient.”
Dr Tractenberg said that in areas such as neuroscience, where experimentation involving humans and animals is commonplace, the ethical focus tended to be on “obvious priorities” like obtaining informed consent, avoiding disproportionate harm and balancing benefits against risks.
While violating such “norms” was clearly unethical – and in some cases illegal – it was equally unethical to perform bad or careless science that wasted resources, torpedoed trust and intensified a “credibility crisis” in biomedical research, she argued.
Ethical research necessitates respect for the scientific method, she said. This entails rigour, reproducibility and “positivism” – active efforts by researchers to disprove their own theories.
In a lecture at the University of Melbourne, Dr Tractenberg highlighted five unethical traps that could be avoided by a positivist approach to science. The first was a “mismatch” between the data and the conclusions, when scientists collected data that could not answer the research question.
Other pitfalls included inadequate sample sizes, results that could not be reproduced and chance findings falsely portrayed as significant. A fifth hazard lay in results that could not be translated into real-world applications.
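The fourth pitfall – chance findings portrayed as significant – can be demonstrated with a short simulation. The sketch below (an illustration, not part of Dr Tractenberg’s lecture) repeatedly compares two groups drawn from the same distribution, so there is no true effect; a z-test at the conventional 5 per cent threshold still declares “significance” about one time in 20, which is exactly the rate at which noise masquerades as a result.

```python
import random
import statistics

random.seed(42)  # fixed seed so the simulation is repeatable

def false_positive_rate(n_per_group=20, trials=5000, z_crit=1.96):
    """Simulate studies where 'treatment' and 'control' are identical
    (no real effect) and count how often a two-sided z-test on the
    difference of means still crosses the 5% significance threshold."""
    hits = 0
    for _ in range(trials):
        a = [random.gauss(0, 1) for _ in range(n_per_group)]
        b = [random.gauss(0, 1) for _ in range(n_per_group)]
        # standard error of the mean difference, with known sigma = 1
        se = (1 / n_per_group + 1 / n_per_group) ** 0.5
        z = (statistics.mean(a) - statistics.mean(b)) / se
        if abs(z) > z_crit:
            hits += 1
    return hits / trials

rate = false_positive_rate()
print(f"'significant' results with no real effect: {rate:.3f}")
```

Roughly 5 per cent of these null experiments come out “significant” – which is why a lone unreplicated finding, published in a rush, may be nothing more than noise.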
Dr Tractenberg said that avoiding these shortcomings was imperative for anybody who cared about people’s or animals’ lives or quality of life. “If you only have $50 [£39] to spend per person, so you can only enrol 100 people in your studies, you have a constraint.
“But you can’t design a clinical trial with too few people in it. Those people will be put at risk for no reason, because the results will not be reproducible. You must come up with a design that’s more efficient, or get more money. In a cancer trial, if you can’t design a study whose results are reusable and reproducible, somebody may die.”
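The tension Dr Tractenberg describes – a budget cap of 100 participants against the number a trial actually needs – can be made concrete with the standard two-sample sample-size formula. The sketch below is an illustrative calculation, not her example: it assumes a normal-approximation z-test, 5 per cent significance and 80 per cent power, and asks how many people per group are needed to detect a moderate effect of half a standard deviation.

```python
import math
from statistics import NormalDist

def n_per_group(delta, sigma=1.0, alpha=0.05, power=0.80):
    """Per-group sample size for a two-sample z-test:
    n = 2 * ((z_{1-alpha/2} + z_{power}) * sigma / delta)^2."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # 1.96 for alpha = 0.05
    z_power = z.inv_cdf(power)          # 0.84 for 80% power
    n = 2 * ((z_alpha + z_power) * sigma / delta) ** 2
    return math.ceil(n)

# Detecting an effect of 0.5 standard deviations needs ~63 per group,
# i.e. ~126 people in total - more than a 100-person budget allows.
print(n_per_group(delta=0.5))
```

On these assumptions, a 100-person trial split evenly (50 per group) falls short of the roughly 63 per group required, so the study is underpowered from the outset – precisely the design constraint she says must be solved with a more efficient design or more money, not by running the trial anyway.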
She said similar considerations applied outside biomedicine. “If you can’t predict the winner of the next election, no one’s going to die. But there may be impacts on social fabric or distribution of resources. Those are harms, real harms. You have an obligation to minimise harms and risks in whatever way is appropriate for your field.”
Researchers also have an obligation not to waste resources, she said, adding that one of the most precious resources is the trust of the public – and of other scientists.
“You can squander that if you’re in a top-rated university and just publish whatever comes down the pike, and none of it is reproducible. You’ve harmed your institution’s reputation. People are starting to see that harm as real, and brand-damaging.”
Dr Tractenberg said there was a “huge problem” in the way statistics were used. She said researchers often tried to fit results into standard experimental design models, rather than considering whether the model or method of analysis suited the research question.
“Experts are biased toward what they expect – that’s the problem with experts – so when they’re repeating or reviewing work, they may not notice discrepancies,” she said.
Another problem occurred when statisticians wrote sections of research papers – such as the methodology or the results – but did not review the conclusions. This could be unethical, even if it was not deliberately deceitful. “The statistics may have been done correctly but not reported correctly,” she warned.
Dr Tractenberg said that some problems arose because statistics was taught by non-specialists. “Psychologists tell me they don’t want to know all the methods – they just want to know about aspects that relate specifically to psychology – so that’s why they teach the classes themselves. There are competing interests within the institutional context of a university,” she said.
“People are really resistant to the idea that they’re not ethical, or that they should be taking more time to make their students more ethical.”