At least half of papers in economics are not reproducible, a new analysis has found, suggesting that the “reproducibility crisis” in academia is not confined to lab sciences.
Researchers from the United States Federal Reserve and the Department of the Treasury tried to replicate the results of 67 papers across 13 prestigious journals. Even after contacting authors when necessary, they succeeded in only 49 per cent of cases where the data were not confidential and they had the right software to analyse them.
“We assert that economics research is usually not replicable,” the paper concludes.
The findings feed into broader concerns that academics are engaging in statistical sleights of hand, not being open with data and failing to control their own biases in order to get career-boosting positive results in top journals.
Last week, two studies reported that scientists doing experiments with animals were often failing to use well-known techniques – for example, blinding themselves to which animals were receiving which drug – to mitigate their biases, and were therefore potentially exaggerating the impact of new treatments.
But this new study highlights that the problem of reproducibility might not be confined to the lab.
According to the paper, the main reason for being unable to replicate findings was an inability to find the right data or the computer code that produced the original results, even after contacting the authors. Code was missing crucial functions, or certain variables were absent from the data, the paper says.
However, in nine cases where the authors of the paper had both the right dataset and the right code, they nonetheless obtained a different result or the code failed to finish executing, according to “Is Economics Research Replicable? Sixty Published Papers from Thirteen Journals Say ‘Usually Not’”.
Sarah Necker, a research associate at the Walter Eucken Institute in Freiburg whose previous research has revealed suspect research practices among economists, said that it was unclear whether the lack of reproducibility was down to the deliberate “hacking” of statistics to get a positive result or to honest errors.
But economics might be particularly open to statistical manipulation compared with other disciplines, Dr Necker explained. For example, an economist could scrutinise data to uncover what causes economic growth and find that it correlates with a particular variable, such as the level of innovation in a society.
This correlation might disappear if a different variable is controlled for, but given that the researcher decides which variables to control for, this affords a great deal of scope for manipulating results, she argued.
Last year, a survey of economists by Dr Necker revealed that more than a third admitted to “searching for control variables until you get the desired results”, even though 85 per cent considered this practice to be unjustifiable.
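The practice the survey describes can be illustrated with a small simulation. The sketch below is purely hypothetical – the variable names (“growth”, “innovation”) and the noise data are invented for illustration, not drawn from any study – but it shows how trying many control variables and keeping the most flattering specification inflates apparent significance even when no real relationship exists:

```python
import numpy as np

def ols_tstat(y, X, coef_idx):
    """OLS t-statistic for one coefficient (X must include an intercept column)."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    dof = len(y) - X.shape[1]
    sigma2 = resid @ resid / dof                  # residual variance
    cov = sigma2 * np.linalg.inv(X.T @ X)          # coefficient covariance matrix
    return beta[coef_idx] / np.sqrt(cov[coef_idx, coef_idx])

rng = np.random.default_rng(0)
n = 200
growth = rng.normal(size=n)           # outcome: pure noise
innovation = rng.normal(size=n)       # regressor: genuinely unrelated to the outcome
controls = rng.normal(size=(n, 20))   # 20 candidate control variables, also noise

ones = np.ones((n, 1))
baseline_t = ols_tstat(growth, np.hstack([ones, innovation[:, None]]), 1)

# "Specification search": try each control in turn and keep whichever
# makes the innovation coefficient look most significant.
t_stats = [baseline_t] + [
    ols_tstat(growth, np.hstack([ones, innovation[:, None], controls[:, [j]]]), 1)
    for j in range(controls.shape[1])
]
best_t = max(t_stats, key=abs)
print(f"baseline |t| = {abs(baseline_t):.2f}, "
      f"best over {len(t_stats)} specifications |t| = {abs(best_t):.2f}")
```

Because the researcher reports only the winning specification, the usual significance thresholds no longer mean what readers assume they mean – which is one reason such results can fail to replicate.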
She added: “Economic methods have become pretty complicated. The datasets are huge. In economics it has become popular to have some fancy [mathematical] method, but that makes it more difficult to replicate.”
Some journals now ask for data to be made available alongside articles, Dr Necker explained, and an online “Replication Network” encourages the practice among economists.
But the incentives for economists to provide open data so that their results can be replicated were still “low”, she said: preparing clean data for public scrutiny was “time-consuming”, and it increased the risk that errors would be discovered.