What is the most important academic discipline in the world right now? You could make a strong argument that it is economics. Clashes in the UK between Cameron and Corbyn over austerity – mirrored in disputes across the world – largely boil down to what you think will happen to the economy if the state borrows and spends a bit more than it currently does.
In such a complex world, no one can answer this question with absolute certainty, so our view on this largest of political questions comes down to whatever fuzzy assumptions we hold about how the economy works, even if we aren’t quite sure where we picked them up. As John Maynard Keynes wrote, “practical men, who believe themselves to be quite exempt from any intellectual influence, are usually the slaves of some defunct economist”.
This makes it particularly concerning that researchers working for the United States Federal Reserve and the US Treasury have found that more than half of economics papers aren’t replicable. At first sight, this seems extraordinary. These aren’t scientists repeating an experiment. They are economists simply trying to get the same results by re-running a calculation on the original data. Surely this is like a calculator giving two different answers to the same sum?
It’s not quite this simple. The main reason for a lack of reproducibility is that the original data simply weren’t all there (although this says something about how open economists are with their stats). Yet in some cases even when all the original ingredients were there, the researchers still got a different result, or failed to get one at all.
But there is an even more fundamental problem here, as Sarah Necker, a research associate at the Walter Eucken Institute in Freiburg, explained to me. An economics paper could be perfectly reproducible, and yet still be utterly flawed. This is because economists have great freedom to pick and choose their statistical methods and what variables to control for, which they can “hack” until they get an exciting or politically useful result.
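The mechanics of this kind of "hacking" are easy to demonstrate. The sketch below is a toy illustration, not any real economist's analysis: it generates an outcome and twenty candidate explanatory variables that are all pure noise, then searches the candidates for one that happens to correlate "significantly" with the outcome. Run enough specifications and something will usually clear the bar by luck alone.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 30

# Pure-noise "outcome" (imagine GDP growth) and 20 pure-noise candidates
# (imagine different control variables or data subsets to try).
outcome = rng.normal(size=n)
candidates = rng.normal(size=(20, n))

# Specification search: keep any candidate that looks "significant".
# With n = 30, |r| > 0.36 roughly corresponds to p < 0.05 (two-sided),
# so each test has about a 1-in-20 chance of a false positive.
hits = [i for i, x in enumerate(candidates)
        if abs(np.corrcoef(x, outcome)[0, 1]) > 0.36]

print(f"'Significant' predictors found in pure noise: {len(hits)} of 20")
```

A paper reporting only the winning specification would reproduce perfectly from its data, yet its headline result would be an artefact of the search.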
This issue blew up in 2013, when Harvard economists Carmen Reinhart and Kenneth Rogoff were accused of statistical cherry-picking. A paper of theirs had concluded that countries with a debt-to-GDP ratio of more than 90 per cent tended to have contracting economies, and was cited by some as a justification for austerity.
Reinhart and Rogoff admitted an “accidental omission” that led to some countries being excluded from their analysis. But they denied deliberately skewing the data to get the desired result, and argued that their statistical methods were sound (they had been accused of “unconventional weighting” of statistics).
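Why weighting matters is worth spelling out. The figures below are invented for illustration and have nothing to do with Reinhart and Rogoff's actual dataset; they simply show that averaging each country first and then averaging the countries gives a different answer from pooling every country-year, because the first method lets a single bad year in one country count as much as many decent years elsewhere.

```python
# Hypothetical growth rates (%) during high-debt episodes in three countries.
# All numbers are made up for illustration.
episodes = {
    "A": [-1.0],                 # one bad year
    "B": [2.0, 2.5, 2.2, 2.3],   # four decent years
    "C": [1.0, 1.5],
}

# Method 1: average each country's growth, then average the countries.
# Country A's single year carries as much weight as country B's four.
country_means = [sum(v) / len(v) for v in episodes.values()]
by_country = sum(country_means) / len(country_means)

# Method 2: pool every country-year and take one overall average.
all_years = [g for v in episodes.values() for g in v]
by_year = sum(all_years) / len(all_years)

print(f"country-weighted: {by_country:.2f}%  year-weighted: {by_year:.2f}%")
# The two methods disagree on the same data: roughly 0.83% versus 1.50%.
```

Neither weighting is self-evidently wrong, which is precisely the problem: the choice is defensible either way, and it moves the headline number.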
Another spat erupted the following year. Chris Giles, economics editor at the Financial Times, accused the economist Thomas Piketty, author of the vast Capital in the 21st Century, of using dodgy data to conclude that the UK had become more unequal since the 1980s, something Piketty denied.
Who is right in these two cases remains debatable. The problem for economics is that there is ample scope to choose the statistical treatment that delivers the desired result. Only the authors will ever know whether they have been truly objective in their methods, says Necker (and, given that bias can be unconscious, even they may be in the dark).
Another problem, Necker points out, is that perhaps more than any other discipline, economics deals in correlation, not causation. Reinhart and Rogoff, as they themselves noted, found only a correlation between debt and low growth, and thought the causation flowed both ways (although they still argue that “growth is an elusive goal at times of high debt”).
In the week after the Nobel Prize for economics was awarded, all this leaves the discipline open to the familiar charge that it is not remotely scientific.