Even if a trial did suggest one formation was better, its recommendation would apply only in very few circumstances and with a caravan of caveats
A new phrase has entered the language: “the evidence shows”. It replaces “research has shown”, which sounds dreadfully passé, perhaps because we have all become wary - if not actually tired - of the claims of research. Research, it is now commonly realised, can show more or less whatever its originators want it to show. And we have come to realise, too, that its claims rarely seem to hold fast - if it’s not telling us the bleedin’ obvious, its findings seem to hold only in limited circumstances.
But there is an undying desire for certainty. It has an almost religious quality. Just as the religious zealot will quote the holy book, so the epistemic zealot will invoke evidence: “The evidence shows…” (Ah! Evidence shows! I’ll shut up then.)
The trouble is, evidence is everywhere, and what people mean when they say “evidence shows” is rather less compelling. What they really mean is: “My evidence is better than your evidence, and my evidence shows…” To prove it, they emerge with a league table of evidence quality, with one particular kind consistently in top place: evidence from randomised controlled trials. Those who sing the marvels of RCTs claim that these wondrous products of the methodologist’s science provide the “gold standard” in evidence quality. And they proclaim their unique benefits not just for the life sciences, but for life, the universe and everything.
On the recent death of an eminent medical statistician, BBC Radio 4’s Today programme asked Sir Richard Peto to discuss his work. Peto noted his pioneering use of RCTs, explaining that they compare the progress of a treatment group with a control group. It was the randomisation, he said, that was important: “Half get, just on the flip of a coin…the standard treatment, half get the new treatment; and quite often you find the new treatment isn’t any better, sometimes it’s worse.”
So far so good. But then Peto did what RCTs’ proponents often do. Like out-of-control balloons, otherwise sane and rational people zoom off at odd angles into extraordinary hyperbole about their truth-delivering qualities.
“Sixty years ago before trials, doctors often had no way of knowing what worked and what didn’t - and it’s beautiful now…the evidence,” he said.
As if evidence had not existed before 1950! Recall John Locke, quoting Cicero: “What is so rash and so unworthy of the dignity and firmness of the wise as either to believe falsely or to maintain without any doubt what is perceived and conceived without enough investigation?”
Words to the wise, then: don’t be arrogant about your knowledge. “Do more investigation. Do enough investigation” should be the maxim, says Locke, not “Do it this way”.
The science writer Ben Goldacre recently suggested that we could find out all sorts of useful things from RCTs, and not just in medicine. So persuasive was he that the BBC gave him a slot in which he could inform social scientists, civil servants and politicians of the solution to their methodological conundrums: RCTs. Thank goodness: policy need no longer be a stab in the dark or, worse, politically motivated. “Do long prison sentences work?…randomise properly and run a trial,” he wrote. “Different teaching approaches?…Harder exams?…Job-seeking support? Run a trial.”
I usually agree with Goldacre, but I have to differ from him here. His faith in trials as a solution to the uncertainties of social policy is merely faith. The effect sizes garnered from some of the biggest trials - such as those with the multibillion-dollar Head Start and Follow Through education and social programmes in the US in the 1970s and 1980s - have been nugatory. Wonderful as RCTs may be in medicine, they can tell us much less about the potential of social interventions.
Albert Camus said: “All that I know most surely about morality and obligations I owe to football.” He might not have stopped there, for social interventions are much more like football than pharmaceuticals. Certainly, a well-conducted trial will tell you if tetracycline is the best way of treating tuberculosis. But your football manager won’t find out whether a 4-3-3 formation is better than 4-4-2 by commissioning a controlled trial, however big and well run. Why? Because conditions change, managers differ, people think and conspire, and local circumstances vary. Even if a trial did tell you that 4-3-3 seemed to be better most of the time, its recommendation would have to be restricted to a tight range of circumstances and there would need to be an interminably long caravan of caveats appended to any, inevitably fairly weak, conclusion that it was able to offer: less good against a strong defence; not if the wind is following; not if the opposing manager has drenched the field beforehand; not if Wayne Rooney is playing against you - or with you. Any power in the original recommendation will soon be snuffed out.
Sophisticated understanding about the admixture of these provisos, exceptions and permutations is summed up in the word “experience”. Recognition of the value of experience, intelligently used, is why Manchester United are unlikely to sack Sir Alex Ferguson and replace him with a committee such as the National Institute for Health and Clinical Excellence. And it’s why we shouldn’t replace trust in judges’, civil servants’ and teachers’ experience with a faith in trials. It wouldn’t be good science.