Across Whitehall, the randomisers are on a roll. As government departments tackle the challenge of strengthening the evidence base for policy, the use of scientific methods such as randomised controlled trials (RCTs) is increasingly seen as the best way of testing whether particular interventions work. Implementation remains patchy but RCTs now have an internal government champion in the Cabinet Office's Behavioural Insights Team (or "Nudge Unit"), which in June 2012 published a paper called Test, Learn, Adapt: Developing Public Policy with Randomised Controlled Trials, co-authored by Ben Goldacre, the medic and science journalist. And RCT advocates received another fillip from the news that Barack Obama's electoral strategists used this approach to test political messaging in marginal states during his 2012 presidential re-election campaign. Or, as Goldacre put it on his blog: "Obama's team used RCTs to help get votes! Awesome sauce."
Against this backdrop, a "practical guide" to doing evidence-based policy is well timed. Nancy Cartwright and Jeremy Hardie make their stance explicit from the first page: "You are told: use policies that work. And you are told: RCTs - randomized controlled trials - will show you what these are. That's not so. RCTs are great, but they do not do that for you. They cannot alone support the expectation that a policy will work for you." Their focus is on how policymakers can move beyond the starting point for much evidence-based policy - RCTs or other studies showing that a particular policy works somewhere - to answer the "one central question that should always be on the table in policy deliberation: Will it work here?" The crucial distinction to be drawn is between evidence for efficacy (the policy worked there, under trial conditions) and evidence for effectiveness (it will work here).
To answer the question of effectiveness, Cartwright and Hardie start with a forensic, if at times dry, analysis of theories of cause and effect. Two factors matter in determining whether a policy will work. First, could the policy play a role in producing the desired outcome in your particular setting? "Smoking can play a causal role in producing lung cancer; owning an ashtray can't." Second, what other support factors must be in place for the policy to work? These can be identified through a process of "horizontal search", akin to listing all the possible ingredients of a cake, followed by a "vertical search" to identify features at the right level of abstraction to play a positive causal role.
Cartwright and Hardie pepper their argument with examples drawn from a wide variety of policy settings: criminal justice, international development, drugs policy and child welfare. They also revisit the case in support of RCTs, acknowledging that "What is special about RCTs is that they are self-validating...[They] can produce highly trustworthy causal claims." Another of their attractive features is that they can establish what caused a particular outcome without any understanding of how: "You put the drug in, and out comes a cure. But you get no idea from the RCT how that happened." Yet for Cartwright and Hardie, it is only through answering the "how" question that you identify the key facts that have to be true for a particular intervention to work in your situation.
So while an RCT is often a good starting point, it is only a start: "one stone on the long, and often tortuous, road to 'it will work here'". And relying too heavily on RCTs can at times discourage decision-makers from exercising their own discretion and judgement. Unashamedly complex, but refreshing and insightful, this book should be read by all those who inhabit the boundaries between policy, evidence and uncertainty.
Evidence-Based Policy: A Practical Guide to Doing It Better
By Nancy Cartwright and Jeremy Hardie
Oxford University Press 224pp, £45.00 and £11.99
ISBN 9780199841608 and 1622
Published September 2012