Cases in need of evaluations

July 2, 1999

How many public health promotions actually achieve their aims?

Ann Oakley tells Geoff Watts the only way to find out is to submit health campaigns to the same controlled trials as the rest of medicine

In the early years of this century, social theorists with a policy change to peddle, especially in education, were often willing to prove their point by organising a controlled trial. Less commendably, they were equally willing to disown results that failed to back their beliefs. Some cheerfully renounced quantitative methods in favour of qualitative assessments that were more likely to yield the "right" conclusion.

Recounting this bit of the history of her own discipline with something akin to glee, Ann Oakley of the Social Science Research Unit at the Institute of Education makes a rhetorical comparison with medicine. "What would you think of a doctor who said, 'Well, I have done a randomised controlled trial, and the drug seems to be killing people. But I don't want that answer, so I'll just ask patients if they like taking it'?"

Oakley is an uncompromising advocate of the importance of evaluating social policy. She thinks the controlled trial is the best means of making these judgements, and she thinks social scientists should use it more. Her "awakening" to a methodology that is still unpopular with some researchers in her field dates back to her time at the National Perinatal Epidemiology Unit in Oxford. "It was in the 1980s, when I'd moved to the Institute of Education, that I began to think that if medicine needed to do this kind of evaluation, why not the social sciences?" She got a small grant from the Economic and Social Research Council to set up a database of social interventions. In 1995, her brainchild became the Centre for the Evaluation of Health Promotion and Social Intervention: the EPI-Centre.

Given the level of welfare spending that all European states support, academics who study the effect of social policy are unlikely to want for raw material. Health promotion in particular has boomed recently. For those who find it difficult to keep track of National Health Service reorganisations, the latest upheaval (which came into effect in April) herded GPs and community nurses into "primary care groups" responsible for about 100,000 people. As one of their tasks is to promote health, there will be no let-up in the efforts being made to cajole you to look after yourself better.

But how many of these well-intentioned schemes achieve anything? That is what we should be asking, Oakley says. "There are still too few well-designed evaluations of health promotion - and by well-designed I mean something that has a control group."

Why so little interest? "Professionals - and this was just as true of medicine - feel that they are the experts. It took medicine a long time to get round to questioning whether this sort of professional intuition is a sufficient basis for intervening in people's lives. We are surrounded by professionals who tell us they know what is best, whether it is social welfare, education, social policy or health promotion."

Perhaps the relative lack of enthusiasm for evaluation reflects real methodological problems in trying to assess something more complicated than the effects of a new drug. Oakley is sceptical. "I don't believe the problems are any different. Something works or it doesn't. In fact, you could argue that the case for evaluation in health promotion is even stronger than elsewhere in medicine because the people you are dealing with are not ill in the first place."

The notion that misconceived health promotion could be damaging is seldom heard. The implicit thinking appears to be that if it works, good; if it doesn't, no harm is done. But Oakley points to evidence that bad health promotion can undermine good health or achieve the opposite of the intended result. In the United States, a trial of new attempts to dissuade young people from having sex prompted boys to go out and look for it. In a British study, more vigorous efforts by health visitors to stop old people falling down and breaking bones seemed to increase the fracture rate.

Some social scientists, Oakley says, continue to argue that their interventions cannot be easily judged. "In health promotion, there is a lot of talk about things such as community development and empowerment. The argument is that it is not about simple or single actions. The outcome may be complicated, or there may be different outcomes that are relevant. You may be interested in what happens to communities rather than individuals. But all these things can apply just as much to medicine."

Oakley sees a pattern in the resistance to some research methodologies: "These questions are bound up with professional identity. Some people in health promotion are doing 'collective bonding' around resistance to trials. A similar thing happened with feminist social science. There were feminist social scientists who would say you must only use qualitative methods. You could not be a proper feminist social scientist if you were not using them. If you relied on numbers you had to apologise for them."

The EPI-Centre maintains one database comprising 8,000 health promotion studies and another on the effectiveness of specific interventions. Ros Weston, EPI-Centre's director for health promotion, reckons the centre has handled about 400 queries in the past 18 months. "Some of them come from people doing literature reviews. But many are from people in practice trying to convince policy-makers that a scheme they want to set up really will work."

Oakley has kept her own hand in at research. With colleagues at University College London, for example, she is running a Medical Research Council-funded trial of peer-delivered sex education - currently very fashionable, she says. "We're looking at the effectiveness of three sex education sessions delivered by 16- to 17-year-olds to 13- to 14-year-olds, and comparing that with teacher-led education."

Like her other projects, it has - naturally - been designed to allow for an objective evaluation of its findings. And a glance through some of Oakley's recent publications reminds you just how highly she rates this aspect of research. To quote a rhetorical question she posed in one recent review: "Who protects the public from the hard-sell techniques of health salespeople when the dominant paradigm is the unevaluated or poorly evaluated intervention?"
