Hard facts and erratic people

March 14, 1997

Are risks culturally constructed or objectively measurable? John Adams suggests a model that can accommodate both views.

The last time the Royal Society tried to write a report about risk management, it failed. The authors could not agree whether risk was something "actual", capable of objective measurement, or whether it was "culturally constructed". Much of the debate arises from a failure to be clear about the nature of different risks. It is helpful to distinguish between directly perceived risks, risks perceived through science, and virtual risks.

Where risks and rewards are directly perceived, as in climbing a tree or driving a car, everyone is a risk manager. Everyone ducks if they see something about to hit them. Before ducking we do not quantify the probability of being hit. This form of risk management is instinctive, but modified by culture. Concepts such as machismo and prudence, for example, vary between the sexes and across cultures, and condition our responses to risk.

Attempts, whether through regulation or engineering measures, to reduce accidents that are the consequence of taking directly perceived risks commonly end in frustration. When ABS brakes first became available, some insurers offered discounts on the assumption that their superior stopping power would reduce accidents. These discounts are now being withdrawn. Analysis of motorists' insurance claims suggests that drivers with the new brakes drove more recklessly because they felt safer. Directly perceptible risks present an intractable problem for risk regulators because they are seeking to control the behaviour of people who persist in being their own judges of what is safe.

Many risks can only be seen with the help of science. Infectious diseases such as cholera, for instance, can only be seen through a microscope, and a scientific training is needed to understand what is seen. Where there is ignorance of probable cause and effect, my model of risk cannot function effectively. If, for example, people do not know that the source of cholera is their contaminated well, their perception of risk cannot usefully inform their behaviour.

In its 1983 report, Risk Assessment, the Royal Society saw itself as assisting the Government in its "duty to . . . make the environment safe, to remove all risk or as much of it as is reasonably possible". Science has impressive achievements to its credit in devising ways of reducing risks. The doubling of life expectancy in most developed countries since the middle of the last century is a testament to scientists' efforts.

But individuals do not always share the Royal Society's objective of reducing risk. Some prefer to take the benefits offered by science as increased rewards rather than reduced risk. The Davy lamp is often cited as a key safety improvement in the history of mining. It permitted mining's extension into methane-rich atmospheres, but it also led to more explosions. If the number of fatalities per miner, or per ton of coal, goes down, but the total number of fatalities increases, has mining become safer?

The Department of Trade and Industry, the Department of Health and the Royal Statistical Society are all urging the development of a Richter scale of risk in order to place the management of risk on a more systematic basis. But all the examples so far presented of what such a scale would look like assume that past accident statistics provide a good measure of future risk. According to Heisenberg's uncertainty principle, the act of measuring the location of a particle alters the position of the particle in an unpredictable way. The same problem bedevils attempts to measure "actual" risk. The purpose of measuring risk is to inform behaviour, which then alters that which has just been measured.

There are many risks which scientists cannot explain, or about which they disagree. In labelling these "virtual risks" I am using the analogy of the computer programmer's virtual reality. It can embody something real, like a simulator for training pilots, or something imaginary, like Space Invaders. Virtual risks are products of the imagination that work upon the imagination. We do not respond blankly to uncertainty; we impose meaning upon it.

Anthropologists, led by Mary Douglas and Michael Thompson, have developed a typology that helps to account for the different meanings imposed on uncertainty. BSE and global warming are topical examples. Some, whom Douglas and Thompson call egalitarians, view these threats as punishment for technocratic hubris and a failure to respect a fragile Nature. They urge a retreat to practices that they label sustainable. Others, the individualists, consider Nature to be robust and capable of looking after itself, and argue that the best protection in an uncertain world is power over nature; they advocate more science to buttress our defences against any nasty surprises that Nature might have in store. The government, the hierarchists, assure everyone that everything is under control and commission more research that they hope will prove it. And the fatalists, who harbour no illusions about their power to guide events, continue to drink lager and buy lottery tickets; che sarà sarà.

These characterisations of the contributions that "hard" scientists and cultural constructionists might make to the understanding of risk can only hint at the possibilities that exist for fruitful collaboration. Clearly we need more information of the sort that only science can provide. Equally clearly we must devise ways of proceeding in the absence of scientific certainty - science will never have all the answers - and in so doing we must acknowledge the scientific elusiveness of risk. People respond to information about risks, and thereby change them.

Consider the ultimate virtual risk, discussed from time to time in the media. Nasa argues for the commitment of vast resources to the development of more powerful H-bombs and delivery systems to enable the world to fend off asteroids, even if the odds of their ever being needed are only one in a million. But we are also told by Russia's defence minister that "Russia might soon reach the threshold beyond which its rockets and nuclear systems cannot be controlled". Which poses the greater danger to life on earth: asteroids, or H-bombs and delivery systems out of control?

Debates about BSE, global warming and asteroid defences are debates about the future, which does not exist except in our imaginations. They are debates to which scientists have much to contribute, but not ones that can be left to scientists alone. An understanding of the different ways in which people tend to respond to uncertainty cannot settle arguments. It does offer the prospect of more coherent debate among those with a stake in such issues.

John Adams is reader in geography, University College London.
