Overcoming the fear of flying with Joe Public as co-pilot

March 14, 1997

John Durant argues that any assessment of risk hoping to command public confidence must combine scientific expertise with the views of unqualified lay people.

Living in what the German sociologist Ulrich Beck calls the "Risk Society", we find ourselves in a culture peculiarly preoccupied with hazards. What makes the situation more fraught is that this culture is increasingly ambivalent about expert judgements of all kinds. Nowhere is its ambivalence more evident than in the case of expert scientific judgements about risk.

We have to deal in the first place with the idea of risk assessment as a purely technical matter involving calculations of the probabilities of harm based upon the balance of the available evidence. Viewed in this way, risk assessment is clearly the province of experts alone.

This technocratic view of risk assessment lends support to an equally technocratic view of risk communication, according to which the most important task is to ensure the effective delivery of risk information from those (experts) who possess it to those (lay people) who do not. Here, however, we immediately encounter a difficulty. For once we turn to the question of public perceptions of risk, we find ourselves faced with systematic "mismatches" between expert and lay risk assessments.

For example, from the point of view of the expert risk assessor, many lay people seem inclined to overlook the potential hazard of radiation in the home (such as radon). As a result, governments in several industrialised countries have found themselves in the curious position of simultaneously spending money to persuade people that radiation in their homes is dangerous and that radiation in nuclear power plants is not.

Psychologists have tried to explain apparent mismatches of this kind. Thus, the fact that many people behave as if they believe that driving a car is safer than flying in an aeroplane (when on objective criteria the opposite is the case) has been attributed to a combination of the greater dread associated with plane crashes and the greater personal control associated with driving.

Faced with a mismatch between scientific and lay assessments of the relative risks of driving and flying, few of us are inclined to credit the lay assessment with any particular validity. On the contrary, we are more likely to use the insight to help overcome our own subjective biases in the interests of a more "objective" view.

In recent years, however, expert judgements about risk have been called into question by supposedly inexpert lay people. Assurances about the safety of nuclear power, chemical pesticides, food irradiation, and British beef have been met with public scepticism. Recent sociological studies have suggested that trust (or rather, lack of it) is often a key factor in explaining why a non-expert may be inclined to disbelieve an expert. Faced with a public which appears sceptical of expert reassurances, many scientists are inclined to look to public ignorance as the most likely explanation; but, faced with the same situation, many sociologists invoke public mistrust instead. The sociologists' interpretation is supported by a number of survey studies which have shown that in particular areas of potential risk (such as biotechnology) consumer and environmental organisations may command higher levels of public confidence than scientific and industrial organisations.

To deal with this issue of trust we need to examine the institutions responsible for risk assessment. How fair and accountable are they? Questions such as these take us to the political stage. Here we confront the need to adapt our policymaking procedures to the altered circumstances of the Risk Society.

A good example of this process of adaptation in action is provided by Derek Burke (see above). His example demonstrates why the chain of reasoning about risk that I have described is really a cycle. For we began with scientific risk assessment, and we have now returned to it via the public domain. By taking the public dimensions of risk analysis and risk management seriously, we are led to conclude that the character of the scientific risk analysis and management processes themselves must change. By granting the principle that, in order to be truly effective, risk assessment must embrace both expert and lay contributions, we have placed upon the scientific community a new obligation to find effective ways of contributing to a process that is more than merely technical.

Clearly, there is no one right way of doing this. What we need, I suggest, is a period of cautious experimentation in integrating expert and lay perspectives into the risk assessment process, in ways calculated to command public confidence. In our present position, to do anything less would be merely to fiddle while Rome burns.

John Durant is professor of the public understanding of science, Imperial College.
