We have the technology . . .

Cognitive Carpentry - Artificial Minds

May 17, 1996

Train sets, Meccano, Sim City: I've always liked to build working models. And a working model of mind has a special fascination. This is surely sufficient reason for doing AI; but I can also feel a glow of satisfaction at knowing I am helping my neighbouring disciplines survive.

As John Pollock says in Cognitive Carpentry, philosophy needs AI as much as AI needs philosophy. One necessary test of a theory of mind is that we can build an AI system which implements the theory. It behoves philosophers to remember this, for many popular philosophical theories are not implementable.

Stan Franklin takes a similar attitude to psychology, asserting in Artificial Minds that the cognitive scientists, with their lust to build models, understand mind more deeply than psychologists.

But why should such models teach us anything about our own minds? Consider Pollock's work. His aim is a computational theory of rational thought. Taking what philosopher Daniel Dennett calls the "design stance" to AI, he regards rationality as evolution's engineering solution to a difficult design problem. The constraints - logical and computational - on the problem may be so tight that there is only one reasonable solution. If so, then in building a rational machine, we will learn how human rationality necessarily operates.

More specifically, rationality solves the problem of surviving in an uncertain, unstable environment. What mental equipment can we use? First, mechanisms for yielding beliefs about our situation, for example, that it has started snowing. Second, likes and dislikes about general features of situations: we loathe cold. These attitudes are hard-wired to help us keep body temperature and other variables within safe limits. To do so, we must plan a course of action that changes our situation to one we like more: by abandoning our shopping trip, perhaps, and walking home to a warm fire.

This sounds familiar: an agent derives goals from its beliefs, then makes plans to achieve them. But it must choose actions sensibly: if a job could be done equally well using either water or liquid radium, we would hardly regard someone who goes further than the nearest tap as rational. Concentrating on the theory of planning, AI has left evaluation and selection to decision theory, a kind of economic cost-benefit analysis. Pollock combines the two into a unified theory, with useful results on scheduling and action selection.
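To give the flavour of that unification, here is a toy sketch in Python (the names and numbers are mine, not Pollock's, and Oscar's machinery is far subtler): each candidate plan is scored by the probability-weighted utility of its outcomes minus its cost, and the agent adopts the best scorer.

    # Illustrative decision-theoretic plan selection: score each plan by
    # expected benefit minus cost, then adopt the highest scorer. This is
    # a textbook cost-benefit sketch, not Oscar's actual algorithm.

    def expected_utility(plan):
        """Probability-weighted benefit of the plan's outcomes, less its cost."""
        benefit = sum(p * utility for p, utility in plan["outcomes"])
        return benefit - plan["cost"]

    def choose_plan(plans):
        """Adopt the candidate plan with the highest expected utility."""
        return max(plans, key=expected_utility)

    # Both plans achieve the goal, but the tap costs far less, so a
    # rational agent never treks off in search of liquid radium.
    plans = [
        {"name": "use nearest tap",     "cost": 1.0,  "outcomes": [(0.99, 10.0)]},
        {"name": "fetch liquid radium", "cost": 50.0, "outcomes": [(0.99, 10.0)]},
    ]
    print(choose_plan(plans)["name"])   # -> use nearest tap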

This "practical cognition" - deciding which actions to adopt - is one component of rational thought. It relies on the other, epistemic cognition, to perform inferences and supply it with beliefs. Computation time is scarce, so epistemic cognition must be driven by practical cognition's interests and not waste time on irrelevant reasoning. When cycling home, it is more important to avoid cars than to plan supper. Pollock has implemented an an interest-driven reasoner based on this principle. His tests suggest it does well, compared with various theorem-provers, at avoiding unnecessary inferences. He also describes a defeasible reasoner (one that can undo existing beliefs as well as generating new ones) which combines ideas from default logic, circumscription and argument-based approaches.

Even with interest-driven reasoning, an agent relying solely on logical inference would be impossibly slow. "Quick and inflexible" modules, tailored to deliver approximately correct results without long deliberation, are also needed. Some - jerking your hand away from heat - make plans. Others, such as our intuitive comparisons of areas, generate beliefs. Pollock integrates these into a common architecture, viewing a rational agent as a bundle of such modules with logical reasoning sitting on top and tweaking their output as required.
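A crude sketch of that layering, under my own assumptions about how the pieces talk to one another (Pollock's architecture is not this simple), might let the quick modules propose a default action which slower reasoning can then tweak or replace:

    # Layered agent sketch: quick-and-inflexible modules propose an action
    # at once; deliberate reasoning sits on top and, given time, may
    # replace the proposal. Module names here are invented for illustration.

    def reflex_heat(percepts):
        """Quick and inflexible: withdraw from heat without deliberating."""
        if percepts.get("hand_temperature", 0) > 60:
            return "jerk hand away"
        return None

    def deliberate(percepts, proposal):
        """Slow logical reasoning: tweak or replace the modules' output."""
        if percepts.get("snowing"):
            return "abandon the shopping trip, walk home to the fire"
        return proposal

    REFLEXES = [reflex_heat]

    def act(percepts, time_to_deliberate):
        proposal = None
        for module in REFLEXES:
            proposal = module(percepts) or proposal
        if time_to_deliberate:
            proposal = deliberate(percepts, proposal)
        return proposal or "do nothing"

    print(act({"hand_temperature": 80}, time_to_deliberate=False))  # reflex fires
    print(act({"snowing": True}, time_to_deliberate=True))          # deliberation plans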

Pollock has implemented his theory as a Lisp program, Oscar, available via http://info-center.ccit.arizona.edu/oscar/. His company, Artilects, is applying Oscar to medical decision support, among other problems; I will be interested to see how it scales up to them. In the meantime, his book offers insights into planning, defeasible reasoning, decision theory, and agent architectures, and I recommend it. Although intended for professionals in AI and philosophy, it is fairly self-contained: it requires facility with logic and probability theory, but little knowledge of other topics.

In contrast to Cognitive Carpentry, Franklin's Artificial Minds is written for the nonspecialist: it is billed as an informal tour of some artificial "mechanisms of mind" and of three AI debates. The diversity of AI would challenge any writer: two major paradigms, symbolic AI and connectionism, and several minor ones, all home to a variety of techniques and approaches. Symbolic AI is based on the theory that we think by manipulating mental symbols (standing for objects, events and so on) according to explicit rules. On this view, symbol manipulation is sufficient to explain human intelligence (and, as with Oscar, to make a machine behave intelligently); our explanations need not descend to the level of the brain's neural hardware, any more than a programmer need explain his program in terms of logic gates. Connectionist models go deeper, imitating how our neurons operate.

Though the paradigms differ greatly, they usually stand together in their emphasis on modelling isolated mental functions. Some critics argue that we should instead try to understand the whole organism, starting with simple agents like insects. The point is - think of Oscar's practical cognition - that real minds evolved to survive in a complex environment by producing the next action. Our mental functions originated subservient to this end, so this is the context within which we must place our models. Broadly speaking, this is the artificial life or situated agents approach.

Franklin promotes a combination of this "action selection" view with another which - following Marvin Minsky's Society of Mind and the rise of distributed computing - is increasing in popularity, namely that mind is not a hierarchical system overseen by a global controller, but a collection of autonomous modules each devoted to a specialised task. So although he describes one symbolic AI program, Soar (one of a few programs claimed to embody a unified theory of cognition), and some examples of connectionism, as well as the debate between the two, he gives more attention to artificial life and multiple-agents research, including Wilson's Animat (a creature which evolves rules about how to find food) and Pattie Maes's behaviour networks. He also describes Pentti Kanerva's nifty model of sparse distributed memory, and Hofstadter and Mitchell's excellent Copycat analogical reasoner.
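Kanerva's model is simple enough to sketch in miniature. The following Python toy (with parameters shrunk drastically; real sparse distributed memories use much longer words and many more locations) stores a pattern in every "hard location" whose random address lies within a Hamming-distance radius of the write address, and reads by summing the counters of nearby locations and thresholding at zero, so that even a noisy cue usually retrieves the original.

    # Miniature sparse distributed memory, after Kanerva. Toy-sized
    # parameters, chosen for illustration only.
    import random

    N = 64            # word length in bits
    LOCATIONS = 500   # number of hard locations
    RADIUS = 26       # Hamming-distance activation radius

    random.seed(0)
    addresses = [[random.randint(0, 1) for _ in range(N)] for _ in range(LOCATIONS)]
    counters = [[0] * N for _ in range(LOCATIONS)]

    def hamming(a, b):
        return sum(x != y for x, y in zip(a, b))

    def write(address, word):
        """Add the word, as +1/-1 per bit, to every activated location."""
        for loc in range(LOCATIONS):
            if hamming(addresses[loc], address) <= RADIUS:
                for i in range(N):
                    counters[loc][i] += 1 if word[i] else -1

    def read(address):
        """Sum counters over activated locations; threshold each bit at zero."""
        sums = [0] * N
        for loc in range(LOCATIONS):
            if hamming(addresses[loc], address) <= RADIUS:
                for i in range(N):
                    sums[i] += counters[loc][i]
        return [1 if s > 0 else 0 for s in sums]

    # Store a pattern at its own address, then recall it from a noisy cue.
    pattern = [random.randint(0, 1) for _ in range(N)]
    write(pattern, pattern)
    cue = list(pattern)
    for i in random.sample(range(N), 3):   # flip three bits of the cue
        cue[i] ^= 1
    print(read(cue) == pattern)            # usually True: recall corrects the noise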

These originated mostly between 1985 and 1991. It is good to see a popular writer who does not feel obliged to recapitulate a science's entire development, thus forcing himself to squeeze the quarks and quasars into his final chapter's last few pages. That said, pointers to other popular accounts would help the reader obtain a balanced view. Franklin gives a nice survey of recent work for the general reader, though some of his program descriptions are unclear. More examples would help. Textbooks tend to omit the topics he covers, so Artificial Minds would also interest students.

Jocelyn Paine teaches artificial intelligence in the department of experimental psychology, University of Oxford.

Cognitive Carpentry: A Blueprint for How to Build a Person

Author - John L. Pollock
ISBN - 0 262 16152 4
Publisher - MIT Press
Price - £29.50
Pages - 377
