The ghost in the machine

18 November 1994

Most of us are intrigued by our own consciousness. But reading recent books written by eminent people leaves us baffled by what seems to be a demarcation dispute. The explanation of human consciousness has developed the character of a military objective that warring armies are battling to conquer. Neuroscientists line up against philosophers, mathematicians against psychologists. Who has the right to explain consciousness?

In the midst of the trumpets of battle we at Imperial College in London are experimenting with a new neural machine which we call Magnus (Multiple Automata of General Neural Units). Magnus has been used to see if a bunch of artificial neurons can carry out some of the feats that their living counterparts, the cells of human brains, perform in going about their daily lives. The vexing questions are: how much of what Magnus is doing could be described as some form of artificial consciousness? Where does this fit into the battle for an appropriate explanation of human consciousness?

Consciousness is seen by some eminent scientific figures as something improper. It is OK for philosophers to waste their time with it, but real scientists will leave it well alone. Those who do not are suspect. So I enter this dangerous battleground as the jester at the feast by arguing that an artificial consciousness could explain human consciousness and do some peace-keeping among those academics who do battle over the phenomenon.

To take my argument seriously, it is necessary to suspend the deep contempt that many have for inanimate objects. We are happy to think of a friend as being conscious. But were she to unzip the front of her face and reveal a heap of circuit boards and transistors, our attribution of consciousness might vanish. I suspect that this prejudice gives rise to a common distaste for mechanistic ideas.

Many argue that what is "inanimate", being without an anima or "soul", is perforce devoid of consciousness, and therefore worthy of contempt. I see this as a fear of the thunderbolt from above that strikes those who attempt to remove mysticism from the makeup of a human. Reductionism may not be the ogre it is made out to be -- it could help in the long run.

But for any artificial device to be dubbed "conscious", the honour needs to be earned through a reasonable conviction that the artificial object can do not only most, but possibly all, of those things that lead us to believe that we are conscious. This includes our knowledge of how we gather our experience. Consciousness therefore cannot be found in the programs written over the past 40 or so years under the heading of Artificial Intelligence, as these represent isolated and superficial simulations of behaviour. This need not be true of neural systems. Magnus may never get around to playing chess or advising doctors as AI programs do, but it may tell us what it is like to be Magnus and make it possible for us to understand how it manages to do it.

Artificial neural networks are more than just the latest playthings which have come into the hands of computer scientists. These systems are inspired by the processing structures of cells in the brain -- the structures of neurons. Francis Crick's book, The Astonishing Hypothesis, which argues that consciousness is the product of the behaviour of some of the neurons in our brain, makes the point: if the behaviour of real neurons is the basis of real consciousness, it is legitimate to search for artificial consciousness among artificial neurons.

However, with Magnus, we are looking for consciousness in a system that does not pretend to model the brain in great detail, but one which, being inspired by the brain, develops its own knowledge and learns to control this process. The fact that the model and the living brain work in different ways will help us to understand what kind of latitude there can be in the meaning of the word "consciousness", whether artificial or real. While the consciousness that I can explain is not my own but that of Magnus-like automata, I happen to think that such consciousness is a bit like that which we all know in ourselves but find impossible to describe.

The final judgement as to whether this is the case is left entirely to those who take an interest in the argument. The conscious world of a living individual is a cocooned and private thing. Each of us lives in our own cocoon and we have to imagine what it might be like to be someone else. The fact that another is conscious in the same way as I am is an act of belief -- one that philosophers have been known to care for. With Magnus I can actually see the sum total of what Magnus is "thinking" at any one time. And, because I define a process of learning which has "iconic" properties, the display on Magnus's screen actually represents the perceptual events which come into its "consciousness". In a word, I can get into Magnus's cocoon and know directly what it is like to be Magnus. Further, because I have built Magnus, I know how it works, so I can predict what it will do and explain this to others. I can also provide an explanation of how it is that Magnus's mental world is the way it is. I can explain the artificial consciousness of Magnus.
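The idea of an "iconic" neural state machine -- a network whose internal state is trained to copy the percept itself, so that the state can be read off a screen like an image -- can be sketched in a few lines. The article does not describe Magnus's actual architecture; everything below (the class name, the perceptron-style learning rule, the 4x4 "screen") is an illustrative assumption, not Aleksander's design.

```python
# A toy "iconic" neural state machine, loosely in the spirit of the
# article's description of Magnus. All details here are illustrative
# assumptions, not the real Magnus architecture.
import numpy as np

rng = np.random.default_rng(0)

class IconicAutomaton:
    """A network whose internal state is itself a small binary image."""

    def __init__(self, size=16):
        self.size = size
        self.state = np.zeros(size, dtype=int)
        # One weight matrix maps (stimulus + current state) -> next state.
        self.w = rng.normal(0.0, 0.1, (size, 2 * size))

    def step(self, stimulus):
        x = np.concatenate([stimulus, self.state])
        self.state = (self.w @ x > 0).astype(int)
        return self.state

    def learn_icon(self, stimulus, icon, lr=0.5, epochs=20):
        """'Iconic' learning: train the next state to copy the percept."""
        for _ in range(epochs):
            x = np.concatenate([stimulus, self.state])
            pred = (self.w @ x > 0).astype(int)
            self.w += lr * np.outer(icon - pred, x)  # push state toward icon
            self.state = icon.copy()                 # clamp state to percept

# A 4x4 "percept" (a cross) and an arbitrary stimulus that cues it.
icon = np.array([0, 1, 1, 0,
                 0, 1, 1, 0,
                 1, 1, 1, 1,
                 0, 1, 1, 0])
stim = np.array([1, 0, 1, 1, 0, 0, 1, 0,
                 0, 1, 1, 0, 1, 0, 0, 1])

net = IconicAutomaton()
net.learn_icon(stim, icon)

net.state = np.zeros(16, dtype=int)   # forget, then re-present the stimulus
recalled = net.step(stim)

# The internal state can be displayed directly: the "screen" shows
# what the automaton is "thinking" at this moment.
for row in recalled.reshape(4, 4):
    print("".join("#" if v else "." for v in row))
```

The point of the sketch is not the learning rule, which is ordinary, but the transparency: because the state is forced to be an image of the percept, an observer can read the machine's "mental content" straight off its state vector.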

It would be a mistake to think of Magnus just as a zombie-like creature whose "thoughts" can be displayed on a screen. In Magnus it is possible to discover behaviour which can be interpreted as "selfhood", "awareness", "will" and "emotion". I have chosen to express a "theory" of artificial consciousness as a fundamental postulate and 12 corollaries. This is not just a process of giving mathematical respectability to otherwise simple ideas. It is a framework that allows a long sequence of questions about such properties (corollaries) to be answered on the basis of an initial belief: a fundamental postulate. This fundamental postulate is Crick's "astonishing hypothesis" -- that consciousness is the product of the behaviour of a bunch of neurons in the brain -- transferred into an artificial domain. Many will find this to be a simplistic notion. However, it is full of hidden implications.

The corollaries address the distinguished Oxford mathematician Roger Penrose's belief, expressed in his "Child's View" in The Emperor's New Mind, that "Consciousness seems to me to be such an important phenomenon that I simply cannot believe that it is something just accidentally conjured up by a complicated computation". The corollaries of artificial consciousness show that important properties of consciousness (including qualia -- the "redness" of a red boat...) are not just accidental products of a complex computation. These corollaries hop across psychology, linguistics, biology and philosophy, providing the interdisciplinarity which, I believe, creates the opportunity of arriving at a consensus among academics drawn from different disciplines who currently disagree about any explanation of consciousness.

Crick argues that once the neural correlates of consciousness have been found in living brains, it becomes possible to build machines that are, in some way, conscious. I find that starting with living brains is too complicated and have put the proposition about conscious machines the other way round. If consciousness is a property of neural material, it will be easier to develop the general view through the synthesis and analysis of the artificial version first. I concede that the artificial and the real cannot be equated on every count. Hence I insist on the phrase "artificial consciousness" as a usable, explanatory metaphor for what we all feel about our own consciousness. This may also lay to rest once and for all some of the unhelpful mystique in this area. Despite the potentially unpalatable prospect for some philosophers, this view of consciousness might make sense to the majority of the conscious population.

Igor Aleksander is professor of neural systems engineering in the electrical and electronic engineering department at Imperial College, London.


