Output: still not really convinced

Views into the Chinese Room

April 11, 2003

John Searle, an Oxford-trained philosopher of mind who has spent most of his career at the University of California, Berkeley, first opened the door of his Chinese Room in 1980. The thought experiment purported to show that the conscious mind cannot, in principle, work like a digital computer.

A computer can be programmed to output one set of symbols in response to an input of another set of symbols. From the outside, it can look as though it is "thinking" and even holding a conversation. Input: "How are you today?" Output: "I am very well, thank you!" In 1950, the British mathematician Alan Turing, fresh from cracking the Enigma code, declared that a conversational computer of this kind should be deemed intelligent if a questioner could not tell from its answers that it was not a person. This is the Turing test for machine intelligence and/or consciousness.

Thirty years on, Searle begged to differ. Those questions and answers could never have any meaning for the computer, and therefore it could never be conscious or intelligent. Because we interpret its inputs and outputs as useful information, we treat the computer's hidden workings as meaningful information processing. But such meaning is not intrinsic to the computational system, and this is what the Chinese Room story illustrates.

The Chinese Room is a fictitious locked space in which Searle, who speaks no Chinese, has a supply of Chinese symbols, together with instructions (in English) for using them. When Chinese characters are passed in to him, he consults the instructions and passes out more symbols. Neither the input nor the output means anything to the operator in the room, but if his instructions are good enough, it will look to an outsider as though he is answering, in Chinese, the Chinese questions passed to him. That, claims Searle, is the situation with the Turing test. Only from the outside does the computer appear to understand the questions. Inside, all is a shuffling of meaningless symbols.

For more than 20 years, this imaginary scene has been the focus of debate over artificial intelligence (AI). By 1997, Searle could count more than 100 published attacks on the Chinese Room argument (CRA). His chief adversary, Daniel Dennett, quipped that Searle might be able to count the attacks, "but I guess he can't read them" or he would not continue trotting out the same argument. Dennett scorned the Chinese Room, because "just about everyone who knows anything about the field dismissed it long ago".

Searle is unrepentant and continues to argue that none of the attacks has succeeded in destroying his argument or its message: computer programs consist of syntax (formal rules); minds have semantic (meaningful) content; syntax by itself is not the same as, nor by itself sufficient for, semantics. Therefore, computer programs are not minds, and vice versa.

This background is necessary to appreciate the book under review. It was produced to mark the 21st birthday of the Chinese Room and bears witness to the argument's continuing fascination, Dennett notwithstanding. The subtitle proclaims the volume to consist of 20 "new essays". Technically, they are, but the phrase must be interpreted broadly. Two of the contributions (by Ned Block and Roger Penrose) contain sufficient reprinted material to require formal acknowledgement to the earlier publisher. And Terry Winograd's chapter, although previously unpublished, admits to being a lightly revised version of a 1979 article, prepared as a commentary on Searle's original Chinese Room paper but missing the publication deadline.

The collection kicks off with an excellent introduction by co-editor John Preston, who outlines the history and key issues of the CRA. This is followed by Searle's own contribution. He does not respond directly to any of the other articles in the book, or indeed to any specific attacks on his views. He reflects instead on the way "twenty-one years in the Chinese Room" have laid bare some weaknesses, not just in AI, but in contemporary intellectual life generally. One of these is a failure to distinguish between features of the world that are objective (observer-independent) and those that are subjective (observer-dependent). Related to this is an unwillingness to accept that ontologically subjective phenomena, such as mental states, can and should be a proper field of study for epistemically objective science.

Most contributions to the book are critical of Searle's CRA. Some tackle its basic tenets: Mark Bishop questions Searle's assumption that syntax is not intrinsic to physics, and John Haugeland claims that semantics intrinsic to computers and their programs do exist. Igor Aleksander, a tireless promoter of practical applications of machine intelligence, also challenges the view that computation is limited to meaningless ("non-intentional") symbol manipulation. He claims that in what he calls neurocomputing, artificial systems achieve genuine understanding when imagined depictions are stimulated in response to language. Herbert Simon and Stuart Eisenstadt claim that the empirical evidence for "understanding" is stronger in computers than in humans, while Diane Proudfoot asks whether we should even care about a machine's understanding or lack of it, provided it functions adequately.

Larry Hauser and Kevin Warwick draw on empirical studies and robot research, as do - in a surprising way - Selmer Bringsjord and Ron Noel, who use evidence from robot research to support Searle against critics such as Dennett. Alison Adam discusses approaches that seek to blur the distinction between humans and machines altogether, in her chapter "Cyborgs in the Chinese Room". From such a perspective, arguments such as Searle's about where the boundary should be drawn are less interesting than actor-network theory and Donna Haraway's cyborg feminism, which seek to remove rather than just reposition the boundary.

Penrose, the Oxford mathematician who is normally thought of as Searle's ally in the battle against strong AI, uses his contribution to defend himself against Searle's criticisms and to launch a counter-attack. He disagrees with the view, expressed by Searle, that at a trivial level "everything is a digital computer". The philosophers, he says, have been led astray here by the computer people, who in their turn have been led astray by the physicists. Searle and many others misunderstand the term "computability". Computability is an absolute mathematical notion that is quite independent of the level of description that is used. So when Searle says that Penrose's argument is false because of an illegitimate moving between levels, "Searle's argument is simply wrong".

A number of critics focus on the CRA in relation to the Turing test. Jack Copeland attacks the logic of Searle's argument, accusing him of failing to distinguish between different aspects of Turing's work (such as his "O-machine" and the "universal Turing machine"). Georges Rey agrees that the Turing test cannot survive the CRA, but says this matters only if AI is treated as a behaviourist theory, and Rey defends it as a functionalist theory. Jeff Coulter and Wes Sharrock also agree that the CRA shows a flaw in the Turing test, but they challenge Searle on his theme of observer dependence/independence, accusing him of being a Cartesian dualist.

Perhaps the most fascinating chapter is by Stevan Harnad, former editor of Behavioral and Brain Sciences, the peer-commentary journal in which Searle's original Chinese Room paper was published. Harnad recalls being unimpressed by the submission, and then frustrated that, as editor, he could not join in the subsequent scrap. "I felt that I could settle Searle's wagon if I had a chance, and put an end to the rather repetitious and unresolved controversy." But when he entered the debate, in the form of an online discussion group, he found "such a litany of unspeakably bad anti-Searle arguments" that he spent his time defending Searle "instead of burying him, as I had intended to do". Along the way, he did publish a critique in another journal, but it was ignored by Searle. In a brilliant contribution, Harnad not only relates this history, but also sets out a careful and ultimately supportive analysis of the CRA. He suggests that Searle has not always helped his own case, partly because, over two decades and in response to many attacks, his argument has sometimes shifted.

This is an excellent gathering of scholars, but a notable absentee is Dennett. My guess is that he was invited to join the party and declined, on the grounds that he has already demolished the CRA. The absence is to be regretted. Not Hamlet without the prince, exactly; more like the Scottish play without the ghost.

Anthony Freeman is managing editor, Journal of Consciousness Studies.

Views into the Chinese Room: New Essays on Searle and Artificial Intelligence

Editors - John Preston and Mark Bishop
ISBN - 0 19 825057 6 and 9257 7
Publisher - Clarendon Press, Oxford
Price - £50.00 and £16.99
Pages - 410
