Guided Turing

The Legacy of Alan Turing

September 26, 1997

The first volume of this collection of essays by researchers in computer science, philosophy and cognition looks at "machines and thought" against a background of Alan Turing's pioneering ideas.

In his 1950 paper "Computing Machinery and Intelligence", Turing kick-started the field of artificial intelligence. In it he introduced his "Turing test", an "imitation game" in which an interrogator, on the basis of questions and answers, has to tell the difference between a person and a computer in another room. If the interrogator is unable to discern which is which then, according to Turing, the game has answered the question "can machines think?" affirmatively.

Turing's approach sought to separate intelligence from the complications of the conscious/nonconscious dichotomy. Its impact on psychology has been to move the study of cognition towards increasingly computation-oriented models. Turing's central tenet was that thought is the result of computation.

But the Turing test is no substitute for having a theory of mind. Such a theory would give us specific predictions that we could look for. It would take us beyond depending on mere intuition and empathy in determining which entities think. The psychologist Robert French and the philosopher Paul Churchland discuss this, but for the notion of "intelligence", carefully circumventing issues of mind or consciousness. In this they follow Turing.

French and Donald Michie (a pioneer in AI) each discuss how the Turing test might draw out the rich and subtle unconscious associations of human thought. These may be particularly significant if they are responsible for individuating concepts at a basic level, as Christopher Peacocke and Michael Morris consider. But no specific theory is presented. Yet such a theory, applied for example to qualia, might reveal a hidden structure underpinning consciousness.

In 1935, Turing had the idea for a remarkably abstract formalisation of the concept of computation, the Turing machine: a discrete-step device that works according to fixed rules. This idea captured our intuitive notion of computation, forming the archetype of modern computers.
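
By way of illustration (this sketch and its example machine are ours, not the book's), the whole device reduces to a fixed transition table and a head walking an unbounded tape:

```python
from collections import defaultdict

def run(table, tape, state="start", halt="halt", max_steps=1000):
    """Step a Turing machine whose program is the dict
    table[(state, symbol)] -> (new_state, written_symbol, move)."""
    cells = defaultdict(lambda: "_", enumerate(tape))  # "_" is the blank
    head = 0
    for _ in range(max_steps):
        if state == halt:
            break
        state, cells[head], move = table[(state, cells[head])]
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip("_")

# An illustrative machine: flip every bit, halt at the first blank.
FLIP = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}

print(run(FLIP, "10110"))  # -> 01001
```

The max_steps cap is only there to keep the sketch terminating; a true Turing machine's tape and running time are unbounded.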

But Aaron Sloman doubts the appropriateness of Turing machine computation as the key concept underlying intelligence. It is so broad as to pick out trivial things, ones we do not want to consider as intelligent. Even leaves blown about the forest floor form a computation: under appropriate rules, they solve an equation whose output is a description of the pattern the leaves form.

In trying to constrain what processes might count as a computation, Chris Fields thinks that it must involve the physical measurement and interpretation of output. It is this that gives it meaning, he argues. But he fails to see that conscious systems have meaning independent of their system/environment interaction. For example, what you are thinking now needs no behavioural "readout" to establish its meaning. It is fixed internally, separate from any external measurement process. Understanding the mechanisms that do this will be a key issue in future.

J. R. Lucas throws out the notion of mind as machine. In 1961, he presented his stimulating paper "Minds, Machines and Gödel", which later inspired thinkers such as Roger Penrose. Here he attempts to refute his critics. His arguments utilise Gödel's famous incompleteness theorems. These show that algorithms and Turing machines have fundamental limitations: they cannot prove certain of their formulae that nevertheless can be shown to be true by larger systems. Lucas argues that we can always transcend these limitations, and must therefore be more than mere algorithmic Turing machines. But Turing himself pointed out that "our superiority can only be felt on such an occasion in relation to one machine over which we have scored our petty triumph. There would be no question of triumphing simultaneously over all machines. In short, there might be men cleverer than any given machine, but then again there might be other machines cleverer again, and so on."
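
In outline (our notation, not the book's): for any consistent, effectively axiomatised theory T containing basic arithmetic, Gödel constructs a sentence that asserts its own unprovability, which T therefore cannot prove, yet which is true of the ordinary natural numbers:

```latex
% Gödel's first incompleteness theorem, schematically.
G_T \;\leftrightarrow\; \neg\,\mathrm{Prov}_T\!\bigl(\ulcorner G_T \urcorner\bigr),
\qquad
T \nvdash G_T \quad \text{(if $T$ is consistent)},
\qquad
\mathbb{N} \models G_T .
```

Lucas's claim is that we can always "see" the truth of such a sentence from outside, and so outrun any particular machine.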

Lucas's argument depends on the "nonalgorithmic divine sparks" in mathematical thinking. Supposed examples are grasping infinite sequences, Gödel's proof or intuitive leaps. Robin Gandy discusses the experience of a flash of insight: he could see something as true, but not express how, at least not in a rigorous "effective" sense. Loose trains of thought may be involved, such as "It looks rather like that ..." But this does not rule out a mechanical realisation of mathematical thinking, even though the processes generating the experience are not completely open and transparent to the subject.

Turing proved that a Universal Turing machine can emulate the action of any other Turing machine. This led him, along with Alonzo Church, to establish a supposition regarding the nature and scope of computation, the Church-Turing thesis: "A Turing machine is equivalent to any process that can reasonably be called a computation".
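
The sketch above already illustrates this in miniature: the interpreter, run, is fixed, and the machine it emulates arrives as ordinary data. Feeding it a different (again hypothetical) table changes the machine without touching the interpreter, which is the essence of universality:

```python
# A second machine on the same fixed interpreter: `run` plays the
# universal machine, the table the emulated machine's description.
SUCC = {  # unary successor: scan right over the 1s, append one more
    ("start", "1"): ("start", "1", "R"),
    ("start", "_"): ("halt", "1", "R"),
}

print(run(SUCC, "111"))  # -> 1111
```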

Anthony Galton explores the complexities of this thesis. He discusses the way we might equate such a formal notion of a Turing machine with our informal idea of "reasonably computable". This is difficult stuff. The problem arises, in part, because the Church-Turing thesis is not really amenable to proof.

The second volume concerns a current debate. There are those who argue that discussion of real and distinct concepts, beliefs and experience has no real basis. This is because there seems to be no straightforward way of picking out an individual functional role from within a neural network model. Information associated with a given function is distributed piecemeal throughout the spaghetti-like structure. Furthermore, different functions are superimposed.
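
A toy example (ours, not drawn from the volume) shows why picking out an individual functional role is hard: store two pattern associations Hebbian-style in one weight matrix and every single weight carries a piece of both memories.

```python
import numpy as np

# Two associations superimposed in ONE weight matrix, W, as a sum
# of outer products; no individual weight "belongs" to either memory.
a_in, a_out = np.array([1., -1., 1., -1.]), np.array([1., 1., -1.])
b_in, b_out = np.array([1., 1., -1., -1.]), np.array([-1., 1., 1.])

W = np.outer(a_out, a_in) + np.outer(b_out, b_in)

# The input patterns are orthogonal, so each still recalls its own
# output exactly, despite the superposition.
print((W @ a_in) / (a_in @ a_in))  # -> [ 1.  1. -1.]  (a_out recovered)
print((W @ b_in) / (b_in @ b_in))  # -> [-1.  1.  1.]  (b_out recovered)
```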

The most stimulating articles take the optimistic stance: that artificial modelling could give us an insight into the nature of our concepts and beliefs.

Churchland considers how sudden flashes of conceptual insight might occur in artificial neural networks. By exploiting models and analogies, networks could undergo dramatic discontinuities in an otherwise continuous learning path.

Douglas Hofstadter looks at how we apply analogies. This work centres on his copycat program. It deals with a simple universe of letter strings. The program is shown a letter string being changed, for example ABC goes to ABD. It is then asked to do the same thing to XYZ (with the proviso that Z's successor is not A, as in a circular alphabet). A number of alternatives are possible, with some seeming more appropriate than others.
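
A toy enumeration (ours; Hofstadter's copycat uses a far richer architecture of competing perceptual pressures) makes the rivalry concrete:

```python
import string

ALPHA = string.ascii_uppercase

def succ(c):
    """Alphabetic successor, or None at Z (no wrap-around)."""
    i = ALPHA.index(c) + 1
    return ALPHA[i] if i < len(ALPHA) else None

def pred(c):
    """Alphabetic predecessor, or None at A."""
    i = ALPHA.index(c) - 1
    return ALPHA[i] if i >= 0 else None

# Rival readings of "ABC -> ABD", applied to XYZ. The literal rule
# is blocked because Z has no successor, so the others compete.
candidates = {
    "replace the last letter by its successor": "XY" + (succ("Z") or "?"),
    "replace the last letter by D":             "XYD",
    "mirror it: replace the FIRST letter by its predecessor":
                                                (pred("X") or "?") + "YZ",
}
for rule, answer in candidates.items():
    print(f"{rule}: {answer}")
```

The mirrored reading, which treats Z's position at the end of the alphabet as the counterpart of A's at the beginning, yields WYZ, the answer many people find the most satisfying.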

But flexibility requires stability. For concepts and perceptions to be manipulated they must have a fixed inner character. A currency of concepts is no good if it alters willy-nilly or works only in narrow instances. Creating this may be the job of consciousness. We need to move beyond narrow computational tasks to talk of inner states. The anaemic, androgynous term "intelligence", with its emphasis on input and output, may have had its day. The really meaty issues might only come into focus seen through the lens of a theory of consciousness.

Paul Caro is honorary research associate in mathematics, University College London.

The Legacy of Alan Turing: Volume one: Machines and Thought

Editors - Peter Millican and Andy Clark
ISBN - 0 19 823593 3 and 823594 1
Publisher - Oxford University Press
Price - £30.00 ea.vol.
Pages - 297
