Whitfield Diffie reflects on the life and achievements of Alan Turing, who had an infinitely creative mind but felt he had a finite number of choices.
Alan Turing, like Marilyn Monroe and John F. Kennedy, died in, or at least close to, the prime of life. Fifty years ago, Turing committed suicide, aged 41, after being hounded for being homosexual in an era that could not have imagined gay marriage. We are left, therefore, with the image of a handsome young man capable of running the marathon at almost the top level rather than of an aged professor enjoying the adulation of his intellectual progeny at a colloquium to celebrate his 90th birthday.
Nonetheless, a festschrift is what Alan Turing: Life and Legacy of a Great Thinker is. It is an impressively comprehensive and varied work that combines essays (and even a play) that study the history of Turing's work with papers that present new results and surveys of the areas of research Turing pioneered.
The book is divided into five parts. The first begins with a capsule biography by Andrew Hodges, the author of a biography of Turing longer than this volume, and continues with a short play, some speculation about what Turing might have done had he lived longer, and a sociological analysis of information as a force in modern society. The next two parts are the intellectual core of the book, dealing with computation and artificial intelligence. The fourth part sketches Turing's work on cryptography in the Second World War, and the last part examines the interests of his final years in aspects of what we would now call self-organising systems.
Reading about a man whose work covered such a broad range is never easy and few people other than reviewers will feel obliged to read all 21 contributions. Nor is describing such a book easy. A lot of background would seem to be in order.
The most important new scientific idea in the 20th century - more important than computing or radio or powered flight or nuclear power - was the change in our view of our position in the universe. After 1930, we could be sure there were galaxies outside our own. By the end of the century, the universe was known to be 15 billion years old and as many light years across. At the other end of the scale, we discovered the small. Einstein's doctoral thesis of 1905 was on the dimensions of molecules, which helped to establish atoms and molecules as something real rather than simply an artefact of chemical theory. Today that ambiguous status is enjoyed by strings: tiny building blocks that are as much smaller than atoms as atoms are smaller than solar systems.
The past century has also brought insight into where we stand mathematically, logically, computationally. When David Hilbert, one of the most celebrated mathematicians of his day, laid out his famous set of problems in 1900, he asked for a procedure to determine whether certain equations had solutions of a certain form. He presumably did not think there could be no such procedure. Mathematicians presumed that all problems could, and eventually would, be solved.
The work of Kurt Gödel, Alonzo Church, Turing and others in the 1930s changed all that. What resulted is generally called recursive function theory but is better characterised as the theory of computability. In order to talk about limits on computing, one must have a definition of "compute".
Several were proposed. Remarkably, they all turned out to be equivalent, to give the same answers to a question of the following form: can we be certain that we can compute this in a finite amount of time? The formulation that stuck in both the popular and the specialist mind was the Turing machine.
A Turing machine is perhaps the simplest device that captures all the essential characteristics of the modern computer. It has a central processing unit containing a set of rules. It uses the rules to manipulate the symbols it reads from and writes on its other component: an infinite tape that plays the role of high-speed memory and disk combined. Its input is a finite sequence of symbols written on the tape before the machine starts computing. The answer is the sequence written on the tape when the machine stops. A sequence of symbols is said to be computable if there is some Turing machine that halts with just one symbol on its tape when it is started with a number representing the position of that symbol in the sequence.
So far a Turing machine sounds more like a specialised calculator than a general-purpose computer. Turing's unwillingness to distinguish between the two was his great insight. He showed that there were universal Turing machines that could simulate any other Turing machine if a description of the other machine was written on the tape before the input of the problem.
That description we would now call a program.
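The machine just described is simple enough to sketch in a few lines. The following is my own illustration, not anything from the book: the table of rules passed in as data is, in effect, the "program" that a universal machine would read from its tape, and the simulator itself plays the role of the universal machine.

```python
# A minimal Turing-machine simulator (an illustrative sketch; the rule
# format and the sample machine below are the reviewer's invention).
# Rules map (state, symbol) to (symbol_to_write, move, next_state).

def run(rules, tape, state="start", pos=0, max_steps=10_000):
    """Run a Turing machine; return the final tape, or None if no halt."""
    tape = dict(enumerate(tape))           # sparse tape, blank = "_"
    for _ in range(max_steps):
        if state == "halt":
            cells = [tape[i] for i in sorted(tape)]
            return "".join(cells).strip("_")
        symbol = tape.get(pos, "_")
        write, move, state = rules[(state, symbol)]
        tape[pos] = write
        pos += 1 if move == "R" else -1
    return None                            # step budget exhausted

# Example machine: increment a binary number. The head starts at the
# left end, walks right to the least significant bit, then carries.
INC = {
    ("start", "0"): ("0", "R", "start"),
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("_", "L", "carry"),   # fell off the right end
    ("carry", "1"): ("0", "L", "carry"),   # 1 + carry -> 0, carry on
    ("carry", "0"): ("1", "L", "halt"),    # 0 + carry -> 1, done
    ("carry", "_"): ("1", "L", "halt"),    # overflow into a new cell
}

print(run(INC, "1011"))  # 1011 + 1 = 1100
```

Because the rule table is ordinary data, handing the simulator a different table changes what it computes without changing the simulator at all, which is precisely the point of Turing's universal machine.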
One crucial consequence of the study of computability was the identification of a class of functions that could be computed by Turing machines or a number of other, generally less natural, formalisms. The view that the computable functions correctly capture the notion of mechanical computability is called the Church-Turing thesis.
The theory of computable functions - the Gödel-Church-Turing theory - is crude: it distinguishes the finite from the infinite, no more. In recursive function theory a problem is called solvable or decidable, depending on context, if it is certain the answer can be found in a finite number of steps. Finite does not mean small, or even the biggest number one can think of. The theory does not distinguish between one, the number of atoms in the universe or the number ten raised to the power of the number of atoms in the universe. All are finite and negligible compared with the infinite.
A remarkable number of problems have proven to be undecidable, most notably the halting problem of whether a computer running an arbitrary program on arbitrary inputs will finish its work and announce an answer or will continue to compute indefinitely. Demonstrating the undecidability of the halting problem was the early accomplishment on which Turing built his career.
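The asymmetry at the heart of the halting problem can be made concrete: running a program for some number of steps can confirm that it halts, but no step budget, however large, can ever confirm that it never will. The sketch below uses Python generators to stand in for machines taking steps; the names and the setup are my own illustration, not drawn from the book.

```python
# Semi-deciding the halting problem (an illustrative sketch).
# A "program" here is a generator function; each yield is one step,
# and halting means the generator runs out.

def halts_within(prog, steps):
    """True if prog halts within `steps` steps. False means only
    'has not halted yet' -- it can never mean 'will never halt'."""
    g = prog()
    for _ in range(steps):
        try:
            next(g)
        except StopIteration:
            return True
    return False

def terminates():
    for _ in range(5):
        yield

def loops_forever():
    while True:
        yield

print(halts_within(terminates, 100))     # True
print(halts_within(loops_forever, 100))  # False -- and still False at
                                         # any larger budget

# Turing's diagonal argument shows no perfect halts(p) can exist:
#   def spite():
#       if halts(spite):
#           while True: pass     # predicted to halt -> loop forever
#       # predicted to loop -> halt immediately
# spite halts exactly when halts says it does not: a contradiction.
```

So the halting problem is semi-decidable: a "yes" can always be verified by simulation, but there is no general procedure for the "no" side.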
But, unlike mathematicians such as Norbert Wiener, who applied one important insight to many different problems, Turing had original thoughts in many areas. He is best remembered for the Turing machine, the Turing bombe and the Turing test. There is, however, often some confusion of these concepts, and this volume is an example. The Turing bombe was a specialised computing device employed at Bletchley Park (and later in the US) to attack the German division-level cryptographic machine, Enigma. It does not work like a Turing machine, does not compute with zeros and ones, is in no sense universal and is not the ancestor of the modern computer.
Other work Turing did in the war that does not bear his name does underlie modern computing. He worked on Colossus, the machine built to attack high-level German cryptosystems. Though some call Colossus the first computer, in my view it falls short of this because it was not a stored-program computer. It was not a universal Turing machine: it could not be programmed by changing the zeros and ones on a tape. To change its behaviour one had to replug wires that determined its rules. We might say that every day it was a different Turing machine. But Colossus led to the Manchester Baby, a prime contender for the first true computer. Curiously, this volume only touches on Colossus, and not in the chapters devoted to cryptography.
Turing continued with computing after the war. He combined inspired engineering on computers at the National Physical Laboratory and Manchester University with consideration of computing's future potential. Although there is much argument about whether thought and computing are the same thing, there is little doubt that if computing is not thinking, it is more like thinking than is any other physical phenomenon. This gives rise to another problem of definition: how would we know if a machine is capable of thinking?
Turing's answer, the Turing test, is that a computer should be considered capable of thought or possessed of intelligence if it can converse (via email, for example) so adeptly that its human correspondents are unable to distinguish it from a human being.
In places in these essays, the Turing machine and the Turing test seem to coalesce into the question: can Turing machines think? This question is then interwoven with two others: is the Church-Turing thesis correct? Are the Turing computable functions really all of the computable functions? The discussions of the prospects for hypercomputation dwell on the well-known facts that Turing machines do not make very good computers and that recursive function theory is far from a theory of every possible aspect of computing, neither of which bears on whether there are buildable machines that compute a larger class of functions. Two essays on this subject, in which the score is Authors 100, Straw Men 0, are rebutted in a piece by Martin Davis, which sums up their results as "little more than the obvious comment that if non-computable inputs are permitted, then non-computable outputs are attainable".
Interspersed with these essentially philosophical essays are some of a more technical nature. I particularly enjoyed Michael Beeson's discussion of the mechanisation of mathematics; and blunted my mind on one titled "Implementation of a self-replicating universal Turing machine".
Although this collection makes for fascinating reading and belongs next to the Hodges biography on the shelf of any Turing fan, it leaves me with a keen sense of longing. It deals with profound questions, yet its arguments are generally unpersuasive. Much of it follows a style of reasoning I associate more with theologians, literary critics and communists than with scientists and mathematicians. The contributors look to Turing's writings not for historical insight, or even for inspiration, but for justification of their own positions by repeatedly analysing the original texts to determine what Turing really said or really meant. The final impression is of a group of people who are not so much scholars in fields to which Turing gave birth but scholars of Turing himself.
Whitfield Diffie is chief security officer, Sun Microsystems, and one of the inventors of public-key cryptography.
Alan Turing: Life and Legacy of a Great Thinker
Editor - Christof Teuscher
Publisher - Springer
Pages - 542
Price - £46.00
ISBN - 3 540 20020 7