A quantum leap for computation

Feynman and Computation

September 24, 1999

In this unashamedly digital age, the word computation conjures up the familiar screen on the desk, the ever-slimmer laptop, the internet and the like. But the word has an altogether more elegant and less banal connotation - the representation in machines of the laws that govern our existence, from physics to neurophysiology.

It is well known that the question of what can and cannot be computed in some logical sense exercised Alan Turing's mind in the 1930s. He laid down the principles of assessing general computability by reducing all machines to a simple decision box capable of reading, manipulating and printing symbols stored on a long tape.
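
For the curious reader, the idea can be caught in a few lines of Python. The sketch below is purely illustrative - the rule table, the names and the bit-flipping task are mine, not Turing's - but it shows the decision box at work: read the symbol under the head, print a replacement, move along the tape, change state, halt.

```python
# An illustrative sketch of Turing's "decision box": a finite control
# that reads, rewrites and moves along a tape of symbols.

def run_turing_machine(rules, tape, state="start", head=0):
    tape = dict(enumerate(tape))          # sparse tape; blank cells read "_"
    while state != "halt":
        symbol = tape.get(head, "_")      # read the cell under the head
        state, write, move = rules[(state, symbol)]
        tape[head] = write                # print a symbol
        head += 1 if move == "R" else -1  # step right or left
    return "".join(tape[i] for i in sorted(tape)).strip("_")

# Example rules (hypothetical): flip every bit, halt at the first blank.
rules = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt",  "_", "R"),
}

print(run_turing_machine(rules, "10110"))  # -> "01001"
```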

With this collection, by contrast, Southampton University's professor of computation, Anthony Hey, draws attention to a less well-known debate on the limits of computation. The debate was pioneered in the early 1980s by the exuberant US physics Nobel laureate Richard Feynman, who was intrigued by the representation of the laws of physics in machines. He came to the conclusion that conventional dynamic systems (for example the interaction of particles, cosmology and the behaviour of semiconductors) would not be a problem. But quantum physics, he believed, was an interesting challenge.

The difficulty with quantum physics is that it deals with particles, such as atoms, in which an electron can occupy any of several states of excitation. The little miracle is that, within certain observational frameworks, the electron behaves as if it were in several of these states at the same time. Feynman used to issue health warnings to anyone who tried too hard to relate quantum effects to familiar physical laws:

"Just relax and enjoy it ... Do not keep saying to yourself, 'But how can it be like that?' Nobody knows how it can be like that."

The challenge comes from the fact that conventional computation is based on the building brick of a switch that can be in only one of two states at any one time - open or closed. Even banks of millions of switches can only be in one state (of many) at a time. Feynman speculated that it is not beyond the limits of physics to make computers in which the fundamental brick is a quantum device. Assuming that such devices might cost roughly as much as a switch, it is clear that computers with prodigious memories could be made. Quantum computing is now an established research field, and Hey's book contains an introduction to the work of Richard Hughes of the Los Alamos National Laboratory. But be warned: this is not for the mathematically insecure. Under "basic concepts" it tells us that the states of a usable quantum device "...form an orthonormal basis for the Hilbert space of this qubit (quantum bit)".
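
The contrast can be made concrete in a few lines of Python with numpy (an illustrative sketch of my own, not anything from Hughes's chapter): a qubit's state is a unit vector over the orthonormal basis states |0> and |1>, and a register of n such qubits carries 2**n complex amplitudes - which is where the prodigious memory comes from.

```python
import numpy as np

# A classical switch is 0 or 1; a qubit's state is a unit vector in a
# two-dimensional Hilbert space spanned by the basis states |0> and |1>.
ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

# An equal superposition: "in both states at the same time".
psi = (ket0 + ket1) / np.sqrt(2)

# n qubits live in a 2**n-dimensional space, built as a tensor product.
# Ten qubits already carry 1024 complex amplitudes.
n = 10
register = psi
for _ in range(n - 1):
    register = np.kron(register, psi)

print(register.size)          # 1024 amplitudes
print(abs(register[0]) ** 2)  # probability of measuring all zeros: 1/1024
```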

"Just relax and enjoy it" is good advice. Not to leave it as an exercise in abstraction, Hughes introduces a variety of possible ways in which quantum devices could be built, from laser-cooled trapped ion techniques to nuclear magnetic-resonance devices. The prospect of a quantum computer is going to occupy some good brains for quite some time.

I do not want to give the impression that this is an entirely mathematical book; it is not. It has considerable historical interest. Feynman worked on the creation of courses on physics and computation at the instigation of two eminent colleagues at Caltech, the biologist John Hopfield and the semiconductor engineer Carver Mead. In fact, Hopfield's presence extended the scope of the discussions to the way in which biological systems, notably the brain, might be carrying out a distributed form of neural computation. The trio created a research group that still exists, called Computation and Neural Systems (CNS - purposefully resonant with the central nervous system).

In an early chapter, Hopfield recalls bringing together Feynman and Francis Crick at Caltech's Athenaeum club. Knowing of Crick's interest in the mechanics of the visual system, Feynman asked why it is that the world does not appear blurred when the eyes saccade from one place to another. Crick's first answer was that the visual system blanks out during saccades; but there was greater depth in Feynman's question. Why does the world seem to be solid when the images on the retinae are continuously changing? Crick subsequently wrote The Astonishing Hypothesis, in which he suggested that compensation for eye (and other) movement is a prerequisite for any part of the brain in which our awareness of a solid world is created.

The third member of the Caltech trio, Mead, writes of his freshman days and the way in which Feynman, his professor of mathematical physics, taught him to think conceptually about electromagnetism rather than rely on differential vector algebra. One of the highlights of the book is a reprint of Mead's recent re-examination of electrodynamics as following directly from the interaction of macroscopic quantum systems rather than through the more conventional but awkward route of Maxwell's equations.

This was inspired by Feynman's observation that interactions, instead of evoking forces, can be represented through their power to change wavelengths. Mead's paper and a reprint of Hopfield's 1982 paper on the emergent computational properties of neural networks lend credence to the perspective that some of physics may be viewed as the computational properties of matter.
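
The flavour of Hopfield's 1982 scheme can be caught in a toy Python sketch (the network size, patterns and number of sweeps below are my own illustrative choices, not the paper's): memories are written into a symmetric weight matrix by a Hebbian rule, and a corrupted pattern is recalled by letting the neurons settle into the nearest stored state.

```python
import numpy as np

# A toy content-addressable memory in the spirit of Hopfield (1982).
rng = np.random.default_rng(0)
patterns = rng.choice([-1, 1], size=(3, 64))   # three 64-neuron memories

# Hebbian storage: W[i, j] sums x_i * x_j over the stored patterns.
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0)                         # no self-connections

# Recall: corrupt a stored memory, then let the network settle.
state = patterns[0].copy()
flip = rng.choice(64, size=12, replace=False)
state[flip] *= -1                              # flip 12 of 64 bits

for _ in range(5):                             # a few sweeps suffice here
    for i in rng.permutation(64):              # asynchronous updates
        state[i] = 1 if W[i] @ state >= 0 else -1

print(np.array_equal(state, patterns[0]))      # usually True: memory recalled
```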

The book contains much more besides, not necessarily of the same importance, but nonetheless interesting in a complementary sense. Feynman's passion for making systems exceedingly small (a 1959 lecture called "There's Plenty of Room at the Bottom") is interestingly contrasted with Mead's 1994 assessment of the limits to metal-oxide silicon technology, the technology that underpins the current performance of computers and hence the informational infrastructure of the economy of the industrial world. Feynman's passion turned out to be much more important than even he had predicted.

In summary, Hey's book brings to the fore one very important fact of scientific life. The computing methodology that over a period of about 60 years has changed the world is not the only interpretation of what the word "computing" might mean. Thomas Kuhn was right: no matter how deep the hold of a paradigm, some divergent scientist, like a foraging ant that has strayed from the trail, is out there asking questions about how else it could be done. Feynman was such a man.

Igor Aleksander is professor of neural systems engineering, Imperial College, London.

Feynman and Computation: Exploring the Limits of Computers

Editor - Anthony J. G. Hey
ISBN - 0 7382 0057 3
Publisher - Perseus
Price - £34.50
Pages - 438
