Is Wolfram's theory going to change science, asks Philip Anderson.
Stephen Wolfram is well known in the world of science as a notably brash long-term prodigy who has grown his invention of the software system called Mathematica into a commercially successful company. (Mathematica allows the manipulation of mathematical expressions as easily as conventional programs deal with simple numerical data. I am told it is marvellous to work with.) Now Wolfram reveals what he has been working on for two decades in his spare time. While a company president, he has produced a book requiring, as he boasts, "100 million keystrokes and a hundred miles of mouse travel". The result is A New Kind of Science, lavishly produced by his company, Wolfram Media Inc, and launched with considerable fanfare. He acknowledges a great deal of assistance from his employees, not least with the extensive illustrations.
What is this new kind of science? It grows out of Wolfram's long fascination with a particularly simple type of computer program called a "cellular automaton" (CA). The computer starts with an array of cells extending indefinitely in one or sometimes more dimensions, each with a colour - in simple examples, just black or white. Periodically, the colours are changed, or "updated", according to a rule of dependence on the present colours of the cell and its near-neighbours. For instance, if a cell has neighbours to right and left that are currently black, we may choose to switch it to white; if the left and right neighbours are white, however, we may make it the rule that it stays black; and so on. In the very simplest non-trivial cases, the rule involves only the cell and its nearest neighbours; of these rules there are fewer than 128 independent examples. At first, you would think that nothing exciting could happen in such a trivial game. But in the 1970s, John Conway showed that a 2-D version that he called the Game of Life did some very surprising things.
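The update scheme described here can be sketched in a few lines of Python. The rule number (110, the case conventionally cited as the universal one) and the wrap-around boundary are assumptions for illustration; the review specifies neither.

```python
# Minimal sketch of the one-dimensional cellular automaton described
# above: each cell is 0 (white) or 1 (black), and at every step the
# new colour depends only on the cell and its two nearest neighbours.
# Rule 110 is chosen for illustration; any elementary rule works.

def step(cells, rule=110):
    """Apply one synchronous update to a row of cells (wrap-around edges)."""
    n = len(cells)
    out = []
    for i in range(n):
        left, centre, right = cells[i - 1], cells[i], cells[(i + 1) % n]
        # The three neighbour colours form a 3-bit index into the rule table.
        index = (left << 2) | (centre << 1) | right
        out.append((rule >> index) & 1)
    return out

def run(width=31, steps=5, rule=110):
    """Start from a single black cell and iterate the rule."""
    cells = [0] * width
    cells[width // 2] = 1
    history = [cells]
    for _ in range(steps):
        cells = step(cells, rule)
        history.append(cells)
    return history
```

Starting from a single black cell and printing each row of `run()` reproduces, in miniature, the kind of growing triangular patterns that fill the book's figures.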
The epiphany that led to the present book was Wolfram's discovery in 1984 that among these 128 cases there are a few that make patterns of great and growing "complexity", even when one starts the automaton from very simple patterns, or from a perfectly random array. The complexity of the patterns is not only quite obvious from the many beautiful figures he provides, it is mathematically provable. One of the longer and more difficult demonstrations in the book (in chapter 12) is that at least one of the 128 cases can be used to simulate a Turing machine to carry out any computation that can be done. (Incidentally, Wolfram would seem to have thereby demolished the "intelligent design" argument for a deity, since in principle this Turing machine can design any object it likes, and thus to create a complex object seems to require no external intelligence at all.) The 1984 epiphany focused Wolfram's attention on what he calls the "problem of complexity". How is it that the universe is so complex and various, even containing the ultimate complexity of intelligent life, when the basic quantum field theory that governs it becomes increasingly simple as it is revealed to us? His answer is that the underlying theory provides only a set of "rules" for updating the state of the universe, an iterated mapping of the state on to itself, and that the simplest possible set of such rules is already capable of resulting in arbitrary degrees of complexity if allowed to operate long enough.
That is a remarkable discovery, certainly adequate to justify writing a book expanding on its generalisations and its implications for all kinds of situations. Perhaps the book need not have come to 1,200 pages, but much of the detail is fascinating; the patterns generated by these and various more general automata are beautiful; and the intellectual range of subjects covered is impressive.
But I find myself troubled by the claims made for this particular model: that it leads uniquely to a new science of complexity, and represents the sole correct way of dealing with that science. It is also disturbing to find the claim repeatedly made that the scientific world ignored such problems prior to the Great Discovery of 1984, implying that Wolfram is the sole discoverer of his "new kind of science". The concept of "emergence", for example, of which Wolfram's phenomenon is a beautifully clear example, has been in the evolutionary biologists' vocabulary since the 19th century. Indeed, the evolution of present-day life in all its complexity on an originally lifeless planet is a clear demonstration that complexity can emerge from simplicity. It can hardly be said that the explanation of the origin of life from non-life was a neglected subject before 1984. Emergence also entered the physicists' vocabulary, starting in the 1970s with the idea of broken symmetry and the revival of interest in phenomena far from equilibrium. Although the trend in the physical sciences, and recently in biology, has been toward a naive reductionism, this has not been to the total exclusion of other directions.
To pick out an instance of Wolfram's exaggerated claims of which I have personal knowledge, in one of his notes for chapter one he says: "That complexity... could be studied scientifically in its own right I began to emphasise around 1984", and remarks on his pervasive influence: "A notable example was the Santa Fe Institute (SFI), whose orientation towards complexity seems to have been a quite direct consequence of my efforts." As a member for 15 years of the steering committee of the SFI, which oversaw all scientific activity there, I find that statement hard to swallow. Some of Wolfram's collaborators were indeed in the early group, but others such as John Holland and Stuart Kauffman were as influential as they. Wolfram himself was only one of half-a-dozen speakers at the founding workshops in 1984 who talked about various aspects of complexity; he personally had nothing to do with us thereafter.
A New Kind of Science consists of two parts. The main text - 12 chapters and 850 pages long - is supposedly written for the educated (and diligent) layman and is in narrative form, following more or less Wolfram's development of the ideas as they occurred to him. This main text is meant to be self-contained, requiring no specialised knowledge, and contains no references to other work. The second part consists of about 350 pages of notes, giving some literature references and connections to other ideas and approaches. I often found the notes more enlightening than the main text, which takes a single path through the maze of ideas.
An introductory chapter motivates the book by describing how it came to be written and summarising what Wolfram considers to be the fundamental failures of other methodologies. The following chapters are the technical heart of the book, proceeding from his initial discoveries about the ultra-simple 128 cases through a bewildering variety of generalisations. In each variety of cellular automaton he finds the same general behaviour, which he summarises at the end of chapter six under four class headings.
The first two are trivial: CA class one evolves to a given simple pattern no matter what the initial conditions, ie has a fixed point; while CA class two simply remembers what was fed into it, leading to patterns that are "localised", consisting of static, non-interacting pieces. These two are quite numerous, as is CA class three, which seems to evolve into a totally random state for each initial condition. If I were to use more conventional nomenclature, I would describe class three as ergodic: the state samples all of configuration space, or a very large subspace at least. Wolfram goes to considerable lengths to demonstrate the absolute randomness of the contents of a particular cell for a class-three case, which leads to a presumption, at least, of ergodicity. Class four is small and seems intermediate between the random class three and the orderly classes one and two; it is the class that generates complexity. Its intermediate status has been described by some of Wolfram's collaborators as the "edge of chaos", a phrase that became rather a mantra at the SFI as a discovery of Norman Packard.
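The four-class picture can be made concrete with a hedged sketch. The specific rule numbers below (255, 204 and 30) are standard examples from the cellular-automaton literature, not taken from the review, and the class assignments are illustrative.

```python
import random

# Hedged sketch of the class behaviour described above, using
# elementary rules conventionally cited as examples (an assumption,
# not taken from the review):
#   rule 255 - class one: every input evolves to the same fixed,
#              all-black pattern;
#   rule 204 - class two: the identity rule, which simply remembers
#              what was fed into it;
#   rule 30  - class three: appears to scramble any input into an
#              effectively random state.

def evolve(cells, rule, steps):
    """Iterate an elementary rule with wrap-around boundaries."""
    for _ in range(steps):
        n = len(cells)
        cells = [(rule >> ((cells[i - 1] << 2) | (cells[i] << 1)
                           | cells[(i + 1) % n])) & 1
                 for i in range(n)]
    return cells

random.seed(0)
start = [random.randint(0, 1) for _ in range(64)]

assert evolve(start, 255, 10) == [1] * 64    # class one: fixed point
assert evolve(start, 204, 10) == start       # class two: remembers input
assert evolve(start, 30, 10) != start        # class three: scrambled
```

Class four, by contrast, resists this kind of one-line characterisation; it is precisely the intermediate, complexity-generating behaviour that no simple test captures.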
Chapter seven, on mechanisms, simply describes things that cellular automata can do: make fractals, exhibit chaos, create objects in space and patterns of various kinds. The next two chapters, on "Implications for everyday systems" and "Fundamental physics", will, deservedly, stimulate the most controversy, especially since under everyday systems Wolfram includes all of biology as well as fluid mechanics.
The attempt to model turbulent flow with CAs was very popular for a while, but my impression is that it did not lead to any fundamental new insights. I also believe that Wolfram did not play the unique role in this field that he claims. In biology, a well-known and interesting - but not altogether original - contribution from Wolfram has been to show that simple mechanisms can generate the diverse morphologies of many plants and animals: the leopard's spots, the geometrical patterns of many blossoms, the fractal branching patterns of trees and lungs, the shapes of mollusc shells and so on. But when he goes on to discuss more general issues in biology, he seems to say that the complexity of biological systems arises naturally, and that the role of natural selection is only to prune away unnecessarily complex bits, a view I find simplistic. Even accepting that the CAs are some kind of metaphor for natural processes, only a tiny minority of them (those in class four) evince true complexity - surely it is natural selection that picks these out, not simply the working of natural processes. Also missing from his discussion, somehow, is the crucial step (or steps) by which the CA goes from merely being complex to using that complexity on its own behalf - establishing autonomy, and then exhibiting teleonomy, which François Jacob identified as one of the essential characteristics of life. In simple words, how does the automaton get out of the computer?
The chapter on fundamental physics will be at least as disturbing to physicists as the previous one is to biologists. A long section is devoted to irreversibility. Far from being original and epoch-making (Wolfram's introduction to it claims that "in spite of a century of work, the Second Law's origins remain quite mysterious"), the culminating simulation seems only to illustrate neatly Boltzmann's century-old ideas on the subject. When Wolfram goes into fundamental physics questions, he turns out to have caught the contagion that is all too common among those who have been working digitally for too long, when, for instance, he states: "It is my strong belief that" time and space are to be described by discrete mathematics. I'm sorry, it is my strong belief that the words "it is my strong belief that" don't quite carry the day, here as elsewhere in the book.
This particular conviction stumbles on the fact that the deepest and firmest aspects of the standard model in physics relate to the gauge principle, which makes no sense unless the basic symmetries are continuous ones, not discrete. General relativity, the cornerstone of most attempts to go beyond the standard model, is based on continuous space-time. Those few physicists who do not consider discrete space-time too unlikely may find Wolfram's "revolutionary" suggestions weakly motivated. This long section could be paraphrased as: "It would be nice to find a CA to build time and space with, but I haven't."
The chapter on perception and analysis contains one interesting nugget: the idea that free will could be simply the perceived effect of computational irreducibility. But, not being an artificial intelligence expert, I pass on to the book's last two chapters. Here Wolfram brings forward the notions of universality and a "principle of computational equivalence". He introduces "universality" with a characteristically dismissive wave of the hand towards everything that has gone before: "Universality has in the past never been considered seriously in the natural sciences." In which case, what does he think statistical mechanics has been about for the past 30 years, if not the idea of universality brought forward by Leo Kadanoff in the 1960s? Admittedly, in this field, "universality" has become rather a precise technical term, implying exact identity of numerical descriptors of different states or critical points, but many of us have always had in mind its origin in the neglect of irrelevant fluctuations, leading to identical or similar equations for physically very different systems. Wolfram should note that he has borrowed and generalised the term, using it in a way that is common to, for instance, much work on modelling and simulation of complex systems.
His principle of computational equivalence is, as far as I can see, the idea of classifying the degree of complexity of anything (a human being, an amoeba, a galaxy) according to one of his four classes of CAs. Is the object computationally irreducible - ie, can it be understood any more easily than by just repeating the computation that made it? This has resonances with ideas of Gregory Chaitin, Charles Bennett, Seth Lloyd and Murray Gell-Mann, on measures of complexity; but they are not mentioned even in the notes. I find the principle unsatisfying because it does not deal with the questions of autonomy and teleonomy mentioned above in relation to biology.
Enough of what the book says and why I and others will find much to criticise - how is it for readability? Wolfram writes clearly, if without humour. The exposition is aided by beautifully produced figures without which the book could rapidly become tedious. The notes are full of erudition on myriad fascinating subjects, although their relevance is not always obvious. He is clear about his basic intention - to make the book self-contained and available to any serious reader - though he warns that following every argument may take months or years. His implication is that the effort will be worth it.
I have to close by recalling the great Russian physicist Lev Davidovitch Landau. He ran a world-renowned seminar with something of an iron hand. One of his most famous remarks would surface whenever a speaker expanded on either the world-shaking importance of his work or on the great difficulties he had overcome. Landau would say: "That is an item in your autobiography - get on with the science!" One wonders whether Landau would have sat still very long for this book.
Philip W. Anderson, Nobel laureate, is professor of physics, Princeton University, New Jersey, US.
A New Kind of Science
Author - Stephen Wolfram
ISBN - 1 57955 008 8
Publisher - Wolfram Media Inc, 100 Trade Center Drive, Champaign, IL 61820, US
Price - £40.00
Pages - 1,197