'Much coming from little'

May 7, 1999

The many ins and outs of dynamic models impress Kenneth Arrow.

John Holland has given an important direction to artificial intelligence (AI). One line of study in this area and one much used in practice is to assume that there are large bodies of knowledge that are, however, confined to a few places or people ("experts"). The experts know a great deal, but their knowledge is not codified in a way that can be used without their direct participation. Since they are scarce, the use of this knowledge is necessarily limited. The "expert systems" approach seeks to codify this knowledge and therefore make it generally useful by asking the expert a sufficiently large number of questions. This approach has been used widely in, for example, medical diagnosis.

Holland has pioneered in a different direction. This is to expose the AI system to experience and to let it learn. "Learning" can be interpreted as letting ideas and principles that "work well" be given greater weight in future applications, while those that do poorly are given less weight or extinguished. The process can easily be regarded as analogous to evolution, in which species grow and diminish in population size according to how they do or do not fit well with the environment. To complete the analogy with evolution, we need one or more mechanisms for introducing new principles of action into the repertoire for selection by survival to work on. In evolutionary theory, these mechanisms are mutation of genes and recombination of chromosomes.

Basing himself on this analogy, Holland introduced the "genetic algorithm" as a general problem-solving device. It was one in which experience modified original beliefs to produce gradually improving solutions. The intended applications are practical decision problems, such as medical diagnosis or decisions about the creditworthiness of potential borrowers. The algorithms are usually tested on standard mathematical problems that are known to be complex in a well-defined sense, such as requiring a number of solution steps that increases exponentially with some measure of the scale of the problem.
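To make the analogy concrete, here is a minimal genetic algorithm of my own sketching (it is not Holland's code, and the toy "all ones" fitness target is chosen purely for illustration): a population of candidate solutions is selected in proportion to fitness, recombined, and occasionally mutated.

```python
import random

def genetic_algorithm(fitness, length=20, pop_size=30, generations=60,
                      mutation_rate=0.01, seed=0):
    """Minimal genetic algorithm: selection, crossover, mutation."""
    rng = random.Random(seed)
    # Initial population of random bit strings.
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        # Fitness-proportional ("roulette wheel") selection of parents:
        # principles that "work well" are given greater weight.
        weights = [fitness(ind) + 1e-9 for ind in pop]
        parents = rng.choices(pop, weights=weights, k=pop_size)
        # Recombination: single-point crossover, echoing chromosome exchange.
        children = []
        for a, b in zip(parents[::2], parents[1::2]):
            cut = rng.randrange(1, length)
            children += [a[:cut] + b[cut:], b[:cut] + a[cut:]]
        # Mutation: occasionally flip a bit, introducing novel material
        # for selection to work on.
        pop = [[bit ^ 1 if rng.random() < mutation_rate else bit
                for bit in child] for child in children]
    return max(pop, key=fitness)

# Toy problem: the fittest string is the one with the most ones.
best = genetic_algorithm(fitness=sum)
```

Each piece of the sketch answers to one element of the evolutionary analogy: selection to differential survival, crossover to recombination, bit-flips to mutation.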

The genetic algorithm is not the only algorithm based solely on feedback between the data and the model at any moment of time. Probably more used today is the neural network. The genetic algorithm is motivated by an analogy with biological evolution, the neural network by an analogy with the mammalian central nervous system (CNS). Linkages are formed between different nodes of a network (analogous to neurons), and these are strengthened or weakened in accordance with reinforcement due to success or inhibition due to failure. The computer simulation of neural networks originated partly in an attempt to better simulate the CNS, but it has become a computing method for AI in its own right.
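The strengthening and weakening of linkages can be seen in miniature in a single artificial neuron; the sketch below (mine, not from the book) trains one neuron to compute logical AND by the classic perceptron rule, nudging the weights on its input links up or down according to success or failure.

```python
# A single artificial "neuron" learning AND: weights on its input links
# are strengthened or weakened according to the error on each example
# (the perceptron learning rule).
def train_perceptron(samples, epochs=20, rate=0.1):
    w = [0.0, 0.0]   # strengths of the two input links
    b = 0.0          # bias (firing threshold)
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out          # +1: strengthen links; -1: weaken
            w[0] += rate * err * x1
            w[1] += rate * err * x2
            b += rate * err
    return w, b

# Truth table for AND: fires only when both inputs are active.
AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)
```

No weight is set by hand; the correct behaviour arises entirely from the repeated feedback of success and failure.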

Genetic algorithms and neural networks are examples of complex dynamic systems. The systems evolve over time in the sense that the state of the system at one time, together with some inputs from outside the system, determines the state of the system at the next time instant (time unfolds discretely in most of these models). There are typically a large number of variables, but the individual rules that govern the transition from one state to the next may be relatively simple. The observable outcomes of complex dynamic systems are sequences of states, each state having a relatively complex description. Frequently, we observe patterns in these outcome sequences, patterns that are not simply describable in terms of the elements of the system but which nevertheless give the observer some degree of understanding and perhaps predictability. Meteorological theory gives a very detailed molecular description of the movements of the atmosphere and its energetics, but the outcomes yield many descriptions at a much more aggregate level, thus fronts and low- and high-pressure systems. Some regularities appear in the changes of these broader aggregates. Phenomena of this kind are said to be "emergent" - "much coming from little," to quote Holland's description.

This book is an exposition for a general audience of emergence from complex dynamic systems. It does not in principle require any technical knowledge, just the ability to follow sophisticated chains of reasoning. It must be said that this book is demanding. It must also be said that the rewards of understanding it are great. It builds on careful discussions of special cases: a program for learning to play checkers and neural networks based on biological hypotheses about the CNS. It uses these cases to develop a general approach to the development of dynamic models with feedback, indeed, models in which the very structure of the model is itself changed by the outcomes at successive stages.

Holland has been very much influenced by his work with the pioneer Arthur Samuel, who developed a program for learning how to play the game of checkers (or draughts) well. Checkers is a game that can be formalised in the theory of games developed by John von Neumann and Oskar Morgenstern (1944). In principle, each player in a game can be thought of as choosing a "strategy". By this is meant a rule designating what each player will play when he or she has a turn; what is played, of course, depends on the rules of the game, which define the legal moves available, and on the situation in the game insofar as the player knows it. In a game such as checkers or chess, the situation is simply defined by the location of the pieces on the board. As von Neumann and Morgenstern showed, there is, in a certain sense, a best strategy for each player. More exactly, each player has a strategy that is best provided the other player plays his or her strategy in the pair. The game is therefore, in principle, completely solved. But in fact no one knows the solution, and computing the solution for chess or even checkers is still far beyond all the computing power available to us today.

In the 1950s, Samuel created a program that would direct the play of one player, assuming the other player follows a parallel strategy. To describe all possible states of the board is already a task beyond practical computer capability; so instead, certain features of the state of the board are singled out (how many pieces, how many kings, and certain measures of their location). For each feature, there is a measure of its value, and these values are added over all features. One then makes a move to make the value of the board after the move as large as possible; or, more sophisticatedly, one thinks ahead several steps. The problem is to choose the values for each feature. These are modified by experience according to certain rules. It proved possible to use this program to play at a level that would beat experts (including Samuel himself). This phenomenon is certainly a kind of emergence.
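The skeleton of the scheme can be shown in a few lines. This toy version is my own illustration, not Samuel's program: the feature names, weights and move names are invented, and the update rule is only the general idea of shifting weights toward reducing the gap between a position's predicted value and its eventual outcome.

```python
# Toy version of a Samuel-style evaluator: a board is summarised by a
# few features, each with a weight; the player makes the legal move
# whose resulting board has the largest weighted score.
def score(features, weights):
    # Linear evaluation: sum of weight * feature over all features.
    return sum(w * f for w, f in zip(weights, features))

def choose_move(moves, weights):
    # `moves` maps each candidate move to the feature vector of the
    # board it produces, here (piece advantage, king advantage,
    # centre control) -- all hypothetical features.
    return max(moves, key=lambda m: score(moves[m], weights))

def update(weights, features, outcome, predicted, rate=0.05):
    # Experience modifies the weights: shift each one so as to reduce
    # the gap between the position's predicted value and the outcome.
    err = outcome - predicted
    return [w + rate * err * f for w, f in zip(weights, features)]

weights = [1.0, 3.0, 0.5]            # kings weighted most heavily
moves = {"advance": (1, 0, 1), "trade": (0, 0, 2), "crown": (0, 1, 0)}
best = choose_move(moves, weights)   # picks "crown": 3.0 beats 1.5 and 1.0
```

Nothing in these rules mentions checkers strategy; expert-beating play emerges from the repeated adjustment of a handful of numbers.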

Holland also discusses rules for neural networks based on some ideas of neuroscientists. Among other things, he shows how a few simple rules lead, for example, to the possibility of indefinite memory, though one would not have guessed that from the individual rules. He then sets forth a general formulation that covers all of these and many other models. A "mechanism" is defined by a set of possible initial states, a set of possible inputs, and rules that determine the state next period from the initial state and the inputs. A "constrained generating procedure" (CGP) is a collection of mechanisms in which the states of some are inputs to others. The structure of the procedure depends on these links; they define the local connections that drive the procedure. The typical CGP has relatively few connections but these, directly and indirectly, induce relations that give rise to the characteristic phenomena of the procedure. The outputs have a structure that can be said to be emergent.
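One minimal reading of this definition can be written down directly; the sketch below is my own gloss, not Holland's formalism. Three one-bit mechanisms in a ring each combine their own state with a neighbour's state by exclusive-or - the particular rule is arbitrary; the point is only the shape of the definition: states, links, and a transition rule.

```python
# A minimal constrained generating procedure: each mechanism has a
# state and a rule mapping (state, inputs) to the next state; the
# links feed the states of some mechanisms in as inputs to others.
def step(states, links, rule):
    # Every mechanism updates simultaneously from the current states.
    return {name: rule(states[name], [states[src] for src in links[name]])
            for name in states}

rule = lambda state, inputs: state ^ inputs[0]   # transition rule (XOR)
links = {"a": ["c"], "b": ["a"], "c": ["b"]}     # who feeds whom: a ring
states = {"a": 1, "b": 0, "c": 0}                # initial states

history = [states]
for _ in range(4):
    states = step(states, links, rule)
    history.append(states)
```

Run forward, the three local rules lock the system into a repeating global cycle - a pattern that no individual mechanism's rule mentions, which is the sense in which the output structure is emergent.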

Holland goes on to generalise further, so that the links themselves can be altered as determined by the states of the individual mechanisms. He also illustrates further by examples and applications that cannot be easily summarised here.

He even ventures into the realms of metaphor and innovation, to show how these two can be conceptualised within his general framework of CGPs. But as he emphasises over and over, none of these steps can be regarded as mechanical. Originality, insight and art are essential. Many of his incidental remarks, such as his discussion of Maxwell's thinking, show the depth of the insights he has gleaned, and in themselves make the book worthwhile.

Holland sees many difficulties with the possibility of progress but always has an optimistic attitude towards their eventually being overcome. I must say that I am now more pessimistic about the possibility of indefinite growth of knowledge. A complex dynamic system produces complex outputs. It is true that often, perhaps surprisingly often, these outputs can be described in simple ways, though with a new vocabulary different from that used in describing the underlying system. But it appears to me that very often the outputs cannot be simply described, just as weather forecasting beyond statistical norms seems to be impossible over a period of a week or more. Chaos, as well as emergence, characterises a complex interacting world.

Kenneth J. Arrow is emeritus professor of economics and emeritus professor of operations research, Stanford University, California, United States.

Emergence: From Chaos to Order

Author - John H. Holland
ISBN - 0 19 850409 8
Publisher - Oxford University Press
Price - £22.99
Pages - 258
