Does anyone still believe that computers are "electronic brains"? If they do, the next time they approach their laptops they might consider that even the simple act of word processing is the result of the laborious effort of an army of programmers. But most of what I do and write is the result of something that I have learned at some point in my life; programming does not come into it. So learning is at the centre of an obvious and glaring distinction between brain and computer.
This distinction has been almost entirely ignored by computer scientists. As a minor branch of artificial intelligence, learning has been the interest of only a small minority for some decades. This changed in the mid-1980s with the reappearance of neural network enthusiasts, whose systems are unashamedly based on learning from examples. But their mathematical methods have more to do with physics or control theory than with the logic familiar to the computer scientist, so the field has (mistakenly) been seen as some kind of aberration from the clean world of algorithms. However, whether learning is achieved through logic or neural networks, it should not be believed that the programmer has been let off the hook. She or he still needs to invent the algorithms that bring the little miracle of learning about. It is this art which is the subject of Alan Hutchinson's book: how can a computer be programmed so that it adapts to data and improves its performance? In other words, how can a computer be made to learn?
It is to Hutchinson's credit that he has managed to put the brain-inspired methods of neural systems in the context of other learning methodologies and to have stressed that the distinction between neuron and clever algorithm is not as great as some would like to believe. Hutchinson writes in a compelling and well-organised fashion; it is quite difficult not to be seduced by his enthusiasm. This is the kind of book which will make many academics consider developing a new course on the subject as, with its clear pedagogical attitude and its abundance of questions (not answers) at the end of each chapter, it cries out to be appreciated by both lecturers and students. But perhaps one should not yield too readily to seduction and, instead, ask a few questions about the overall importance of the subject. Are techniques of algorithmic learning central to the computer scientist's ammunition cupboard, or is all this a fascinating curiosity that will not unlock many applications?
Some argue that the importance of algorithmic learning lies in the way it illuminates human learning. Hutchinson wisely prefers to concentrate on computational advantages. It is quite likely that learning algorithms serve only to highlight the vast differences between human and machine learning, so the author's preference is an honest acceptance of this fact. He defines a learning program very broadly as one capable of altering its performance (for the better, of course) in response to the data it is processing. He then argues that this creates new domains of application. The sad fact, however, may be that despite the 30 years or so for which this topic has been studied, the world of computing is still remarkably bereft of learning-algorithm applications. This may explain some of the success of neural computing which, if taken separately from other forms of learning, appears to be enabling new applications at a reasonable rate. This is a curious phenomenon, as neural methods perform rather crude function approximations - cruder than some of the other, logical methods described in the book. But who ever said that the computational world is a subtle place? Nevertheless, the door may not be closed, and books such as this might lead to a revival of interest in the application of algorithmically elegant learning.
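Hutchinson's broad definition - a program that alters its own performance in response to the data it processes - can be made concrete with a minimal sketch of my own (in Python; the function name and parameters are illustrative and not drawn from the book): a program that fits a straight line to examples by gradient descent, improving its predictions purely because it has seen data, not because anyone reprogrammed it.

```python
def learn_line(points, rate=0.05, passes=200):
    """Fit y = a*x + b to (x, y) pairs by stochastic gradient descent.

    A toy instance of learning in Hutchinson's broad sense: the
    program's performance (its prediction error) improves as it
    processes the data.
    """
    a, b = 0.0, 0.0
    for _ in range(passes):
        for x, y in points:
            err = (a * x + b) - y
            a -= rate * err * x  # adapt the slope to the example
            b -= rate * err      # adapt the intercept
    return a, b

# Examples drawn from the line y = 2x + 1; the learned
# coefficients end up close to 2 and 1.
data = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0), (3.0, 7.0)]
a, b = learn_line(data)
print(round(a, 2), round(b, 2))
```

The point of the sketch is only the shape of the loop: performance changes are driven by the data stream itself, which is what distinguishes a learning program from an ordinary one.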
The book is organised in a clear and logical order. The first chapter makes the fundamental point that, because learning has been defined as an adaptation of process to data, it is the integrity of that data which determines the success or otherwise of the learned process. This develops into a definition of various forms of learning. Even in humans, learning to ride a bicycle is a very different activity from learning to be a good manager or learning the lines of a Shakespeare play. So it is for computers: algorithms which learn to solve problems are largely different from algorithms which learn to recognise patterns. Adaptation such as that of a slipper to a foot needs to be distinguished from adaptation of a control system to a new set of parameters. An early chapter is devoted to general concepts such as graphs, search, and probability theory - useful stuff in its own right. This is followed in rapid succession by algorithms that map numbers into other numbers (that is, function finding) and neural networks.
The approach to neural systems is concise, perhaps too concise. Anyone wishing to write programs that simulate such networks would be well advised to read more comprehensive introductory texts which now abound. As the book progresses the accent shifts from learning in a statistical or parametric way to rule-based ways of doing things. The latter are interesting and elegant. They will appeal to adherents of formal methods in computing.
So, ten out of ten to Hutchinson for writing a comprehensive and lively book. But what of the field as a whole? Is it important? Is it fully developed? My personal opinion is that it is probably not only the most important area for the further development of computer science but also one that will go through changes not even anticipated in Alan Hutchinson's book. After all, the collection of algorithms discussed here still leaves untouched a vast collection of phenomena that go under the heading of learning. Perhaps the most important of these is the learning of natural language which, despite a rather pessimistic theorem due to Gold (discussed briefly in the book, and one which stems from an obsession with syntax rather than semantics), is probably the key to the achievement of natural language communication with computers. It is worth watching this particular space.
Igor Aleksander is professor of neural systems engineering at Imperial College, London.
Author - Alan Hutchinson
ISBN - 0 19 853848 0 and 0 19 853766 2
Publisher - Clarendon Press
Price - £.50 and £60
Pages - 434pp