This book is intended as an introduction to the subject of artificial neural networks at approximately third-year undergraduate level, with a background of roughly first-year university calculus and linear algebra. For more advanced mathematical material the reader is referred onward to texts such as those by Hertz, Krogh, and Palmer (1990) and Haykin (1994). There is now a plethora of textbooks on neural computing. What was an open field for all comers is now a seriously competitive market. So how does this latest effort contribute?

There is an excellent chronology of historical milestones, from Rashevsky's original 1938 studies of neurodynamics, formulated in terms of differential equations, to Minsky and Papert's results on the limitations of single-layer perceptrons. The remainder of the first chapter introduces much of the notation and terminology required to understand developments in artificial neural networks over the subsequent 30 years.

Supervised learning algorithms for conventional single-layer and multilayer feedforward networks are discussed in Chapters 2 and 3. Chapter 4 describes some variations on these ideas that can speed up training or help optimise a feedforward architecture for a particular problem class, such as modular decomposition of function and pruning. Later in Chapter 4, problems associated with the prediction of time series or the control of dynamic systems are addressed using a combination of feedforward networks, recurrent networks and regularisation theory. Not all this material is of interest to a beginner, but it is well explained.

Unsupervised learning, in the form of winner-takes-all networks, learning vector quantisers, counterpropagation networks, adaptive resonance theory and topologically self-organising networks à la Kohonen, along with several other models, is discussed in Chapter 5.
The rest of the book gives a guided tour through associative memory models, Hopfield networks and Boltzmann machines, predictable but quite nicely done.
The book has a good bibliography, a thorough index, many nice examples and excellent end-of-chapter exercises. The authors make themselves available over the Internet for further discussion. So as an introductory textbook making relatively modest demands on the mathematical abilities of its students, it scores well, and all credit is due to the authors and to the production team at MIT Press. However, given the contents, the book could have been written several years ago and now faces hot competition.
The subject of artificial neural networks is expanding so rapidly that it is hard for an introductory textbook to go from the little-or-no-knowledge level to material which captures the excitement and power of recent developments. In building a conventional feedforward neural network, all one is really doing is constructing a differentiable non-linear input-output model from a particular data set; a model which we hope will generalise well to unseen data. The reason why one might choose a feedforward neural network to construct such a model, rather than Fourier series or some other non-linear universal function approximation scheme, has a lot to do with taste and vogue, and maybe a bit to do with the desire for parallelism. Viewed from this perspective, conventional feedforward neural networks fall naturally into the province of mathematics and the science of parallel algorithms.
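The point is easily made concrete. The following is a minimal sketch, not taken from the book under review, of exactly that view: a one-hidden-layer feedforward network is just a differentiable non-linear model fitted to a data set by gradient descent (the toy data, network size and learning rate here are all illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data set (illustrative): noisy samples of a non-linear target function.
X = np.linspace(-1.0, 1.0, 40).reshape(-1, 1)
y = np.sin(3.0 * X) + 0.05 * rng.standard_normal(X.shape)

# Parameters of a network with 8 hidden tanh units:
# y_hat = W2 * tanh(X W1 + b1) + b2.
W1 = 0.5 * rng.standard_normal((1, 8))
b1 = np.zeros(8)
W2 = 0.5 * rng.standard_normal((8, 1))
b2 = np.zeros(1)

lr = 0.05
for step in range(2000):
    # Forward pass: the model is differentiable end to end.
    h = np.tanh(X @ W1 + b1)          # hidden activations
    pred = h @ W2 + b2                # network output
    err = pred - y
    loss = np.mean(err ** 2)          # mean-squared error on the data set

    # Backward pass: gradients of the loss with respect to each parameter.
    g_pred = 2.0 * err / len(X)
    g_W2 = h.T @ g_pred
    g_b2 = g_pred.sum(axis=0)
    g_h = g_pred @ W2.T
    g_pre = g_h * (1.0 - h ** 2)      # derivative of tanh
    g_W1 = X.T @ g_pre
    g_b1 = g_pre.sum(axis=0)

    # Plain gradient-descent update.
    W1 -= lr * g_W1; b1 -= lr * g_b1
    W2 -= lr * g_W2; b2 -= lr * g_b2
```

Nothing here is specific to neurons: replace the tanh basis with sinusoids and the same fitting procedure yields a Fourier-style approximation, which is precisely the reviewer's point about taste and vogue.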
One can argue a powerful case for neurodynamical simulations, but these are mainly based on differential-equation neural models, rather than the stylised neural model of feedforward networks, in which case the mathematics gets seriously difficult. In either case it is pedagogically unsound to pretend that the subject is not mathematical.
Antonia Jones is professor of evolutionary and neural computing, University of Wales, Cardiff.
Elements of Artificial Neural Networks
Author - Kishan Mehrotra, Chilukuri K. Mohan and Sanjay Ranka
ISBN - 0 262 13328 8
Publisher - Bradford Books, The MIT Press
Price - £32.50
Pages - 384