Over the course of an animal's life it will have to make many decisions: who to mate with, when to have offspring, how many to have, and whether to abandon them if its mate dies, trading off energy wasted rearing youngsters in favour of conserving energy for a second brood that might fare better. These life history strategies, as they are known, are complex decisions that animals make unconsciously. How do they do it? There is no decision-making gene for lifetime planning. According to Horst Hendriks-Jansen, author of Catching Ourselves in the Act, ethologists have been unable to solve the dilemma. The question is linked to the problem of our own minds: how do we learn to think? Hendriks-Jansen's complex answer unites the human mind, animal behaviour and robot evolution. His central thesis is that a person or an animal develops thinking and decision-making strategies by interacting with others and with the environment throughout its development. Neither we nor animals contain a blueprint - a set of instructions for how our behaviour will unfold.
For the past 40 years, a computational model of the mind has been used to help us understand decision making. Yet Hendriks-Jansen writes: "No machine is ever likely to produce an adequate explanatory analogy for the human brain or mind." In his opinion, mechanistic models are too limited and simplistic. Artificial intelligence has tried and failed to produce a computer system capable of exhibiting intelligent, flexible behaviour. Computers are adept at formal logic, but lacking in behavioural and psychological traits. It is possible to determine how a computer works without referring to the specific tasks it normally undertakes; our minds, in contrast, are inextricably linked to our bodies and our behaviour. Other parts of the body can be explained using a reductionist, mechanical model: a heart, for example, operates in essentially the same way as a pump. We have grown used to attempting to explain the mind with the same reductionist and mechanical analogies, but a computer, writes Hendriks-Jansen, "is an entity whose meaning is contained in itself".
This is the key to his explanation of human development. Our behaviour emerges; by interacting with others, we are bootstrapped from one developmental stage to the next. Our acts are not intentional to begin with, in the sense that young children do not operate as if they understand what others are thinking, although we certainly treat them as if they, like us, have access to other minds. Hendriks-Jansen draws an analogy with a new breed of robots. Built to resemble insects, their overall behaviour is not programmed, but emerges from a series of low-level behaviours running in parallel and interacting with the environment. For instance, a cockroach-like robot built at the Massachusetts Institute of Technology has a suite of behaviours such as "follow wall", "head for energy source" and "keep walking if no wall present". When these simple instructions run in tandem, the robot avoids obstacles and recharges itself. Its activity cannot be predicted on a step-by-step basis, yet relatively complex, animal-like behaviour emerges.
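The control scheme described above - independent low-level behaviours running in parallel, with higher-priority ones overriding lower ones - can be sketched in a few lines. This is a hypothetical illustration of the general technique, not the MIT robot's actual code; the sensor fields, behaviour names and priority ordering are all assumptions:

```python
# Sketch of priority-arbitrated parallel behaviours (assumed, illustrative).
# Each behaviour inspects the robot's state and either proposes an action
# or abstains; a fixed priority ordering decides which proposal wins.

def head_for_energy(state):
    # Highest priority: seek the charger when the battery runs low.
    if state["battery"] < 0.2 and state["charger_visible"]:
        return "approach_charger"
    return None  # no opinion this tick

def follow_wall(state):
    # Steer along a wall whenever one is detected ahead.
    if state["wall_ahead"]:
        return "turn"
    return None

def keep_walking(state):
    # Default behaviour: always wants to keep moving.
    return "walk"

# Earlier entries override ("subsume") later ones.
BEHAVIOURS = [head_for_energy, follow_wall, keep_walking]

def arbitrate(state):
    """Return the action proposed by the highest-priority active behaviour."""
    for behaviour in BEHAVIOURS:
        action = behaviour(state)
        if action is not None:
            return action

state = {"wall_ahead": True, "battery": 0.9, "charger_visible": False}
print(arbitrate(state))  # -> turn
```

No behaviour knows about the others, yet obstacle avoidance and self-recharging fall out of the arbitration - a small-scale version of the "emergence" the review describes.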
Catching Ourselves in the Act is an admirable attempt to tackle a difficult subject. Hendriks-Jansen's comments on the inadequacy of computational models of the mind ring true. While it is also correct that behaviour emerges as a consequence of interacting with members of the same species and with the environment, this is by no means the whole explanation. One has only to think of cases where the process has been subverted: chaffinches brought up with other songbirds are still able to sing, and they still show the same window for learning songs as fledglings; their songs, however, will be a mutated version of their foster-parents'. Children reared in isolation behave abnormally and often show severe developmental delays. Their behaviour in normal conditions emerges, but there does seem to be a genetic blueprint for development that requires social stimulation to activate it. In the absence of species-specific stimuli, behaviour alters to some extent, but not on a fundamental level. Moreover, analogies from robotics can help clarify some developmental processes, but again, they cannot be the whole story: borrowing principles from animal behaviour will certainly benefit robotics, but our most complex animal-like robots currently possess the intelligence of an amoeba and are unlikely to explain the workings of a monkey's mind, let alone a child's. Hendriks-Jansen has not arrived at a fully fledged solution, although this is unsurprising given the perplexing nature of the topic. His book, however, is densely written and convoluted; it seems suitable only for an elite corps of artificial intelligence enthusiasts, which is a shame given the stimulating ideas he tackles.
In contrast, Artificial Intelligence and Scientific Method is crisp, clear and concise. Instead of examining whether AI is a decent model for the mind, Donald Gillies discusses whether AI is a good model for the process of scientific thinking. "A liquor strained from countless grapes ... ripe and fully seasoned, collected in clusters, and gathered, and then squeezed in the press, and finally purified and clarified in the vat" - thus Francis Bacon describes the heady wine of scientific knowledge. The grapes, so conscientiously gathered, are countless observations, and the resulting clear, pure liquid is the intoxicating brew of scientific discovery. Karl Popper disagreed with Bacon's so-called "inductive reasoning". He argued that we do not randomly collect facts before creating a hypothesis: we generate theories and collect facts according to our beliefs. If the facts do not fit the theory, it is falsified and a new theory is produced. Gillies, by giving examples of scientific breakthroughs, shows that we are impure Popperians. Although almost no one collects data without having made some kind of assumptions, we do tend to use induction, albeit sparingly. But in the imperfect world of scientific thought, if the facts do not fit the hypothesis, we sometimes decide the facts are wrong. "A beautiful theory, killed by a nasty, ugly little fact," lamented the biologist Thomas Huxley, and Francis Crick, co-discoverer of the double helix, commented: "A theory that fits all the facts is bound to be wrong".
Bacon thought that one day we could create a mechanical mind. Now that AI has arrived on the scene, how would a formal logic machine go about solving scientific dilemmas? Armed with superior calculating skills, a prodigious memory, no need to take time off, no biases and no boredom threshold, does AI promise to be the perfect Baconian hypothesis generator? Gillies argues that AI can and does solve problems using induction, but background knowledge is also incorporated. Thus, although mechanical induction is taking place, AI relies on people to provide a Popperian flavour. This, apart from validating Bacon more than 300 years later, should assure scaremongers that computers will never take over the world; they will always need to rely on us - at the very least, we can pull the plug. By examining scientific conundrums, AI should stimulate us to greater creativity and help generate problems which only the illogical and intuitive mind may be able to solve. But to give Bacon the last word, "the art of discovery may advance, as discoveries advance".
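The idea of mechanical induction seeded with human-supplied background knowledge can be illustrated with a toy example. Nothing here comes from Gillies's book: the fruit-classifying task, the `RELEVANT` set and the `induce` function are all hypothetical, chosen only to show a machine generalising from instances while people decide which attributes matter:

```python
# Toy illustration (assumed, not from the book) of induction plus
# background knowledge: the program generalises a rule from labelled
# instances, but only over attributes a human has marked as relevant.

RELEVANT = {"colour", "shape"}  # background knowledge supplied by a person

def induce(examples):
    """Return the attribute-values shared by every positive example,
    restricted to the attributes the background theory deems relevant."""
    positives = [attrs for attrs, label in examples if label]
    rule = {}
    for attr in sorted(RELEVANT):  # sorted for a deterministic result
        values = {attrs[attr] for attrs in positives}
        if len(values) == 1:       # all positives agree: generalise
            rule[attr] = values.pop()
    return rule

examples = [
    ({"colour": "red", "shape": "round", "weight": 120}, True),
    ({"colour": "red", "shape": "round", "weight": 95}, True),
    ({"colour": "green", "shape": "round", "weight": 110}, False),
]
print(induce(examples))  # -> {'colour': 'red', 'shape': 'round'}
```

The induction itself is purely mechanical, but the choice of `RELEVANT` - and the decision to discard the rule if an ugly fact later contradicts it - remains with the human, which is the division of labour the review attributes to Gillies.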
Sanjida O'Connell holds a PhD in psychology and is the author of a novel, Theory of Mind.
Artificial Intelligence and Scientific Method
Author - Donald Gillies
ISBN - 0 19 875158 3 and 875159 1
Publisher - Oxford University Press
Price - £35.00 and £11.99
Pages - 176