Building machines that can visualise objects they have never before encountered is a momentous task. But can we do it now?
The simple act of seeing raises difficult questions for those interested in how the brain works. How do we recognise something we have never seen before? How do we visualise something we have never actually seen?
Over the past six or so years, we have come to realise that the prowess of the human visual system goes far beyond anything achieved by conventional computation or by approaches that use artificial neural networks.
A system that simply labels patterns, as most of these do, does not have visual awareness: the ability to recognise what has not been encountered before, or to visualise objects described only in words.
In 1984, together with Brunel University and Computer Recognition Systems of Wokingham, we built Wisard, the first neural pattern-recognition machine. But the glaring difference between the way I feel when I recognise patterns and the way Wisard and other neural pattern recognisers seem to do it soon made clear that "doing it like the brain" needed a new approach. So, in the early 1990s, again with Brunel, we developed Magnus (Multi-automata general neural unit structure). In Magnus we could build many neural automata, run them simultaneously and study how new properties emerge from their interaction.
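The idea of several neural automata running in lock-step and shaping one another's behaviour can be illustrated with a minimal sketch. This is not Magnus's actual code; the class, its random fixed weights and the two-automaton wiring are all illustrative assumptions.

```python
import random

class NeuralAutomaton:
    """A toy neural automaton: a vector of binary 'neurons' whose next
    state depends on its own state and a neighbour's state."""
    def __init__(self, size, seed):
        rng = random.Random(seed)
        self.state = [rng.randint(0, 1) for _ in range(size)]
        # Random fixed weights stand in for learned connections.
        self.weights = [[rng.uniform(-1, 1) for _ in range(size * 2)]
                        for _ in range(size)]

    def step(self, neighbour_state):
        inputs = self.state + neighbour_state
        self.state = [1 if sum(w * x for w, x in zip(row, inputs)) > 0 else 0
                      for row in self.weights]

# Two automata, each feeding its state into the other on every tick,
# so the joint behaviour emerges from their interaction.
a, b = NeuralAutomaton(8, seed=1), NeuralAutomaton(8, seed=2)
for _ in range(20):
    snapshot = list(a.state)   # snapshot so both update simultaneously
    a.step(list(b.state))
    b.step(snapshot)
print(a.state, b.state)
```

Studying emergent properties then amounts to watching the joint trajectory of such coupled state machines rather than either one alone.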
The most recent task tackled with Magnus has been to create a system that can see something, consider it and describe it in words. But above all, the words should also conjure up images. As in the brain, what is "seen" by Magnus is not just a projection of the contents of the retina, but a reconstruction in a special "awareness area".
In developing Magnus, I was influenced by Francis Crick and Christof Koch's "astonishing hypothesis" that anything we are aware of must have an exact representation in the firing of some neurons in our brain. This leads to the conclusion that there are areas of the brain where awareness takes place.
Our model reflects this structure. It is possible to postulate a visual awareness area, model it with supporting specialist neural areas for colour and shape, close the loop on simulated "muscles" that drive an "eye" round a scene stored in the computer that hosts Magnus, and study the behaviour of this system.
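The closed loop just described can be caricatured in a few lines. Everything here is an illustrative assumption, not Magnus itself: a stored scene, a movable "eye", specialist colour and shape detectors, and an awareness area that accumulates a reconstruction as the eye is driven around.

```python
# The stored scene: each location holds a (colour, shape) pair.
scene = {
    (0, 0): ("red", "circle"),   (0, 1): ("blue", "square"),
    (1, 0): ("green", "circle"), (1, 1): ("red", "square"),
}

def colour_net(cell):   # specialist area sensitive to colour
    return cell[0]

def shape_net(cell):    # specialist area sensitive to shape
    return cell[1]

awareness = {}          # the reconstruction, built up over fixations
eye = (0, 0)

for _ in range(len(scene)):
    cell = scene[eye]
    # The awareness area combines the specialists' reports.
    awareness[eye] = (colour_net(cell), shape_net(cell))
    # "Muscles": drive the eye to a location not yet reconstructed.
    unseen = [pos for pos in scene if pos not in awareness]
    if unseen:
        eye = unseen[0]

print(awareness)
```

After the loop, `awareness` holds a reconstruction of the whole scene even though only one location was "foveated" at a time, which is the point of closing the loop on the eye movements.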
The hypothesis that a visual awareness area can solve the problems of visualisation and reconstruction thus becomes a practical objective, and a focus for debate over whether such an awareness area needs to exist at all.
As for the findings: in attempting to name and visualise objects never before seen, we discovered that associations formed between non-aware parts of the neural net, which are sensitive to things such as colour or shape, and their verbal descriptions. That is, such areas could be activated entirely by words. But the visualisation of an object happens in the action of the "awareness" net.
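A hedged sketch of that association mechanism: here the feature "nets" are plain dictionaries binding words to feature patterns, so a purely verbal description can activate them and the "awareness" step can bind the pieces into an image of something never actually seen. The names and patterns are invented for illustration; none come from Magnus.

```python
# Non-aware feature areas, trained by past experience of familiar things.
colour_net = {"blue": (0, 0, 255), "yellow": (255, 255, 0)}
shape_net = {"banana": "crescent", "apple": "round"}

def visualise(description):
    # Words activate the non-aware feature areas directly...
    colour = next((colour_net[w] for w in description if w in colour_net), None)
    shape = next((shape_net[w] for w in description if w in shape_net), None)
    # ...and the "awareness" net binds the activated features into one image.
    return {"colour": colour, "shape": shape}

# An object never seen: a blue banana, conjured up from words alone.
image = visualise(["blue", "banana"])
print(image)   # {'colour': (0, 0, 255), 'shape': 'crescent'}
```

The object as a whole is novel, yet every feature of it was learned separately, which is what lets words alone assemble the visualisation.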
While interesting awareness effects can be demonstrated, even these artificial systems are not yet fully understood. Our next aim is to look at neurocomputational models of emotion.
But is it scary? Give Magnus awareness and emotion, and will it want to establish its own rights? Not possible. Magnus is a non-living object whose best use is to tell us about the mechanisms of living ones. What is scary is the argument that issues of awareness and consciousness cannot be reached by simulation. Some even say it is the mystery of these phenomena that makes them worthwhile. This merely creates obstacles to the discoveries available through digital neurocomputation.
Igor Aleksander, professor of electrical engineering, Imperial College, London.