Contrasting approaches to simulated mutations and neural networks are transforming artificial intelligence
EDINBURGH University's mobile robot group believes hands-on experience is essential in the design of "intelligent" machines capable of playing rugby and sumo wrestling. Its 15 PhD and MSc students, who form the largest research group in the university's department of artificial intelligence, are expected to build the robots themselves, designing electronics systems and computer-powered brains for a specially constructed chassis.
"Computer simulation would be easier, but you don't get to grips with real problems," says lecturer Gillian Hayes. "For example, if you have a wheel at either side, you might program the robot to go in a straight line, but it veers off to one side because the gears are wearing out at different rates."
The mobile robots, fitted with infrared sensors similar to those used in burglar alarms, already have a multitude of skills. They can play rugby, grasping a ball, running with it and dropping it over a line, while tackling any other robot which happens to get in their way. They have also indulged in some sumo wrestling, trying to push one another out of an arena.
"This allows you to find good strategies for getting the robots to move around without destroying themselves or each other, testing out their sensors in lots of different situations," says Dr Hayes.
But the robots' abilities have now taken a dramatic new turn with the development of communication skills. The machines can learn from one another, rather than being programmed to perform particular moves.
Aude Billard, one of Dr Hayes's PhD students, has been carrying forward work on learning by imitation, in which a student robot learns words by following its teacher.
Dr Hayes said: "There are not many robotics groups working on language. Practically speaking, it would be useful to have robots agree a vocabulary between themselves in terms of their own perception, because maybe they will be able to distinguish between situations when we can't. It is quite difficult to try to put yourself behind robot sensors and figure out what the world looks like to them."
Ms Billard's student and teacher robots each have sensors allowing them to avoid obstacles, light detectors enabling them to detect the other robot, and a radio transceiver allowing them to "talk", the "words" being radio signals for "stop", "move", and the four points of the compass.
The student robot is programmed to track its teacher, which is in turn programmed to wander randomly. Whenever the teacher detects the student behind it, it sends a radio signal telling the student to stop, together with a compass-point signal giving its heading.
It then starts to move again, telling the student to follow it. In another experiment, the student follows the teacher up and down a hill.
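The teaching exchange described above can be sketched in a few lines of Python. This is a minimal simulation under stated assumptions, not the group's actual control software: the class name, the signal words and the detection flag are all hypothetical, standing in for the robots' real radio and infrared hardware.

```python
import random

# The six "words" in the robots' radio vocabulary, as described in the article.
COMPASS = ["north", "south", "east", "west"]
WORDS = ["stop", "move"] + COMPASS


class Teacher:
    """Wanders randomly; when it detects the student behind it, it signals
    'stop' plus its current compass heading, then 'move' to set off again."""

    def __init__(self):
        self.heading = random.choice(COMPASS)

    def step(self, student_detected):
        """One control cycle. Returns the radio signals sent this cycle."""
        if student_detected:
            # Tell the student to stop, name the heading, then move off again.
            return ["stop", self.heading, "move"]
        # Otherwise keep wandering: pick a new random heading, send nothing.
        self.heading = random.choice(COMPASS)
        return []
```

A run might call `teacher.step(True)` whenever the student's light is detected, broadcasting the returned words over the radio link for the student to associate with its own sensor readings.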
"When the teacher transmits a particular signal, the learner then associates it with what it's doing and seeing at the time, whether it's moving, whether it's stopped, whether it sees a light," says Ms Billard. The learning is carried out by an artificial neural network. And since the robot "learns" by the consistent association between the radio signals and what its own sensors are sensing, it need not be the same type of robot as its teacher.
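The association Ms Billard describes can be illustrated with a simple Hebbian-style co-occurrence table rather than a full neural network. This is an illustrative sketch only: the class, method names and sensor-state labels are assumptions, and the real system uses an artificial neural network over raw sensor values.

```python
from collections import defaultdict


class AssociativeLearner:
    """Each time a radio 'word' arrives, strengthen its link to whatever
    the robot's own sensors report at that moment. Because learning is
    grounded in the learner's own perceptions, the teacher's body shape
    and sensors are irrelevant."""

    def __init__(self):
        # (word, sensor_state) -> association strength
        self.weights = defaultdict(float)

    def observe(self, word, sensor_state):
        """Hebbian update: co-occurrence strengthens the association."""
        self.weights[(word, sensor_state)] += 1.0

    def meaning(self, word):
        """The sensor state most strongly associated with a word."""
        pairs = [(s, w) for (wd, s), w in self.weights.items() if wd == word]
        return max(pairs, key=lambda p: p[1])[0] if pairs else None

    def say(self, sensor_state):
        """Inverse lookup: the word the robot would 'say' in a situation."""
        pairs = [(wd, w) for (wd, s), w in self.weights.items()
                 if s == sensor_state]
        return max(pairs, key=lambda p: p[1])[0] if pairs else None
```

The `say` method mirrors the article's observation that after training the student itself utters the relevant word when it finds itself in a particular situation: consistent co-occurrence makes the lookup work in both directions.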
"It can have a different body shape and different sets of sensors. It learns in terms of its own perceptions," says Dr Hayes.
Ms Billard says the student learns the vocabulary in half an hour, itself saying the relevant "word" when in a particular situation. A simple vocabulary could be all that is necessary for a robot vacuum cleaner or wheelchair, avoiding the need for complex grammar or syntax which would require far more computational power and programming work.
"Since the word association is based on sensory input, you could use any language, German, English, or even just a sound," Ms Billard says.