Are friends electric?

Could robots offer our ageing populations care in their dotage, even love? Can machines genuinely become social beings? Will androids one day dream of electric sheep? Kathleen Richardson examines the history and development of the robot and evaluates the possibilities

June 9, 2011





"How often do I have to repeat this argument? There are not enough trained, affordable and available people!" So says Maja Mataric, a robotics professor at the University of Southern California, when explaining the rationale for building robots to assist humanity socially in the future.

"If every single young person 20 years from now takes care of an old person, there's still not enough people to look after all of the old, not to mention those with lifelong developmental disorders. There are just not enough people. This is not about people versus robots - it's never about people versus robots."

According to the UK's Office for National Statistics, by 2034 the number of Britons aged 85 and over is projected to more than double to 3.5 million and account for 5 per cent of the total population. Meanwhile, the proportion of young people in the UK is falling steadily, with projections suggesting that under-16s will account for 18 per cent of the population by the same date.

In Japan, a predicted demographic time bomb has prompted both businesses and the state to invest millions of yen in the robotics industry, with mechanical solutions to coping with its growing elderly population preferred to non-Japanese carers. Indeed, robots are second only to one's own children as preferred providers of old-age support in Japan.

What seems to underscore Japan's fascination with robots is not positive social demand, but a government- and industry-led programme that is essentially anti-immigration in intent.

According to Jennifer Robertson, professor of anthropology and the history of art at the University of Michigan, Ann Arbor, "robotics in Japan is both pro-natalist and anti-immigration. The Japanese government does not see increased immigration as a desirable solution to the labour crisis caused by a declining birth rate and an ageing population. There is the concomitant hope that humanoid robots will free women from domestic and caring duties so that they will be more willing to reproduce."

This preference for automata over people is reminiscent of the first imaginings of the robot. Although the concept of artificial life has an ancient lineage, the robot makes its cultural debut in the 1921 play R.U.R., by the Czech writer Karel Čapek (see box, below).

R.U.R. (which stands for "Rossum's Universal Robots") is about the manufacture of the machines and is set in an imaginary world where robots do all the work, leaving people uncertain of their place in society.

Whereas Čapek's play critiqued contemporary politics' obsession with labour and production (indeed, the word "robot" derives from the Czech "robota", meaning forced labour), today the vision of the robot is of a different sort: rather than a working drone, there is a greater emphasis on its being "social", "interactive" and "socially assistive". We are not talking about the robot as a mediating entity, but as the object of a social relation. This is a significant shift.

A look at artist Jasia Reichardt's fascinating book Robots: Fact, Fiction, and Prediction (1978) reveals that humanoid robots are nothing new. Attempts to build human-like robots date back to at least the late 1920s (although the concept is older still), when Captain W.H. Richards demonstrated "Eric" at the annual exhibition of the Model Engineers' Society in London. Eric could stand, bow and turn its head from side to side.

Despite ambitious earlier efforts to build robots, they were not used by industry until 1961, when the Unimate robot arm was first used by General Motors in the US.

Twentieth-century (non-fiction) robots were largely conceived as working devices, either in the home or the workplace. In some quarters of society, the systematic reduction of labour through mechanisation was a desirable aim for the modern world. One vision was connected with women and domesticity: 1970s socialist feminism envisaged mechanical replacements for women in the home, freeing them up to work or engage in more leisure activities. In the late 1970s, for instance, Quasar Industries in New Jersey announced the launch of a household robot that, it was claimed, could mop floors, mow lawns and carry out simple cooking tasks.

Whereas these earlier robots were intimately connected to inspiring visions of how technology would improve people's lives, contemporary robotics is galvanised by a sense of crisis in how to cope with ageing populations and people exhibiting congenital or late-onset developmental disabilities, combined with a shift in reimagining the nature of being and the boundaries between humans and machines.

The emergence of "social" robots at the end of the 20th century represented a departure in the way machines are considered. Such robots are imagined to act in place of people in particular social roles: as companions, therapists, nurses, nannies...even sexual partners.

To make this vision a reality, roboticists are placing greater emphasis on the physical appearance of machines. If robots are to be social, they must perform in "social" ways - that is, they must have faces and bodies that resemble people's, or provide a convincing simulacrum.

For example, Honda's Asimo robot looks like a small boy dressed in a space suit. Asimo's face is largely featureless, but its body can be used to express thought, intention and emotion. It can wave and dance and it has an automated voice. Asimo can do none of these things independently, but its abilities still seem very impressive.

Some social robots have bodies and are fully humanoid; others have body parts built on blocks or wheeled bases, or attached to specifically created structures (designing legs that work is incredibly complex). Some social robots are not humanoid at all, such as the seal robot, Paro, a robotic companion for the elderly used in Japan, Denmark and Germany.

So, roboticists are self-consciously striving to build new robotic companions. One example of this goal is expressed by the European Commission's Living with Robots and Interactive Companions (LIREC) project. The group expresses its aims thus: "The challenge: building long-term relationships with artificial companions. How do we create a new computer technology that supports long-term relationships between humans and synthetic companions? To date, artificial companions have had limited abilities to support long-term, meaningful social interactions with users in real social settings."

It continues: "The LIREC network aims to create a new generation of interactive, emotionally intelligent companions that is capable of long-term relationships with humans."

Why would anyone want to build a machine that is capable of "long-term relationships with humans" as a goal in itself?

Kateryna Maltseva, a cognitive anthropologist at the University of Connecticut who researches social norms and the evolution of cooperation, says: "Given the peculiarities of human social perception, and taking into consideration the never-diminishing human need for companionship, when it comes to evaluating the potential of robotic companions, the cost-benefit analysis seems to be slightly in favour of making this exploratory effort. On a personal note, the idea is so lovely: there would never be lonely people."

The interest in relational machines actually featured in a different area of engineering in the 1960s: computing. Joseph Weizenbaum unveiled the computer program Eliza in 1966; its best-known script simulated a therapist. Eliza worked by turning users' statements back at them as questions.
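The pattern-and-reflection mechanism behind Eliza-style programs can be sketched in a few lines. The rules below are illustrative inventions, not Weizenbaum's original script:

```python
import re

# A minimal Eliza-style exchange: pattern rules that reflect the user's
# own words back as questions. The rules here are illustrative only.
RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
]
# Swap first- and second-person words so the echoed phrase reads naturally.
REFLECTIONS = {"my": "your", "me": "you", "i": "you", "am": "are"}

def reflect(phrase):
    return " ".join(REFLECTIONS.get(w, w) for w in phrase.split())

def respond(utterance):
    text = utterance.lower().strip(".!?")
    for pattern, template in RULES:
        match = re.match(pattern, text)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please go on."  # default prompt when no rule matches

print(respond("I feel lonely"))  # Why do you feel lonely?
print(respond("I am tired"))     # How long have you been tired?
```

The trick, as Weizenbaum himself stressed, is that the program understands nothing: a handful of surface patterns is enough to sustain the illusion of a listener.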

The ease with which (some) people would interact with Eliza led Weizenbaum to assess his own morality as a computer scientist and to reflect on therapy, writing: "What can the psychiatrist's image of his patient be when he sees himself, as therapist, not as an engaged human being acting as a healer, but as an information processor, following rules, etc?"

Weizenbaum sounded a warning about the role of both therapist and patient in a computer-mediated system.

In other ways, too, artefacts are deliberately designed to evoke a sense of intelligence and sentiment that aims to encourage users to have direct relationships with machines. One such relational artefact is the digital pet, tamagotchi, created in 1996.

The tamagotchi is an egg-sized virtual pet that the user must "feed" and tend to keep alive. If the user neglects the tamagotchi's demands for attention, the virtual creature "dies".

Tamagotchis revealed the extent to which users felt a responsibility to keep their machine "pets" virtually alive. I have seen the evidence for myself, with adults and children becoming very attached to them.

Although the tamagotchi's popularity in the late 1990s might be seen as a craze, that does not explain the shift that people seem to feel more generally in relation to machines. The Japanese are often characterised as "robot-mad", but I have found a liking for the machines, particularly cute ones, among adults and children of all ages in the UK and the US, too.

Another interesting feature of social robots is the question of who is doing the socialising. The now discontinued Aibo robotic dog, designed by Sony, was one of the first popular robots whose programming allowed it to recognise the face of its owner. But it also featured mechanisms that allowed each Aibo to send and receive signals from other Aibos within a certain range and respond by playing or barking.

By programming this kind of action, Aibo dogs appeared to be able to recognise and socially interact with others of their kind, showing that their "social" needs could be met by humans and other machines.

But could roles that require sociality (that is, communicative interaction, social collaboration or physical touch) be carried out by machines?

It depends in part on what model of sociality you use. Some roboticists I have interviewed view social interaction as a set of behaviours that are performed in certain sequences, and such "scripts" can then be programmed into robotic devices.
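The "script" model of sociality that these roboticists describe can be illustrated with a toy example, using hypothetical event and action names: social interaction reduced to a fixed sequence of behaviours, each triggered by an expected cue.

```python
# A toy illustration of sociality-as-script: a greeting reduced to a
# fixed sequence of (expected event, scripted action) pairs. All names
# here are hypothetical, for illustration only.
GREETING_SCRIPT = [
    ("person_detected", "turn_towards_person"),
    ("eye_contact", "smile"),
    ("spoken_greeting", "wave_and_say_hello"),
]

def run_script(events):
    """Step through the script, emitting an action for each expected cue."""
    actions = []
    step = 0
    for event in events:
        if step < len(GREETING_SCRIPT) and event == GREETING_SCRIPT[step][0]:
            actions.append(GREETING_SCRIPT[step][1])
            step += 1
    return actions

print(run_script(["person_detected", "eye_contact", "spoken_greeting"]))
# ['turn_towards_person', 'smile', 'wave_and_say_hello']
```

The limitation is visible in the code itself: any cue that falls outside the expected sequence is simply ignored, which is precisely the objection critics of the script model raise.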

According to Rodney Brooks, emeritus professor of robotics at the Massachusetts Institute of Technology, social behaviour is about what you see on the surface: if a robot is behaving in a particular way, for example smiling or frowning, then people who interact with it will ascribe to it feelings, intentions and emotional states.

But is it enough to design a machine that looks and behaves as if it were social? Leslie C. Aiello, president of the Wenner-Gren Foundation for Anthropological Research and emeritus professor of biological anthropology at University College London, is sceptical. After all, dialogue, communication and social interaction are not one-way streets.

"Even if you could design robots to behave 'socially' correctly, giving the right social cues at the right moment, there would still be a question of 'honesty'," she says. "People give and take in relationships: it is not just a matter of performing the correct social cues in response to a social interaction."

While some robots are imagined as intermediaries, such as those developed as therapeutic assistants for use with children exhibiting autism-spectrum conditions (see box, below), or the robot nannies in Japan that allow parents to monitor their children remotely from the office, others are being developed as viable alternatives to human companions.

Animal behaviouralists routinely accept sociality in relation to the animal world, particularly among primates and domesticated animals such as dogs. Social anthropologists have long been interested in how people, animals and things relate, and how objects can take on the properties of people, even stand in for them.

Artefacts can represent people and people can become "distributed" in their environment through artefacts: I "distribute" my thoughts via this article, for example. In Euro-American societies, it has been widely observed how people form all kinds of relationships with their mechanical objects, such as cars. Human sociality is a complex phenomenon.

There is also a huge volume of literature about how humans and computers interact. This field of research is dedicated to understanding the subtle interrelations involved. For example, the growth of gaming culture is explored at length in Sherry Turkle's The Second Self: Computers and the Human Spirit (first published in 1984).

Turkle writes of how teenagers often imagine their online or gaming interactions as they might their "real-world" ones. Today, characters in video games, avatars and virtual agents are intimately connected to the people who create them and as such have a peculiar kind of life.

Is it simply an old-fashioned idea that robots should do our work but not be our friends? Are humans simply too different from the automata they can create?

Brooks views human beings as machines, albeit complex ones. A mechanistic view of our species as pre-programmed or carrying out innate instructions is a central model in science and engineering, and it has often been left to the social sciences to present an alternative vision.

Social science has taken an anti-essentialist turn, but by doing so has diminished the position of the human in social relations. Anti-essentialism takes the view that nothing (human, animal or thing) has an "essential" quality. A popular way of theorising about the relations between humans and non-humans is expressed in the actor-network theory (ANT). This posits that when a human agent interacts with a thing, the person is not the dominant character in the relationship: rather, the thing is just as important in the exchange.

Whereas some roboticists invoke one kind of totality (all are machines), some who subscribe to radical anti-essentialist theorising take an opposing but equally fundamentalist view (all are agents).

As John Law, professor of sociology at The Open University and a proponent of ANT, explains: "The theory is a semiotic machine for waging war on essential differences. It has insisted on the performative character of relations and the objects constituted in them."

Another key thinker in the ANT school, Bruno Latour, professor of sociology at Sciences Po in Paris, is interested in extending the concept of the social by presenting the relational point as one of "symmetry". He writes that "symmetry is defined by what is conserved through transformations. In the symmetry between humans and non-humans, I keep constant the series of competencies, of properties, that agents are able to swap by overlapping with one another." Entities - human and non-human - derive meanings in relation to each other.

So should the "social" be broadened to include animals and things?

Harry Collins, professor of sociology at Cardiff University's School of Social Sciences and a prominent critic of ANT, says: "We should not be extending it in the first place. It is based on a very simple fallacy. It is the case that dogs, cats, cars, rocks, doors, my blood, my heart, the Sun, the air, etc are part of my life. Without them, my life, and that includes my social life, would be different.

"But I am not part of their social life. This is because they don't have language (with the possible marginal exception of chimps and dolphins, but that does not affect the argument).

"People, and that includes all ANT enthusiasts, are failing to use the word 'social' properly. 'I am emotionally affected by my car/dog etc...therefore my car/dog etc share my emotional life and are emotional creatures.'"

In Collins' view, language is crucial to sociality, yet this view has been challenged. Just as there are many different definitions of "social" today, the argument that language is a defining feature of human uniqueness is complicated by the fact that there are so many competing definitions of what constitutes language: written, spoken and non-verbal. Some higher apes are said to have language, although studies claiming this are controversial.

Social roboticists are trying to develop robotic language, and many draw on non-verbal body language as a communication tool for their robots. When "chatbots" - conversational machines - compete each year for the Loebner Prize for artificial intelligence, they take part in a Turing test, named after the mathematician Alan Turing.

In a 1950 paper, Turing argued that if, during a text-based conversation, a human could not tell whether the responses they received were from a human or from a machine, this would show that machines could "think" - the "thinking" being the ability to fool the person they are conversing with.

"Anthropologists and animal behaviouralists have extended the social to animals for years," says Aiello. "What is interesting to me is extending the social to machines or to inanimate objects - that is the real issue.

"I keep thinking that if the large human brain evolved to avoid deception, is it going to perceive robots as deceiving it? For this reason, I never really buy into the sociality of machines as I might buy into social contact with animals."

Aiello is right that it is not easy to convince laypeople that they are engaging in genuine interpersonal exchanges when interacting with robots. Roboticists have a difficult job because the more human-like their work appears, the more people expect it to possess higher degrees of intelligence.

This has led some roboticists to make their robots very cute or child-like - or obviously mechanical - to avoid what they call the "uncanny valley". Indeed, if you visit robotics labs in the US, the UK and elsewhere in Europe, they often seem like nurseries because robots are so frequently modelled to look like children.

The uncanny valley describes the sense of unease triggered when there is uncertainty about what something is: for instance, a robot that looks very human-like, but does not behave as such. This returns us to Aiello's point about the "honesty" of mechanical behaviour.

The more human-like a robot looks, the more people expect it to behave like a human. When it does not, the results can be quite creepy. Consider the University of Hertfordshire's robot Kaspar: adults tend to think it is scary and many feel it looks too human-like.

Interestingly, the more mechanical a robot appears, the more people tend to warm to it. Mechanical ones appear to be more popular than the fleshy-looking variety. I have seen people alter their social behaviour (such as language exchange and eye contact) when interacting with automata. They work extremely hard to make themselves understood by humanoid robots. Rather than a relation of symmetry, asymmetry emerges, with people often compensating for the machines' lack of social behaviour.

Sociality is at the heart of the interface between technology and social science. As a subject, it is important to ask: can the concept be extended to include everything, or should there be boundaries drawn to decide what can and cannot be social? Perhaps figuring this out is also a means of rethinking the importance of humans as agents in social affairs.

Robot revolution: Age of the machine left creator in despair

This year marks the 90th anniversary of Karel Čapek's play R.U.R. ("Rossum's Universal Robots").

"Rossum" is the name of the scientist who creates the robots, and also of his son, who develops the formula to build them. It is thought to derive from the Czech word "rozum", meaning "mind" or "reason".

The play, first performed in Prague in 1921, is a tale of a society where people are increasingly indolent, with all their work done by the Rossum robots. This leads to a crisis in human society, with the robots eventually rising up and annihilating humanity.

Čapek was born in 1890 in a small village in the Krkonoše Mountains, in what is now the Czech Republic. Besides robots, he used other non-human devices to explore the politics of his day, including salamanders and newts.

It was his brother, Josef, who came up with the term "robot" to describe the working beings in the play. Robot is derived from the Slavic term "robota", which roughly means "serf labour".

It may seem strange to think that the first robot was a critical symbol of Modernism. Čapek's play was a political commentary on the period: he was neither of the Left nor the Right and, according to literary critic Peter Kussi, "rejected collectivism of any type, but was just as opposed to selfish individualism. He was a passionate democrat and pluralist."

We imagine the robot as metallic, but it did not start out that way. In R.U.R., the robots are made of flesh and blood, but their organs, veins and capillaries are assembled on a factory production line.

Yet so powerful was the machine Modernism of the era, so celebrated was the machinery of industrial production, that the robot character soon turned into a machine. Other artists took up the play and effected the transformation.

The metamorphosis of his robot characters from humanoids to machines led Čapek to despair.

Writing in the third person, he said: "It is with horror, frankly, that he rejects all responsibility for the idea that metal contraptions could ever replace human beings, and that by means of wires they could awaken something like life, love, or rebellion."
