Robots and neuroscience
Many people expect that humans and robots will interact more frequently in the near future. For this reason, it is extremely important that robots are capable of smooth and natural movements so that they do not make people feel uncomfortable.
Dr Thierry Chaminade from the Wellcome Trust Centre for Neuroimaging is part of an international research group that last month published a paper on the human brain’s response to humanoid robots. In the study, the researchers scanned the brains of volunteers with functional magnetic resonance imaging (fMRI) while they watched video clips of people and a humanoid robot expressing the same emotions.
I spoke to Dr Chaminade about the paper, and the worlds of robotics and neuroimaging.
Are robots often used in neuroscience?
There hasn’t been a lot of work in this area. Our paper is one of the first studies that tries to really look at how we perceive robots, with the aim of understanding social cognition. For that reason it was a tough challenge to get it published, because it was not deemed relevant by either roboticists or neuroscientists. Roboticists are not interested in the brain and neuroscientists don’t care about robotics. We had a hard time explaining why this research is interesting to both groups.
What was the basis for this study?
The uncanny valley hypothesis says that if you make a robot almost, but not quite, human-like, it becomes unsettling. This made me question – what happens if it does not really look like a human? We were trying to figure out how the perceptual system deals with these human-like but imperfect appearances.
As we expected, the system that we use to perceive human actions is simply not activated by the robot stimuli. However, we discovered that we can actually cheat the system by telling it to perceive what the robot is doing as an emotion.
When we asked the subjects to judge the robot using human names for emotions – for example, ‘How happy was the robot?’ or ‘How disgusted was the robot?’ – this actually primed the system to be more responsive to the robot. This was quite unexpected, and may provide interesting hypotheses for future work.
If we tell people that the robot is portraying human emotions, or perhaps that it is being controlled by a human, they may be more likely to resonate with the robot than if we show it to them without giving them any information. That is quite interesting, because it will become important as more robots are made – the way you present your robot can make a huge difference to the way that people perceive it.
In the paper, you mention social care for the elderly and cognitive therapy as possible applications for robotics.
There are several robotic animals, such as a baby seal, that have been developed as companions for the elderly. A lot of people are working on robots for cognitive therapy as well. These robots could be very useful in rehabilitation therapies for children with autism in particular. Some researchers suggest that children with autism are missing some stages in their social learning because they often avoid contact with other people.
One of the hypotheses is that they avoid contact because they experience repulsion toward certain features of humans that we haven’t yet identified. For example, it is often noticed that they avoid eye contact. Alternatively, they may be overwhelmed by the complexity of human cognition, which they cannot really read.
Researchers have observed, however, that many of these children and young adults are keen on new technologies and can often communicate better through online messaging tools than with a real person, face-to-face. There have also been several observations that children with autism may be attracted to features in a robot that they will avoid in other people. Taking the earlier example of the eyes, they will often not make eye contact or follow the gaze of a person, but there have been cases where they approach a robot, touching its eyes, trying to figure out what the robot is looking at.
All of these things support the hypothesis that robots could be used as learning tools for these children, for skills that they have more difficulty learning from a person than other children do. We could help them learn things that are important for normal social interactions, such as following gaze, pointing to objects, or understanding the mental states and emotions of other people. It’s one of the most promising direct applications of humanoid robotics in everyday life.
The study involved researchers from all over the world. How was the work divided between you?
We had three groups. The Japanese team, including Massimiliano Zecca and Atsuo Takanishi, actually built the robot.
The Italian team were the source of the idea; they have done other work on mirror neurons – brain cells that you use when you perceive other people’s actions in a process called motor resonance. Our original question was: do you use motor resonance when you see non-human actions? Simply put, are mirror neurons responsive to humanoid robots?
The UK-based group, including me, did the fMRI and all of the associated work to investigate this question. To tell you the truth there are some co-authors of the paper I have not met in person yet!
What do you think is the future of robotic technology?
The direction that robotics will take depends on what is funded. There is a highly problematic issue in this respect – what use can we make of these robots? In the short term, people have problems getting funded in robotics simply because there are no clear applications for building humanoids. So the question becomes more about the long-term use of these robots, which is not entirely clear.
That being said, other advances are being made as a direct consequence of humanoid technology. Exoskeletons – frameworks developed with robot technology that can be attached to the body – are useful for people with hemiplegia (severe weakness of the limbs on one side of the body). Some exoskeletons are even being developed to help people walk again, or to use their arms again. It’s not a direct application of humanoid robots, but it uses technology that came from the development of complex robots.
- Chaminade, T., Zecca, M., Blakemore, S., Takanishi, A., Frith, C., Micera, S., Dario, P., Rizzolatti, G., Gallese, V., & Umiltà, M. (2010). Brain response to a humanoid robot in areas implicated in the perception of human emotional gestures. PLoS ONE, 5(7), e11577. DOI: 10.1371/journal.pone.0011577
Ailbhe Goodbody is undertaking a work experience placement at the Wellcome Trust.