From empathy to revulsion over a machine?
Some four decades ago, Masahiro Mori, who was then a robotics professor at the Tokyo Institute of Technology, wrote an essay describing how he thought people would react to robots that looked and acted almost like humans.
Mori concluded that a person’s response to a human-like robot would quickly drop from empathy to revulsion as the machine approached, but ultimately failed to attain, an authentic lifelike appearance.
Undeterred by Mori’s supposition, which he dubbed “the uncanny valley,” researchers today are developing technologies that aim to help robots look, gesture, walk and even sense things as people do. It’s a trend that could soon bring the uncanny valley into disturbing reality.
Robots with human faces
Does that robot look familiar? It ought to, say the engineers at the Institute for Cognitive Systems (ICS) at Technische Universität München. Working with Japanese colleagues, the ICS researchers have developed a “Mask-Bot” that they hope will someday revolutionize human-robot interactions.
“Mask-Bot will influence the way in which we humans communicate with robots in the future,” predicts Prof. Gordon Cheng, head of the ICS team.
Mask-Bot displays realistic three-dimensional animated faces on a transparent plastic mask. A projector placed behind the mask beams a representation of a human face onto the mask’s inner surface, creating realistic features that can be viewed from an almost infinite number of angles. Mask-Bot is bright enough to operate in full daylight, thanks to a small but powerful projector as well as a coating of luminous paint sprayed onto the inner plastic mask.

To faithfully replicate facial expressions, ICS researcher Takaaki Kuratate developed an animation engine that filters an extensive set of facial motion data, collected from people with a motion-capture system. As Mask-Bot operates, the engine selects the facial expressions that best match specific sounds as they are being spoken.
Emotion synthesis software supplies visible nuances to Mask-Bot to indicate various emotions, such as happiness, sadness or anger.
Mask-Bot can realistically reproduce content typed via a keyboard. A text-to-speech system converts text to audio signals, producing a female or male voice, which can then be set to quiet or loud, happy or sad, at the touch of a button.
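For readers who want a feel for how such a pipeline might fit together, here is a minimal, hypothetical Python sketch. The viseme table, emotion offsets and function names are placeholder assumptions, not the ICS software, but the flow, from typed text to sounds paired with a matching facial pose and an emotion setting, mirrors the description above.

```python
# Hypothetical sketch of a text-driven face animation pipeline.
# Lookup tables and names are illustrative assumptions, not the ICS code.

# A tiny sound-to-mouth-shape table standing in for the motion-capture
# database the article describes.
VISEMES = {
    "a": "open_jaw",
    "o": "rounded_lips",
    "m": "closed_lips",
    "s": "spread_lips",
}

# Simple offsets standing in for the emotion synthesis software.
EMOTION_OFFSETS = {
    "happy": {"mouth_corners": +0.4, "brow": +0.1},
    "sad":   {"mouth_corners": -0.3, "brow": -0.2},
    "angry": {"mouth_corners": -0.1, "brow": -0.5},
}

def animate_text(text: str, emotion: str = "happy", volume: str = "loud"):
    """Return a list of frames pairing each sound with a facial pose."""
    frames = []
    for ch in text.lower():
        if ch.isalpha():
            pose = {
                "viseme": VISEMES.get(ch, "neutral"),
                **EMOTION_OFFSETS.get(emotion, {}),
            }
            frames.append({"sound": ch, "pose": pose, "volume": volume})
    return frames

if __name__ == "__main__":
    for frame in animate_text("Hallo", emotion="sad", volume="quiet"):
        print(frame)
```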
ICS researchers are already working on the next generation Mask-Bot. Mask-Bot 2 will feature a mask, projector and computer control system mounted inside a mobile robot. “These systems could soon be used as companions for older people who spend a lot of time on their own,” Kuratate observes. (Unless the robot creates feelings of revulsion, of course.)
Robotic body language
Whenever people communicate, the way they move often has as much to do with what they’re saying as the words that come out of their mouths. Can robots also be designed to use non-verbal communication to interact more naturally with humans?
Researchers at the Georgia Institute of Technology believe that it’s a good idea to create robots that move like people rather than… well, robots. The scientists have discovered that when robots move in a human-like way, with one movement flowing smoothly into the next, people are more likely to understand what the robot is doing.
“It’s important to build robots that meet people’s social expectations because we think that will make it easier for people to understand how to approach them and how to interact with them,” says Andrea Thomaz, an assistant professor at the Georgia Tech School of Interactive Computing.
Thomaz, working with doctoral student Michael Gielniak, conducted a study in which they examined how easily people can recognize what a robot is doing by simply observing its movements.
“Robot motion is typically characterized by jerky movements, with a lot of stops and starts, unlike human movement which is more fluid and dynamic,” Gielniak says. “We want humans to interact with robots just as they might interact with other humans, so that it’s intuitive.”
Using a series of human movements retrieved in a motion-capture lab, they programmed a robot called Simon to perform the same movements. The researchers also optimized the motions to allow more joints to move at the same time and for the movements to flow into each other.
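As a rough illustration of that second step, the sketch below shows one way recorded joint-angle clips can be cross-faded over a few frames so that one movement flows into the next instead of stopping and restarting. It is a hypothetical Python example under simplified assumptions, not the Georgia Tech code.

```python
# Hypothetical motion-blending sketch: cross-fade two recorded clips of
# joint angles so the transition between them is smooth rather than jerky.

def blend_motions(clip_a, clip_b, overlap=5):
    """Cross-fade the last `overlap` frames of clip_a into the first
    `overlap` frames of clip_b. Each frame is a list of joint angles."""
    blended = clip_a[:-overlap]
    for i in range(overlap):
        w = (i + 1) / (overlap + 1)                  # blend weight ramps toward 1
        frame_a = clip_a[len(clip_a) - overlap + i]
        frame_b = clip_b[i]
        blended.append([(1 - w) * a + w * b for a, b in zip(frame_a, frame_b)])
    blended.extend(clip_b[overlap:])
    return blended

if __name__ == "__main__":
    # Toy two-joint "wave" clips (angles in degrees), purely illustrative.
    wave_up   = [[0.0, 10.0], [0.0, 30.0], [0.0, 50.0]] * 3
    wave_down = [[0.0, 50.0], [0.0, 30.0], [0.0, 10.0]] * 3
    for frame in blend_motions(wave_up, wave_down, overlap=4):
        print(frame)
```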
The team then asked their human subjects to watch Simon and identify the robot’s movements. “When the motion was more human-like, human beings were able to watch the motion and perceive what the robot was doing more easily,” Gielniak says.
The research being conducted by Thomaz and Gielniak is part of a larger project that’s designed to make robots move more like humans. In the years ahead, the pair plan to enable Simon to perform the same movements in various ways.
“So, instead of having the robot move the exact same way every single time you want the robot to perform a similar action, like waving, you always want to see a different wave so that people forget that this is a robot they’re interacting with,” Gielniak says.
Touch response
At Technische Universität München, another team of ICS researchers is working to add touch response capabilities to robots, hoping to allow new systems to respond to taps and other physical sensory cues.
As with human skin, the way the artificial skin is touched means a lot. A tap could, for example, lead to a spontaneous retreat (when the robot hits an object) or cause the machine to use its eyes for the first time to search for the source of contact.
“In contrast to the tactile information provided by the skin, the sense of sight is limited because objects can be hidden,” says Philip Mittendorfer, an ICS scientist researching artificial skin technologies.
The centerpiece of the research is a set of five-square-centimeter circuit boards, which the scientists have mounted on a prototype robot. Each circuit board contains four infrared sensors that detect anything closer than one centimeter. “We thus simulate light touch,” Mittendorfer says. “This corresponds to our sense of the fine hairs on our skin being gently stroked.”
The prototype also incorporates six temperature sensors and an accelerometer. These components allow the system to accurately register the movement of individual limbs and to learn which body parts it has just moved.
“We try to pack many different sensory modalities into the smallest of spaces,” Mittendorfer says. “In addition, it is easy to expand the circuit boards to later include other sensors, for example, pressure.”
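To make the sensor description concrete, here is a hypothetical Python sketch of how the readings from a single skin cell might be interpreted. The data structure, thresholds and names are assumptions for illustration, not the ICS firmware; only the sensor counts and the one-centimeter proximity range come from the description above.

```python
# Hypothetical interpretation of one 5 cm^2 skin cell's readings:
# four infrared proximity values stand in for "fine hair" light touch,
# plus temperature and acceleration channels. Thresholds are assumed.

from dataclasses import dataclass

LIGHT_TOUCH_CM = 1.0   # the "anything closer than one centimeter" range

@dataclass
class SkinCellReading:
    ir_distances_cm: tuple   # four proximity readings, one per IR sensor
    temperatures_c: tuple    # six temperature readings
    acceleration: tuple      # (x, y, z) from the onboard accelerometer

def interpret(cell: SkinCellReading) -> str:
    """Map raw readings from one circuit board onto a coarse touch event."""
    if any(d < LIGHT_TOUCH_CM for d in cell.ir_distances_cm):
        return "light touch"              # something brushed the 'hairs'
    if max(cell.temperatures_c) > 40.0:   # assumed threshold for illustration
        return "warm contact nearby"
    return "no contact"

if __name__ == "__main__":
    tap = SkinCellReading((0.4, 2.0, 2.0, 2.0), (22,) * 6, (0.0, 0.0, 9.8))
    print(interpret(tap))   # -> light touch
```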
A second-generation prototype is next on the team’s agenda. “We will … generate a prototype which is completely enclosed with these sensors and can interact anew with its environment,” says Mittendorfer’s supervisor Gordon Cheng. Cheng adds that the next model will be “a machine that notices when you tap it on the back… even in the dark.”
Walk like a human
A group of U.S. researchers has produced a robotic set of legs that they believe is the first to fully model walking in a biologically accurate manner.
The University of Arizona researchers simplified the neural architecture, musculoskeletal architecture and sensory feedback pathways found in humans and built them into the robot, giving it a remarkably human-like walking gait.
A key component in the human walking system is the central pattern generator (CPG). The CPG is a neural network in the lumbar region of the spinal cord that generates rhythmic muscle signals.
The CPG produces, and then controls, these signals by gathering information from different parts of the body that are responding to the environment. This is what allows people to walk without needing to think about walking.
The simplest form of a CPG is called a “half-center,” which consists of just two neurons that fire signals alternately, producing a rhythm.
The robot contains an artificial half-center as well as sensors that deliver information back to the half-center, including load sensors that sense force in the limb when the leg is pressed against a stepping surface.
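To make the half-center idea concrete, here is a minimal Python sketch, not the Arizona team’s controller: it uses a common Matsuoka-style model in which two units inhibit each other and slowly fatigue, so their outputs rise and fall in alternation, much like the alternating drive that paces walking. Parameter values and variable names are illustrative assumptions.

```python
# Illustrative half-center rhythm generator (Matsuoka-style mutual
# inhibition with adaptation). Parameters are assumed, not measured.

def half_center(steps=400, dt=0.01, drive=1.0):
    u = [0.1, 0.0]            # internal states of the two "neurons"
    f = [0.0, 0.0]            # fatigue (adaptation) states
    tau_u, tau_f = 0.1, 0.2   # time constants (illustrative values)
    w_inhibit, w_fatigue = 2.0, 2.5
    history = []
    for _ in range(steps):
        y = [max(0.0, u[0]), max(0.0, u[1])]   # firing rates
        for i, j in ((0, 1), (1, 0)):          # each unit inhibits the other
            du = (-u[i] - w_inhibit * y[j] - w_fatigue * f[i] + drive) / tau_u
            df = (-f[i] + y[i]) / tau_f
            u[i] += dt * du
            f[i] += dt * df
        history.append(y)
    return history

if __name__ == "__main__":
    # Print the two alternating outputs every 0.4 simulated seconds.
    for t, (flexor, extensor) in enumerate(half_center()):
        if t % 40 == 0:
            print(f"t={t * 0.01:.1f}s  flexor={flexor:.2f}  extensor={extensor:.2f}")
```

In the real robot, the load-sensor feedback described above would modulate this rhythm, shifting the timing of each burst as the leg presses against the ground.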
“Interestingly, we were able to produce a walking gait, without balance, which mimicked human walking with only a simple half-center controlling the hips and a set of reflex responses controlling the lower limb,” says study co-author Theresa Klein.
Entering the Uncanny Valley
As researchers continue developing technologies designed to give robots human-like attributes, they move ever closer to entering the uncanny valley. Will people ultimately rebel against these almost-human machines, forcing scientists to take other approaches to human-robot interaction? At this point, nobody really knows.
Yet given the rapid advancement of robot interface technologies, there’s a good chance the answer will become crystal clear in the not-too-distant future.
For those who like stories of origin:
How Robotics Master Masahiro Mori Dreamed Up the “Uncanny Valley”
And for those who disagree:
The Truth about Robots and the Uncanny Valley: Analysis
And what of the Uncanny Valley “Within”? Watch: Quantic Dream’s The Birth of Kara