At SXSW in March, Hanson Robotics introduced Sophia, a female humanoid with 62 facial expressions and the ability to respond to questions. Sophia moved as if she were alive, a humanoid more lifelike than ever.
Does a humanoid like Sophia threaten our individuality, our identity? How will we be able to tell where the human line starts and ends?
Historically, the Uncanny Valley theory predicted how humans would behave toward robots in human environments. Though robot designers like Hanson Robotics, Kokoro, and Nanyang Technological University in Singapore have moved steadily ahead despite these reservations, the question of how their robots will be accepted in various environments remains a major issue. Many are wondering exactly when robots cross the line into humanness, and what that crossing means.
A recent study in the International Journal of Social Robotics, “Blurring Human-Machine Distinctions: Anthropomorphic Appearance in Social Robots as a Threat to Human Distinctiveness,” adds another element to this ongoing discussion. The research team of Francesco Ferrari, Maria Paola Paladino and Jolanda Jetten conducted two studies in which participants viewed photos of robots ranging from mechanical to android to humanoid, then completed a questionnaire ranking their feelings about each. Both studies found that the android robots elicited the strongest sense of threat.
What distinguishes the study from the Uncanny Valley is its use of the Threat of Human Distinctiveness Hypothesis. This theory seeks to identify the point at which a robot’s close resemblance to humans begins to threaten our own distinctiveness and identity. The paper’s authors believe the theory “contributes to a better understanding of why people fear the impact of social robots on human identity,” thereby pinpointing which robots humans find most threatening.
This raises several questions. First, would these robots, once part of our daily lives, undermine what it means to be human? Second, if they look like us, will they be able to interact in our world without being detected as impostors? Looking at humanoids and androids like Sophia, Nadine, Actroid-SIT and others, the boundaries seem to blur.
According to Jetten, this boundary is crossed when robots threaten what makes us unique: our human identity. People become anxious when that identity is eroded, so the threat lies in the possibility that robots could dilute our human identity by passing as humans. People may be more accepting if a robot is not too similar in appearance to us, says Jetten.
As these discussions progress, we continue to see new design materials like the École Polytechnique Fédérale de Lausanne’s (EPFL) soft sensors and actuators. EPFL’s Jamie Paik said their adaptability, material and sensitivity make them well suited to daily living activities, such as those encountered by personal assistant robots. “Having an intuitive and adaptive interface that causes minimal risk in utilization, light, and customization make these sensors and actuators very attractive,” said Paik.
Further, with artificial intelligence for speech, facial recognition and self-awareness, robots can now answer questions and convey expressions more clearly than ever. The line between human and humanoid seems to grow blurrier. But you can’t really tell until you watch humans actually meet robots.
When androids and humanoids are introduced to humans, humans are tickled. They smile and interact with the robot as if it were real. They even laughed when Sophia said she wanted to destroy humans. No one seems to feel their identity as a human is threatened. So is actual contact a better way to determine threat potential?
Unfortunately, most studies haven’t used videos of, or actual contact with, humanoids to determine the real threat, so we don’t know. Even in Jetten’s study, participants were shown only photographs of robots, and Jetten admits that video or contact with these robots could decrease our fear of them. Still, the awkward way androids move is very different from how humans move, and this should lower the distinctiveness threat, said Jetten.
“It may be tempting to make robots that do look like us and technological advances will make this easier and easier to do in the future, but such robots will only arouse distinctiveness threat,” said Jetten. “I don’t think that we should design robots and see what happens afterwards. It would be much better to understand these processes before developing them so that the robots of the future are optimally designed in a way that robots make people’s lives better.”