University of Washington (UW) researchers Andrew Meltzoff, Rechele Brooks, and Rajesh Rao recently published an interesting article in the October/November 2010 issue of the journal Neural Networks that examined the social-communicative interactions that promote gaze following in 18-month-old human toddlers. There was a robotics component to the research: Fujitsu Laboratories’ 25-degree-of-freedom HOAP-2 humanoid robot played a key role in the study. But even if the research had been carried out without the use of a ‘bot, the results would be fascinating.
For those companies targeting the research robotics market, the study exemplifies a class of academic work that lies outside robotics research proper, and outside even non-robotic research that employs robotics technologies as a study tool, such as robots serving as assistive technology for the elderly and people with disabilities, or as teaching aids for children, including those with autism spectrum disorder. More to the point, humanoid robotics technology was central to this study, so the results, along with the common-sense insights that can be logically extrapolated from them, should be useful to developers of humanoid robots, or of any robot intended to interact socially with humans.
The research itself provides insight into one of the ways babies determine whether an object is a perceiving, sentient, social being (or not). For the study, 64 babies (half male, half female) were split into four experimental and control groups and then tested individually, with the results captured on video. In the first phase, the babies in the “social interaction” experimental group witnessed a UW researcher interact with the robot as one would with a human child, asking it questions such as “Where is your tummy?”, “Where is your head?”, and “Do you want to play?”
The robot, which was remotely controlled by an unseen researcher, would respond appropriately, touching its torso or head, and nodding, respectively. In the second study group, the robot moved, but the researcher was passive, while in the third group the researcher and robot moved and communicated, but not in synchrony. In the fourth group (the passive robot baseline), the researcher executed the same actions as the social interaction group while the robot remained passive. It is important to note that at no time did the robot interact directly with the child.
For the second phase of the study, the researcher left the room. The robot then beeped and moved its head just enough to draw the baby’s attention, and then turned its head toward one of two toys placed near the baby and its mother. For each baby, the researchers recorded whether the child turned to look at the same object (gaze following). Not surprisingly, 13 of the 16 babies in the social interaction group followed the robot’s gaze. The babies in the passive robot baseline group exhibited gaze following in only 3 of 16 trials. In the other two study groups, gaze following was reported in approximately 50 percent of the cases.
According to Meltzoff, Brooks, and Rao, “The results show that infants’ likelihood of following the ‘gaze’ of a robot is influenced by their prior experience. Infants who see the robot act in a social-communicative fashion are more likely to follow its line of regard to an external target. Infants acquired information about the robot simply by observing its movements and interactions with the adult.” There is little to disagree with here. That comes later.
Among the gaze-following research community, the study, while not seminal, is critical. Basically, the research shows that social interaction plays a key role in encouraging gaze following in babies. But the study’s authors also note that “This finding has implications for the future design of humanoid robots and for the field of social robotics in general. Seamless human-robot interaction requires not only that the robot be able to follow a human partner’s gaze, but also that the human is motivated to follow the robot’s.” Exactly correct. But what would motivate a human to follow a robot’s gaze in the absence of seeing someone else interact with the robot in the first place?
Additional studies could provide some clues, but short of that you can go with common sense. Random, general questions and conversation (small talk) on the part of the robot would be a good place to start, while context-sensitive verbal interaction would be even better. Directed movement, including looking at, recognizing, and following humans in close proximity to the robot, would also imply sentience, as would mimicry of common human behavior such as yawning or stretching, or even random motion (moving hands or fingers, or shifting of the body).
I also believe that a humanoid form factor goes a long way toward connoting humanness, and thus the ability to feel, think, and perceive. The authors of the gaze-following study, however, go to great lengths to downplay the importance of both the robot’s movement and its humanoid form factor, noting that “It is not just that infants treated the robot’s movement as a visually salient event. Had they done so, they may have watched the rotating cuboid atop the metallic body (the robot’s “head”) but not have extended their look to the distal target. The infants did not just track a corner of the moving cuboid, but directed their gaze to a distal target more than a meter away.” They also correctly point out that while the robot was humanoid in form, the exact same robot was used in all of the tests, yet the responses of the babies varied.
I understand the need for researchers to eliminate variables in their studies, but in reading the article I had the sense that they were protesting just a bit too much regarding the robot’s movement and form factor. When, in the last paragraph of the study, the robot was described as “the heap of metal” and “the metallic entity,” it was clear that the researchers were overreaching. What if the robot had not been humanoid at all and was genuinely a “heap of metal”? You do not need two arms and two legs to “touch your tummy,” or two eyes to gaze. You do not have to be bilaterally symmetrical to have a head. While it is true that the gaze-following results varied even though the same robot was used, I wonder whether the same results, or even more significant differences between the groups, would have been recorded if the “social-communicative entity” had not been humanoid at all.
Humanoid morphological characteristics have been imprinted on all of us from the day we are born, and that imprinting presumes a number of fundamental behavioral traits. At some very basic level, and beginning at a very early age, we understand that “humanoids” capable of autonomous movement connote a degree of perception and sentience. They are “psychological agents” in the parlance of the UW gaze-following researchers. All humans come to this understanding instinctively. That includes 18-month-old babies. After all, babies are humans too.