Socially assistive robots (SAR) have the goal of improving the health, wellness, communication, learning and autonomy of the humans who interact with them. Combining methods from computer science and engineering with those of cognitive science, social science and human-subjects evaluation, SAR researchers give robots the ability to help mitigate critical societal problems that require sustained, personalized support to supplement the efforts of parents, caregivers, clinicians, and educators.
One of the pioneers in this field is Dr. Maja Mataric, the Chan Soon-Shiong Distinguished Professor in the Computer Science Department, the Neuroscience Program, and the Department of Pediatrics at the University of Southern California. In addition, she is the founding director of the USC Robotics and Autonomous Systems Center (RASC), co-director of the USC Robotics Research Lab, and vice dean for research in the Viterbi School of Engineering.
Mataric received her Ph.D. in computer science and artificial intelligence from MIT in 1994, and her master’s in computer science from MIT in 1990. She also has a bachelor’s degree in computer science from the University of Kansas.
Joanne Pransky, associate editor for Industrial Robot Journal, recently spoke with Mataric about the goals of socially assistive robots, the issues around consumer robots like Jibo and Kuri, and why we might need more imperfect robots.
The full interview is available free to Robotics Business Review readers until Sept. 30, 2019. Here is an excerpt:
Robots, faces and torsos
Q: Since a socially assistive robot (SAR) does not require physical interaction, even in a nonhumanoid form, do you think a SAR needs a torso, or is a “face”/head enough, and within the head, what features has your research found to be important to the user (e.g., eyes, mouth), etc.?
Mataric: We actually just published a review paper in which we examined 65 studies and compared different aspects of embodiment, both onscreen and physical. Physical embodiment does not only mean physically interacting with the environment to perform tasks; embodiment also has to do with non-verbal communication, including proxemics (the social use of space, such as how far people feel they should stand from others), oculesics (eye-gaze-related non-verbal communication), posture and gestures, to enhance communication and the perception of being trustworthy, helpful and engaging.
Everything about the robot’s body communicates something at all points in time, even when the robot isn’t moving. What we found is that people typically ascribe a face or a head to a robot, even if there isn’t one. It’s just how we humans are wired, so a SAR needs to have those features, but they can take various forms, so there is room for creative design. However, users need to know if the robot is looking at them, and they want to know if the robot is sad, happy, surprised, confused, etc. Robots don’t have to have a torso specifically, but they have to have features that allow them to communicate effectively.
Q: Do you think social robots and devices should be redefined for the industry?
Mataric: I now see more and more intelligent assistants that are either not yet embodied or that have a physical embodiment somewhere, but they don’t emote in any way. If there is a bit of personality to the assistant, people tend to like it more. For example, Amazon is designing more personality into Alexa so we can be pleasantly surprised by Alexa’s answers. That makes Alexa more interesting to interact with, and gives it more agency.
The question that you are really asking is, do you need embodiment along with agency? Robot embodiment is fundamental to interaction and communication. It provides the ability to communicate in a more cross-cutting, natural and inclusive way than can be achieved through speech alone. We are in an age of disembodied online communication, and in some realms, it is stripping away human empathy and our sense of real connectedness. Embodiment brings those properties back naturally, and that is one of the many reasons it is important.
Robot imperfection is good
Q: What robot behavior gave you as a scientist the most unexpected response?
Mataric: One of the things that I find really interesting and important for the way we’re developing technology today is the notion of perfection. Technologists are really obsessed with trying to create the perfect robot or agent that knows all the answers and behaves perfectly. However, that’s really not what people want from a companion (though it is what we want from a surgeon; so context is important). In everyday life, people do not accept function alone if they can’t engage with it in a social way, because we are social creatures.
One example where I learned this was about ten years ago from our humanoid robot, Bandit. Bandit, built by BlueSky Robotics, was quite sophisticated for its day and remains so today. It had a simple actuated mouth that sometimes did not work because of a servo motor problem. Sometimes the mouth “froze,” so the robot was talking but the mouth was not moving. We decided to control the conditions and programmed Bandit to say at the beginning of an interaction with a user: “Sometimes my mouth just doesn’t work. I’m sorry about that.”
To our surprise, we discovered that people were so much more forgiving about everything Bandit did for the remainder of the interaction. Because Bandit apologized up front, users effectively gave it a break, and allowed it to be imperfect. In fact, they liked it that way, because it was more relatable. The robot’s open vulnerability made people much more forgiving and they enjoyed the interaction more.
Q: What lessons do you think can be learned from say, Jibo?
Mataric: I think there were two major challenges with Jibo. One was the timing of Jibo relative to Amazon Alexa, which came out earlier (via the Echo) and had so much more content.
The other challenge with Jibo, and also with Mayfield Robotics’ Kuri, another beautifully designed robot, was the lack of a specific purpose. Jibo was supposed to be a platform that people would program to do interesting things. But consumers want a product that can do things and be useful in a way that is commensurate with its cost. Unfortunately for Jibo, the much cheaper Amazon Echo (and other Alexa products, as well as Google Home) could do more, because it had much more content. The adorable Kuri could only take photos. But users want more out of a robot; they want it to have some useful purpose. The purpose does not have to be a physical function, but it has to be something the robot can do better (more enjoyably) and more cheaply than other devices and technologies.
Q: How did you become part of the startup Embodied that you co-founded? Did they approach you or was it a long-term goal of yours?
Mataric: I was part of the small group of researchers who pioneered the field of SAR back in 2005. I am proud to say that my research team named it; specifically, in a paper published by David Feil-Seifer and me. We are very happy that the name has stuck!
Based on my lab’s work in SAR, I was inspired to see if we could transition the research insights to real-world use. I had a sabbatical in 2016, so that was the time to do a startup. I said, “Let’s get this out to real people,” and I teamed up with my former postdoc Paolo Pirjanian; we talked about various possible applications of SAR for about six months, identified one that had a business proposition, and then went on to incorporate Embodied, Inc., and raised the seed and A rounds of financing.
I was Chief Science Officer of Embodied until November 2018, and now I serve in an advisory role. Embodied is currently in stealth; I believe it will come out of stealth in 2020. Though I’m not involved in the daily operations of the company anymore, I’m hoping that it will be a successful instance of a robot with a purpose.