Getting Robots to Move Like Humans Requires Less Efficiency

Mobile robots encountering other equipment or humans in a warehouse need to be able to move more like humans to avoid potential problems.

September 03, 2019      

Most people have done the awkward hallway dance when crossing paths with someone walking in the opposite direction. As robots get integrated into warehouses and other environments, not knowing whether a robot will move correctly creates a new level of awkwardness. Creating new algorithms to make robots move more like humans requires that designers think less about efficiency and more about behavioral policies, said Russell Toris, director of robotics at Fetch Robotics.

Toris will discuss the idea of creating machine learning for human-like behavior at RoboBusiness 2019. He recently spoke with Robotics Business Review about his session and his premise.

“People perceive robots as self-interested, and roboticists develop algorithms that are optimized,” said Toris. “But when you’re in a mixed environment with humans and just using raw sensor data, robots don’t know the difference between a person and a box in the middle of the floor. If robots can identify things in an environment and use machine learning to know how to move around facilities and how to act around forklifts, people actually start to perceive the robots as being more intelligent. As a result, they actually start trusting the robots more.”


Russell Toris, Fetch Robotics

Toris said the key is for robots to predict how the human is going to react, and then make a decision. “In the scenario in the hallway, the human typically moves to the right,” he said. “So if the robot sees that there’s a human tracking towards them, it might decide to swerve off the planned path for a little bit to give the human space.” The concept is similar with a vehicle like a forklift, where humans would move first because they inherently know that it has the right of way. “So we want the robots to be able to do the same thing,” said Toris. “If they see a forklift that’s coming toward them, rather than play chicken, it moves out of the way so the operator on the forklift can get by.”

Getting robots to move this way can be counter to a robot moving in the straightest line, which is all about efficiency and productivity. Toris explained that it’s not about programming inefficiency into the robot, but rather developing behavioral policies. With computer vision algorithms, robots can identify objects or people and use the policies to make decisions.

“Behavioral policies are high-level concepts, such as moving out of the way of things, that a robot may look at every few seconds and decide to deviate from its path at that point,” Toris said.
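The article does not describe Fetch's actual implementation, but the idea of a behavioral policy layered on top of object classification could be sketched roughly like this. All names, classes, and actions here are hypothetical illustrations, not Fetch's code:

```python
from dataclasses import dataclass
from enum import Enum

class ObjectClass(Enum):
    HUMAN = "human"
    FORKLIFT = "forklift"
    STATIC = "static"  # e.g., a box or a trash can

@dataclass
class Detection:
    kind: ObjectClass          # label from a vision classifier
    distance_m: float          # range to the detected object
    approaching: bool          # whether it is moving toward the robot

def plan_deviation(detection: Detection) -> str:
    """Return a high-level action for a detected object.

    Encodes the behaviors described in the article: yield to an
    approaching forklift (it has the right of way), swerve right to
    give an approaching human space, and otherwise just follow the
    planned path with normal obstacle avoidance.
    """
    if detection.kind is ObjectClass.FORKLIFT and detection.approaching:
        return "yield"          # don't play chicken with the forklift
    if detection.kind is ObjectClass.HUMAN and detection.approaching:
        return "swerve_right"   # mirror the human keep-right convention
    return "follow_path"        # static obstacle: efficiency wins

# A robot might re-evaluate a policy like this every few seconds:
print(plan_deviation(Detection(ObjectClass.FORKLIFT, 5.0, True)))  # yield
print(plan_deviation(Detection(ObjectClass.HUMAN, 3.0, True)))     # swerve_right
print(plan_deviation(Detection(ObjectClass.STATIC, 2.0, False)))   # follow_path
```

The trash-can anecdote below also shows why the classifier matters: if `ObjectClass` is assigned from shape alone, a trash can misread as another robot would trigger the wrong policy.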

In early tests, engineers observed that the robots were moving around and stopping near objects that they shouldn’t have – namely, a trash can. They traced the problem to the trash can’s shape, which was similar to that of other robots. To human observers, the robot looked “crazy” or “drunk”.

Toris explained, “One of the associates in the warehouse reported that the robots were definitely moving around things that were also moving. But every now and then it would just go down the hallway and talk to a trashcan for 10 seconds and then keep going.”

Building trust

In addition to movement improvements, the Fetch Robotics team has seen significant changes in workforce interactions when the robots begin behaving more like humans. Teams do things like naming the robots, or even dressing them up. Toris said the unexpected result is that teams become more engaged in their work and in the company.

“We have seen a lot of warehouse associates that are now making contributions, in terms of bringing in different ideas around how and where to utilize the robots to increase efficiency or productivity,” Toris said. “That’s very interesting, because you get someone in the warehouse who previously hasn’t been engaged in its overall success, suddenly interested in the technology and making recommendations around improvement.”

In the end, Toris said it is up to the industry to rethink the human factor. “From a technology standpoint, I think a lot of robotics companies these days are implementing very robust, reliable, and safe solutions. But if we ignore the human/robot interaction factor, a lot of these deployments can actually become less beneficial to a company.”

Toris will speak about machine learning, vision algorithms, and more at RoboBusiness 2019, Oct. 1 – 3, in Santa Clara, Calif. Robotics Business Review produces RoboBusiness.