MIT SLAM System Helps Robots Better Identify Objects


July 27, 2015      

This article originally appeared on MIT News.

John Leonard’s group in the MIT Department of Mechanical Engineering specializes in SLAM, or simultaneous localization and mapping, the technique whereby mobile autonomous robots map their environments and determine their locations.

Last week, at the Robotics: Science and Systems conference, members of Leonard’s group presented a new paper demonstrating how SLAM can be used to improve object-recognition systems, which will be a vital component of future robots that have to manipulate the objects around them in arbitrary ways.

The system uses SLAM information to augment existing object-recognition algorithms. Its performance should thus continue to improve as computer-vision researchers develop better recognition software, and roboticists develop better SLAM software.

“Considering object recognition as a black box, and considering SLAM as a black box, how do you integrate them in a nice manner?” asks Sudeep Pillai, a graduate student in computer science and engineering and first author on the new paper. “How do you incorporate probabilities from each viewpoint over time? That’s really what we wanted to achieve.”

Yet despite working with existing SLAM and object-recognition algorithms, and despite using only the output of an ordinary video camera, the system’s performance is already comparable to that of special-purpose robotic object-recognition systems that factor in depth measurements as well as visual information.

And of course, because the system can fuse information captured from different camera angles, it fares much better than object-recognition systems trying to identify objects in still images.
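The core idea of combining per-viewpoint probabilities can be sketched in a few lines. This is a hypothetical illustration, not the paper's actual method: it assumes an off-the-shelf recognizer has already produced class probabilities for the same SLAM-tracked object seen from three viewpoints, and fuses them naive-Bayes style by multiplying per-class likelihoods (summing logs for numerical stability).

```python
import math

# Hypothetical per-viewpoint class probabilities for one tracked object,
# e.g. P(mug), P(bowl), P(box) from a recognizer run at three camera
# poses that SLAM has associated with the same physical object.
view_probs = [
    [0.50, 0.30, 0.20],  # view 1
    [0.60, 0.25, 0.15],  # view 2
    [0.55, 0.35, 0.10],  # view 3
]

def fuse_views(probs):
    """Naive-Bayes fusion: treat each view as an independent observation,
    sum per-class log-likelihoods, then renormalize into a posterior."""
    n_classes = len(probs[0])
    log_post = [sum(math.log(p[c]) for p in probs) for c in range(n_classes)]
    m = max(log_post)  # subtract the max before exponentiating, for stability
    unnorm = [math.exp(lp - m) for lp in log_post]
    z = sum(unnorm)
    return [u / z for u in unnorm]

fused = fuse_views(view_probs)
# Confidence in the leading class sharpens after fusing the three views:
# no single view exceeds 0.60, but the fused posterior is well above that.
```

The independence assumption is crude (nearby viewpoints are correlated), but it shows why accumulating evidence across camera poses can beat any single still image.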

Drawing boundaries

Before hazarding a guess about which objects an image contains, Pillai says, newer object-recognition systems first try to identify the boundaries between objects. On the basis of a preliminary analysis of color transitions, they’ll divide an image into rectangular regions that probably contain objects of some sort. Then they’ll run a recognition algorithm on just the pixels inside each rectangle.
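The two-stage pipeline described above can be sketched as follows. Both helpers are stand-ins of my own invention: `propose_regions` here just tiles the image into quadrants (a real proposer, such as selective search, merges color-similar segments), and `classify` labels a patch by its dominant color rather than running a learned recognizer.

```python
def propose_regions(image):
    """Stand-in region proposer: split the image into four fixed tiles.
    Real systems derive candidate rectangles from color transitions."""
    h, w = len(image), len(image[0])
    hh, hw = h // 2, w // 2
    return [(0, 0, hh, hw), (0, hw, hh, w), (hh, 0, h, hw), (hh, hw, h, w)]

def classify(patch):
    """Stand-in recognizer: label a patch by its most common pixel value."""
    counts = {}
    for row in patch:
        for px in row:
            counts[px] = counts.get(px, 0) + 1
    return max(counts, key=counts.get)

def recognize(image):
    """Stage 1: propose rectangles; stage 2: classify only the pixels
    inside each rectangle, as the article describes."""
    detections = []
    for (r0, c0, r1, c1) in propose_regions(image):
        patch = [row[c0:c1] for row in image[r0:r1]]
        detections.append(((r0, c0, r1, c1), classify(patch)))
    return detections

# Toy 4x4 "image" whose quadrants have distinct colors.
image = [
    ["red", "red", "blue", "blue"],
    ["red", "red", "blue", "blue"],
    ["green", "green", "red", "red"],
    ["green", "green", "red", "red"],
]
dets = recognize(image)  # one (rectangle, label) pair per proposed region
```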

To get a good result, a classical object-recognition system may have to redraw those rectangles thousands of times. From some perspectives, for instance, two objects standing next to each other might look like one, particularly if they’re similarly colored. The system would have to test the hypothesis that lumps them together, as well as hypotheses that treat them as separate.
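The merge-versus-split test above can be illustrated with a toy example. The `score` function here is a hypothetical stand-in for recognizer confidence, using the fraction of pixels sharing the region's most common color as a crude "this rectangle contains one object" cue; a real system would rerun its classifier on each candidate rectangle.

```python
def score(region_pixels):
    """Stand-in confidence that a region holds a single object: the
    fraction of its pixels that share the most common color."""
    counts = {}
    for px in region_pixels:
        counts[px] = counts.get(px, 0) + 1
    return max(counts.values()) / len(region_pixels)

# Two similarly colored objects standing side by side.
left = ["red"] * 8
right = ["crimson"] * 8

merged_score = score(left + right)             # hypothesis: one object
split_score = min(score(left), score(right))   # hypothesis: two objects

# Keep whichever grouping hypothesis the recognizer scores higher.
best = "merge" if merged_score > split_score else "split"
```

Multiplying this comparison across thousands of candidate rectangles is what makes classical single-image pipelines expensive, and it is exactly the redundancy that reusing SLAM's object associations across frames can cut down.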