June 11, 2015      

This article originally appeared on MIT News.

Last weekend was the final round of competition in the U.S. Defense Advanced Research Projects Agency’s (DARPA) contest to design control systems for a humanoid robot that could climb a ladder, remove debris, drive a utility vehicle, and perform several other tasks related to a hypothetical disaster. The team representing MIT finished sixth out of a field of 25.

But before the competition, the team’s leader, Russ Tedrake, an associate professor of computer science and engineering, said, “I feel as if we’ve already won, because of all the amazing research our students did” – including a paper that won the overall best-paper award at the 2014 International Conference on Humanoid Robots.

Optima primed

In control theory, control of a dynamic system – such as a robot, an airplane, or a power grid – is often treated as an optimization problem. The trick is to contrive a mathematical function whose minimum value represents a desired state of the system. Control is then a matter of finding that minimum and figuring out how to continuously nudge the system back toward it.
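The idea can be sketched in a few lines. Below is a minimal, hypothetical example (not the team's actual controller): a one-dimensional system whose desired state is the minimum of a cost function, with a controller that continuously nudges the state downhill toward that minimum.

```python
# Hypothetical 1-D system: the state x evolves as x += u * dt, where u is
# the control input. The desired state is the minimum of a cost function.
def cost(x):
    return (x - 2.0) ** 2  # minimum at x = 2.0, the desired state

def grad_cost(x):
    return 2.0 * (x - 2.0)

def control(x, gain=1.0):
    # Push the system "downhill" on the cost surface.
    return -gain * grad_cost(x)

def simulate(x0, steps=100, dt=0.1):
    x = x0
    for _ in range(steps):
        x += control(x) * dt
    return x

final = simulate(x0=-5.0)  # converges toward the cost minimum at 2.0
```

Real control problems replace this scalar cost with a function over the full system state, but the pattern is the same: the minimum encodes the goal, and control means steering toward it.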


Optimization problems can be enormously complex, so they’re frequently used for offline analysis – for example, to determine how well much simpler control algorithms will work. But from the get-go, Tedrake decided that the MIT team’s control algorithms would solve optimization problems on the fly. That required innovation on multiple fronts.


Team MIT completed seven of the eight tasks in 50:25 at the DRC Finals. (Video via MIT CSAIL YouTube page).

Pressure centers

Control of an autonomous robot can be divided, roughly, between two types of algorithms: a motion planner, which determines how a robot should go about executing a task, and a controller, which sends control signals to the robot’s joints during the task’s execution.
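That two-layer split can be sketched schematically. The following toy example (entirely hypothetical, in one dimension) shows a planner producing a sequence of waypoints and a controller turning tracking error into commands at each step:

```python
def plan(start, goal, n=5):
    # Motion planner: decides *how* to execute the task -- here, just a
    # straight-line sequence of waypoints (a stand-in for a real planner).
    return [start + (goal - start) * i / n for i in range(1, n + 1)]

def control_step(position, target, gain=0.5):
    # Controller: converts the current tracking error into a command
    # (a stand-in for the signals sent to the robot's joints).
    return gain * (target - position)

position = 0.0
for waypoint in plan(0.0, 1.0):
    for _ in range(20):           # controller runs at a much faster rate
        position += control_step(position, waypoint)
```

The nested loops reflect the usual timing: the planner runs occasionally, while the controller runs many times per planned step.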

When a bipedal robot takes a step, its foot strikes the ground at a number of different points, which experience different forces over time. A function that factors in all those forces would be difficult to optimize, but it becomes much more tractable if the forces are treated as acting on each foot at a single point.


The MIT researchers found a way to generalize that approach to more complex motions in three dimensions. So their planner also factors in contacts between the surrounding environment and the robot’s arms – and even the objects the robot is manipulating.

Further, the planner considers the forces exerted by those contacts in six dimensions rather than three – adding three rotational components (torques) to the standard three linear forces. It also factors in environmental constraints, such as avoiding collision with nearby objects or keeping the objects the robot is manipulating within view of its laser rangefinder.
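In robotics this six-dimensional quantity is commonly called a wrench: three linear force components stacked with three torque components. A small sketch (illustrative values, not the team's code) of collapsing several contact forces into one wrench about the origin:

```python
import numpy as np

def wrench(points, forces):
    # Combine several contact forces into one 6-D wrench about the origin:
    # net linear force F, plus net torque tau = sum of p_i x f_i.
    F = np.sum(forces, axis=0)
    tau = np.sum(np.cross(points, forces), axis=0)
    return np.concatenate([F, tau])

# Two vertical contact forces applied at points offset along the x-axis.
points = np.array([[0.1, 0.0, 0.0], [-0.1, 0.0, 0.0]])
forces = np.array([[0.0, 0.0, 60.0], [0.0, 0.0, 40.0]])
w = wrench(points, forces)  # 6-vector: [Fx, Fy, Fz, tau_x, tau_y, tau_z]
```

Because the two forces are unequal, the net wrench carries a nonzero torque even though both forces point straight up – exactly the rotational information a three-dimensional force model would discard.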

Finally, it lumps all these factors together into one big optimization problem. So rather than planning points of contact and then calculating the resulting forces, the algorithm chooses just those points of contact that minimize displacement of the robot’s center of gravity, while still accommodating environmental constraints.
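A drastically simplified, hypothetical version of that idea – choosing a contact placement that minimizes center-of-gravity displacement while respecting an environmental constraint – can be written as a one-dimensional search (the real planner solves a much larger continuous optimization, not a grid search):

```python
# Toy version of the one-big-problem idea: pick a foot placement that
# minimizes displacement of the center of gravity (com), subject to a
# hypothetical environmental constraint.
com = 0.3
candidates = [i * 0.01 for i in range(-50, 101)]  # candidate placements (m)

def feasible(x):
    # Environmental constraint (made up): an obstacle blocks [0.2, 0.5].
    return not (0.2 <= x <= 0.5)

# Objective: squared CoG displacement if the robot steps to x.
best = min((x for x in candidates if feasible(x)),
           key=lambda x: (x - com) ** 2)
```

The key point the sketch preserves is the ordering: the contact point is an optimization variable chosen jointly with the objective and constraints, rather than fixed first with forces computed afterward.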