September 22, 2014

As a result of a massive restructuring announced in July 2014, Microsoft has laid off its robotics team, which was part of Microsoft Research.

Microsoft has yet to announce the move publicly, but robotics team leader Ashley Feniello announced the shutdown on his blog and on Twitter, tweeting, “sadly, the Microsoft robotics team has been shut down. My card key stops working tomorrow afternoon [Sept. 19].”

This was part of a second round of layoffs that occurred on Sept. 18 in which the Redmond, Wash.-based company eliminated 2,100 jobs as part of CEO Satya Nadella’s plan to cut 18,000 positions.

Some of Microsoft’s ongoing robotics projects included the “Institute for Personal Robots in Education,” a joint effort with Georgia Tech and Bryn Mawr College that provided resources and robot kits for use in computer science education, and “Human-Robot Interaction Research” conducted at eight universities and other labs around the world.

Feniello also tweeted a link to the “last bit of Microsoft robotics research,” a paper on program synthesis for manipulation tasks (PDF).

Here’s more about what the Microsoft robotics team focused on: “The robots are coming! Actually, they are already here – in our homes, workplaces, transport systems, and even places of education and entertainment. Robots are increasingly among us, sharing our world, and it is important to understand how we can best interact in order to help humanity. This is the field of Human-Robot Interaction (HRI), where our research is currently primarily focused. We have also actively researched the use of robots as a compelling context for teaching such subjects as beginner computer science and programming.”

A Once Promising Future for Microsoft Robotics

With Microsoft’s robotics team, the business world has claimed yet another promising robotics entity. Just one month earlier, in August 2014, Unbounded Robotics, an RBR50 company known for its UBR-1 service robot, shut down due to issues with its “Willow Garage spin off agreement that prevents us from raising series A investment.”

While we now won’t get to see any new developments from Microsoft’s robotics division, the company will continue to shape the robotics world. Its Kinect sensor, originally designed for the gaming industry, has been embraced as a cheap and reliable way to give robots the depth sensing and object detection they need to map and navigate their surroundings. The now-defunct Willow Garage even sold a $500 open-source robotics kit that incorporated the Kinect, while the previous, non-Kinect version cost $280,000.

In February 2012, Robotics Business Review posted an in-depth feature, “3D Sensing: Kinect-ing Robots to Their Environment,” on the sensor’s groundbreaking promise for robotics. Here is how we described Kinect’s potential robotics applications:

Kinect’s low price, simple mechanical and electrical packaging, and interface quickly attracted hobbyists and researchers looking for interesting ways to interact with robots, and interesting ways for robots to interact with their environments. Robotics developers have already demonstrated platforms that use Kinect for navigation, control, and interfacing with operators.

One of the most compelling uses of Kinect is terrain mapping and obstacle avoidance. All autonomous (or even semi-autonomous) mobile platforms must have some way of detecting features in their environment so they can safely move around within it. Kinect’s 3D camera can generate a point cloud, similar to those produced by arrays of laser-based LIDAR systems, that may be used to create a map of an environment, including such things as walls, doorways, and desks in an office. The 3D camera can also detect features of both indoor and outdoor terrain, such as stairs, boulders, and inclines. A robot can then use this information to determine a clear path to follow. Additionally, Kinect can be used for obstacle detection, both to avoid fixed obstacles, such as furniture, and to react to those that suddenly appear, such as a person stepping into the robot’s path.
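To make the mapping idea concrete, here is a minimal sketch (ours, written for illustration rather than drawn from any Kinect SDK) of the standard back-projection step: each depth pixel is converted to a 3D point with the pinhole camera model, and nearby points are flagged as potential obstacles. The intrinsic parameters are approximate, community-published calibration values for the original Kinect, so treat them as assumptions.

```python
import numpy as np

# Approximate Kinect depth-camera intrinsics (assumed calibration values,
# not official specifications).
FX, FY = 594.2, 591.0   # focal lengths in pixels
CX, CY = 339.5, 242.7   # principal point in pixels

def depth_to_point_cloud(depth_m: np.ndarray) -> np.ndarray:
    """Back-project an (H, W) depth image in meters into an (N, 3) point cloud.

    Pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy.
    Pixels with no depth reading (0) are dropped.
    """
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_m
    valid = z > 0
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    return np.stack([x[valid], y[valid], z[valid]], axis=-1)

def obstacle_mask(points: np.ndarray, max_range: float = 2.0) -> np.ndarray:
    """Flag points closer than max_range meters as obstacles."""
    return points[:, 2] < max_range

# Example with a synthetic depth frame (a flat wall 1.5 m away):
depth = np.full((480, 640), 1.5)
cloud = depth_to_point_cloud(depth)
print(cloud.shape, obstacle_mask(cloud).sum(), "obstacle points")
```

In a real system, the resulting cloud would feed a mapping or path-planning pipeline rather than a simple range threshold.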

Another of Kinect’s gaming capabilities, gesture control, is also applicable to robotics. A person might use Kinect to control a mobile robot platform, using a standardized set of gestures to instruct it to “stop,” “go forward,” “turn left,” and so forth. In many ways, this is the next iteration of the wireless supervisory control pioneered by developers using the Nintendo Wiimote to achieve a similar control interface. But significantly, Kinect enables more complex interactions than the Wiimote. For example, a humanoid robot might be trained to follow a certain pattern of movements by using Kinect to watch the operator perform the sequence first. Kinect would enable the robot to recognize the person’s limbs, head, torso, and even objects held by the person, such as a pointer, something that is currently not possible with the Wiimote and other remote controls.
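As a rough sketch of how gesture commands might be derived from skeleton tracking, the following toy classifier maps head and hand positions to drive commands. The joint names, coordinate conventions, and thresholds are invented for illustration; they are not the Kinect SDK’s actual skeleton API.

```python
import numpy as np

# Hypothetical skeleton frame: joint name -> (x, y, z) in meters, in a camera
# frame with +x to the right and +y up. Joint names are illustrative only.
Skeleton = dict

def classify_gesture(joints: Skeleton) -> str:
    """Map a single skeleton frame to a drive command using simple geometry."""
    head = np.array(joints["head"])
    lhand = np.array(joints["left_hand"])
    rhand = np.array(joints["right_hand"])

    if lhand[1] > head[1] and rhand[1] > head[1]:
        return "stop"            # both hands raised above the head
    if abs(lhand[0] - head[0]) > 0.5:
        return "turn_left"       # left arm extended out to the side
    if abs(rhand[0] - head[0]) > 0.5:
        return "turn_right"      # right arm extended out to the side
    return "go_forward"          # default: no gesture detected

frame = {"head": (0.0, 1.6, 2.0),
         "left_hand": (-0.7, 1.2, 2.0),
         "right_hand": (0.1, 1.0, 2.0)}
print(classify_gesture(frame))  # -> "turn_left"
```

A production system would smooth over many frames before issuing a command, since single-frame geometric rules like these are noisy.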

Yet another potential robotics application is facial-expression recognition. Kinect is designed to differentiate (though not necessarily interpret) expressions out of the box. If programmers can access that capability and expand it with recognition of emotions, coupled with an “emotional” response, it could result in robots that are more social and thus better able to interact with their human operators.
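A toy sketch of the “recognize an expression, respond emotionally” loop described above might look like the following; the expression labels and canned responses are invented for illustration and do not come from any Kinect API.

```python
# Hypothetical mapping from a detected facial expression to a robot "mood"
# and a spoken response. Labels are illustrative assumptions.
RESPONSES = {
    "smile":   ("happy",     "That looks great!"),
    "frown":   ("concerned", "Is something wrong?"),
    "neutral": ("calm",      "How can I help?"),
}

def respond(expression: str) -> tuple[str, str]:
    """Return (robot_mood, utterance) for a detected facial expression."""
    return RESPONSES.get(expression, ("calm", "How can I help?"))

print(respond("frown"))  # -> ('concerned', 'Is something wrong?')
```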

Going forward, Kinect appears well positioned to continue playing an important role in computer vision, mobility, and human-robot interaction.