December 06, 2013

As the newly minted startup columnist for Robotics Business Review, I’ve been out beating the streets for some of the coolest, most interesting robotics startups out there.

In the course of my reporting, I recently heard a rumor about a new Andy Rubin-led robotics venture within Google, which was confirmed in a New York Times article by John Markoff.

Under the leadership of Rubin, the former Android head, the company has acquired at least seven robotics startups–and sources tell me that at least two more companies are in the pipeline.

The story has been in heavy rotation among mainstream and tech media alike. But even though the cat is out of the bag, Google–as usual–isn’t saying much about what exactly it’s up to. The new division has an aura of secrecy similar to that of Google X back in 2011.

I spoke with a handful of insiders at Google who say they were as surprised by the news as those outside the company, despite being well positioned for advance notice of such an initiative.

As with Google X, the structure and purpose of the new robotics push is largely still TBD. Within the acquired companies, it’s clear that the details are still being worked out internally, as well. Operations such as payroll are still being handled by some of the startups themselves, and many of the companies seem to be happily continuing the work they’d been doing prior to the acquisition. The mothership has, thus far, not set an agenda or done much in the way of encouraging interaction between the teams.

The robotics dream team that Google is assembling incorporates aspects of the most exciting areas of the field today, including computer vision, high-precision automation, and humanoid design. The major link between the firms is that these are all software-intensive robotics companies:

Schaft, Inc., a Japanese firm, has been developing compact robots with more muscle than their predecessors. The Schaft robots’ strength is due to specialized motors–but also to “advanced bipedal control algorithms,” according to a report in IEEE Spectrum earlier this year. In other words, these are robots that work smarter, not harder, to stand up to greater force.

Meka Robotics, which is joined by Redwood Robotics (a joint venture between Meka, Willow Garage, and SRI), is another example of the smarter approach to humanoid robotic design. Through the use of its M3 real-time control system, Meka is developing adaptable, responsive hardware elements, ranging from humanoid heads to hands, grippers, arms, and manipulators.

Bot & Dolly, which was acquired along with its founders’ design firm Autofuss, has a similar software-driven secret sauce, BD Move, that leverages existing hardware technology (KUKA industrial robots) to enable high-precision movement control and automation. (Autofuss had previously handled the Nexus product line launch for Google.)

Industrial Perception (IPI), a leader in computer vision, was working on solving problems in the manufacturing arena, including collision avoidance, mixed-case handling, and bin picking. These tasks require sophisticated image processing to enable rapid image recognition, depth perception, and more.

Holomni, the most stealthy of the announced acquisitions, has been described as making high-tech wheels. What it looks like, to me, is directly controllable casters, which would allow for high-precision, 360-degree movement. However it’s being done, it’s a safe bet that the company is using a software-intensive approach to its powered wheels.

At least two others are in the works, according to one source with knowledge of the new company. Rumors suggest that DARPA-darling Boston Dynamics–which has demonstrated a wide range of robotic systems–could be on the list of potential acquisitions, but employees at the company declined to comment.

It’s not likely that all of the companies will be integrated into the new division in exactly the same way. First of all, there’s physical space to be considered. The group’s headquarters–said to be on the second floor of an R&D building in Palo Alto–isn’t ideally suited for the heavy robotics work of a company like Bot & Dolly. That doesn’t mean other functions–such as software teams, management, and business staff–won’t be relocated there. That’s been the experience of previous big-equipment buys like Makani Power, which continues to occupy its pre-acquisition headquarters on a former naval base in Alameda, Calif.

Secondly, each of the companies is likely to have expertise that is of value to many other aspects of Google’s business. As nearly all industry watchers agree, Google’s big revenue opportunities are clearly linked to logistics and manufacturing. But it seems just as likely that the robotics technologies that the company has been snapping up will play an important role in many projects throughout the sprawling organization. Google, like many others, may simply be placing a bet that the future of computing is inextricably linked with hardware and physical goods.

Today, the “Internet of Things” world is booming; we’re seeing dramatic growth in the number of (and interest in) network-connected devices, at both the consumer and the corporate level. Everything from smartphone apps to wearable electronics to real-time data platforms for industrial environments is evidence of this growing trend. Consumer hardware is undergoing a major revolution, with a focus on niche markets, customizable devices, and services alongside the hardware.

Robotics sits squarely in the middle of these trends, playing a critical role in blending the digital and physical worlds. Certainly, the opportunity is there for Google to use its new division to push advanced manufacturing as Apple has been doing with its own $10.5 billion robotics and manufacturing technology push, and to transport and deliver physical goods as Amazon is looking to do with its drone delivery service ambitions. And these will likely be part of the company’s roadmap.

But as I see it, the acquisitions are more broadly positioned than that, and Google’s vision of how it will approach these markets could be tied into other, broader shifts in how we interact with technology.

Bot & Dolly was one of the most exciting startups–in any industry–that I’d spoken with recently. In part, it’s hard not to get excited by the atmosphere at the company’s offices, which are a joyful mix of art, technology, and urban culture. But that wasn’t all. The company’s outsider approach to technology also sent my spidey-sense tingling: its intuitive, integrative use of both hardware and software is something that I increasingly see as a hallmark of future-facing technologies.

And that’s where Google is likely to get the most mileage out of its acquisitions.

There’s an enormous amount of excellent technology in the marketplace today, and Bot & Dolly’s products suggested to me a future in which even someone like me–a writer, not a roboticist–could put that technology to use almost effortlessly. It seems perfectly logical that a company like Google would see the potential for such a technology.

Another bit of evidence is that several of the companies that Google has acquired–Schaft, Meka Robotics, and Redwood Robotics–are leaders in the humanoid robotics space. There are certainly important applications for human-scale and human-safe robotics within logistics and manufacturing. But the thinking and design behind such devices is also broadly applicable to a shifting model of how we interact with technology: they blur the line between human and technological systems.

It may sound laughably futuristic in some circles, but consider the ways in which we’ve already blurred those lines in the information and communication technology space.

Wireless broadband, location-aware and context-sensitive technologies, and new interfaces such as touchscreens and voice recognition have already allowed us to tap into a wealth of information from our peers and expert sources on a moment’s notice. Is it much of a stretch to imagine that hardware will make similar gains in the not-too-distant future?

And who better to lead it than the man behind Android, which already sits at the intersection of software, hardware, and their human users?