Controlling a robot with a smartphone may sound like child’s play, but to James Kuffner Jr., it’s serious business. Kuffner, who is currently on leave as an associate professor at Carnegie Mellon’s Robotics Institute in Pittsburgh to work for Google as a research engineer, says the advent of smartphones and faster wireless connections is making the notion of cloud-based robotics a reality.
Just as thin clients move computing power from a laptop into cloud-based servers, robots also no longer need to carry around information or handle processing-intensive tasks such as vision recognition internally, which means they can be built lighter and at less cost. Google dramatically demonstrated this fact last year when it set eight self-driving Toyota Priuses loose on California’s heavily trafficked roadways.
Together, the cars clocked more than 140,000 miles using video cameras, radar sensors, and a laser range finder, along with Google Maps and data processing, which allowed them to “see” surrounding traffic. The project’s ultimate goal is to study ways to prevent traffic accidents and reduce carbon emissions, as well as change the way in which cars are used.
“This really hasn’t been something that has been taken seriously, because connectivity to a cloud requires a lot of infrastructure and, at least years ago, it was thought of as being too expensive,” says Kuffner. But now, he says, cloud services from companies like Google, Amazon, and Microsoft are making possible the idea of linking robots with remote servers via reliable, high-speed, high-bandwidth networks.
As Kuffner explains, “Mobile devices with high bandwidth and great connectivity and strong signals anywhere are something people now take for granted. So it totally makes sense that a robot could have a wireless antenna cheaply made and could offload processing or data storage to cloud services.”
Kuffner believes storing data on the cloud and having robots communicate with a remote infrastructure will also help them achieve a higher level of performance. That vision is shared by Jason Milgram, CEO of cloud platform provider Linxter, based in Cooper City, Fla. Milgram says the public cloud model, as opposed to a private network, will greatly reduce the costs associated with robotics-based applications.
“Cloud robotics allows a robot’s brain to be distributed to one or more disparate computer systems running on the public Internet,” says Milgram. “This, in turn, allows the footprint of the actual robot, an electromechanical machine, to be smaller, by enabling more of its problem-solving abilities to be done [or] computed remotely.”
Milgram recently demonstrated how using the cloud instead of a private network greatly reduces the costs associated with robotics. He developed LinxterBot, a low-cost cloud robotics project built using off-the-shelf components. Users can remotely control LinxterBot's movement while watching through its mounted webcam. In addition, Milgram says the distributed approach gives him greater flexibility in the programmatic design.
For example, he notes that he was able to “more easily make dynamic changes to its brain functions without needing direct access to the actual robot and without requiring its downtime or direct reconfiguration.”
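Milgram's "remote brain" idea can be sketched in a few lines. The sketch below is hypothetical and not based on LinxterBot's actual design: the robot's behavior lives in the cloud as plain configuration, so an operator can change it without physical access to the machine and without taking it offline. The cloud store is simulated here with an in-memory dict; a real system would fetch it over HTTPS from a hosted service.

```python
# Simulated cloud-hosted configuration; a real system would fetch
# this from a remote service rather than a module-level dict.
cloud_config = {"speed": 0.5, "turn_rate": 0.2}

class Robot:
    """Minimal robot whose behavior parameters live off-board."""

    def __init__(self):
        self.config = {}

    def sync_brain(self):
        # Pull the latest behavior parameters; no reboot, no downtime.
        self.config = dict(cloud_config)

    def drive_command(self):
        return ("drive", self.config.get("speed", 0.0))

robot = Robot()
robot.sync_brain()
assert robot.drive_command() == ("drive", 0.5)

# An operator edits only the cloud copy; the robot picks it up
# on its next sync, with no direct access to the hardware.
cloud_config["speed"] = 0.8
robot.sync_brain()
assert robot.drive_command() == ("drive", 0.8)
```

The design choice worth noting is that the robot never stores its parameters authoritatively; the cloud copy is the single source of truth, which is what makes fleet-wide changes cheap.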
Not quite ready for primetime?
While industry observers see the potential for cloud-based robotics applications in a variety of industries and areas, including household, industrial, medical, and defense, the concept is not ideal for everything. Robots risk losing their Internet connection and causing downtime. Kuffner says that any bandwidth-intensive functions, such as real-time analysis of complex visual information, are also not ideal for a cloud-operated robot.
Randy Arthur, CTO of Falls Church, Va.-based CSC’s Trusted Cloud Services practice, says people need to think long and hard about using a cloud robotics model for mission-critical applications. “Robotics lives at the end of the network and has to report back to the mother ship, if you will.”
He says there is a “great reluctance” to forge ahead with robots on centrally controlled networks for applications that use time-sensitive processes as well as those requiring monitoring, feedback, specific temperatures, and flow volumes. This is because brief outages can and do occur.
If the network connection is lost, even for a few seconds, “it could be catastrophic,” Arthur notes. “So the question is how much of a network outage can you tolerate?”
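Arthur's question, how much of a network outage can you tolerate, maps naturally onto a watchdog pattern. The sketch below is illustrative, with hypothetical names and thresholds: the robot accepts cloud control only while heartbeats from the server are fresh, and degrades to a conservative on-board behavior once the silence exceeds its tolerance.

```python
class CloudLink:
    """Watchdog deciding between cloud control and a safe local fallback."""

    def __init__(self, max_silence_s=2.0):
        # Maximum tolerable gap since the last message from the server.
        self.max_silence_s = max_silence_s
        self.last_heartbeat = 0.0

    def heartbeat(self, now):
        # Called whenever a message arrives from the cloud.
        self.last_heartbeat = now

    def mode(self, now):
        # Cloud control only while the connection is fresh; otherwise
        # degrade to conservative on-board behavior (e.g., stop moving).
        if now - self.last_heartbeat <= self.max_silence_s:
            return "cloud-controlled"
        return "safe-local-fallback"

link = CloudLink(max_silence_s=2.0)
link.heartbeat(now=10.0)
print(link.mode(now=11.0))   # prints "cloud-controlled" (1.0 s of silence)
print(link.mode(now=13.5))   # prints "safe-local-fallback" (3.5 s of silence)
```

For a mission-critical system the fallback branch, not the happy path, is where the engineering effort goes: the robot must be able to reach a safe state using only what is on board.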
Widespread deployment of real-world cloud robotics applications hasn’t arrived yet, notes Milgram. In existing cases, private clouds tend to be used. As an example, he says, the U.S. military uses robots to carry wounded soldiers from a battle zone to a semi-protected area, from which they can be moved for treatment, sparing medics this dangerous task. But such private, proprietary networks are an expensive proposition.
“The big change will be using distributed computing in the cloud, or the public Internet,” he says. Applications designed for a public cloud will come about as the result of hobbyists creating new ideas and experimenting with sensors and “all the input they can receive.”
Kuffner sees big potential for cloud robotics in the consumer and education markets, which already have robotic vacuum cleaners like the iRobot Roomba and robot kits like Lego Mindstorms. While some of those kits are very expensive, he says, if you had a smartphone or tablet, you would already own the most costly parts needed to build a robot, since the camera, speaker, microphone, accelerometer, touchscreen, wireless antennas, and battery are prepackaged in a phone or tablet.
“The brains of the robot would be this smartphone with great connectivity to cloud services,” Kuffner says. Indeed, iRobot recently demonstrated how its AVA robot could work in conjunction with an Android OS tablet.
Some robotics applications for use in education, home, and office, along with toys and entertainment, are becoming cheaper, he adds. Such applications, he says, “have always been possible, but they are becoming more practical because wireless communications is ubiquitous and the costs are coming down. So we’ll see more of these novel applications, and I believe the demand for cloud robotics will increase.”
What needs to happen?
Perhaps not surprisingly, as far back as 2007 Microsoft co-founder Bill Gates also envisioned robots playing a significant role in areas including medicine, the military, and toys in the not-too-distant future. However, in the January 2007 issue of Scientific American, Gates noted several obstacles standing in the way.
“Robotics companies have no standard operating software that could allow popular application programs to run in a variety of devices,” he wrote. “The standardization of robotic processors and other hardware is limited, and very little of the programming code used in one machine can be applied to another. Whenever somebody wants to build a new robot, they usually have to start from square one.”
Heather Knight, a CMU roboticist, likewise believes a set of common standards is needed before cloud robotics can have a significant impact.
“There are major developments that need to take place before any robot will automatically and autonomously improve its own skill set in a dramatic and unpredictable way,” wrote Knight in an email interview. “To have even a smidgen of impact at that level, common hardware platforms would be very helpful, but adoption of common software and architectures would be required, a holy grail that has been evading the field for ages, perhaps because [it is] just awaiting the right commercial application.”
Arthur at CSC questions how a commercial model can be developed for robotics to fit into the cloud paradigm, since traditionally, the idea of cloud computing is to turn capital expenses into operational expenses.
“How would you actually recover the costs as a provider, and how would you consume services as a consumer financially?” he asked. “The whole idea of cloud computing is it’s very easy to consume and there are no strings attached … when you’re done you can walk away and the provider has to deal with the expense.”
Kuffner, naturally, foresees a lot of development taking place on Android devices, since it is an open ecosystem for developers of smartphones and tablets. “The code is open source, and it’s easy to create your own applications … and the same is true for robots, we believe.”
Echoing Knight, he says, “The industry doesn’t yet have many standards. But Google would like [to see] open standards so … hobbyists can develop their niche applications and make them available.”
Google engineers are working closely with the Android team so that phones and tablets can process sensor data and communicate with the company’s services, he says. The ability to access common smartphone and tablet features, such as the GPS, compass, and accelerometer, is important if a roboticist wants to write an application to, say, identify the people around it, he says.
Multirobot systems can use the cloud to efficiently deliver goods, monitor and address environmental issues, or even move around to maximize cell phone coverage as usage patterns change, says Knight, who is also founder of New York-based Marilyn Monrobot Labs, which creates socially intelligent robot performances.
What the future holds
Knight sees the use of friendly robots in the everyday environment as the next frontier for robotics.
“Partnering the cloud with new advances in machines’ social and general intelligence is already throwing open doors for exciting and lucrative new possibilities with robotics,” she says.
These dual capacities, information savvy and charismatic communication, “will fuel a new friendly robot revolution that will forever impact the way we interact [with] and grow up with machines.”
After all, she adds, people engage differently with physically present characters and are more likely to adopt a technology they enjoy interacting with. “This new class of machines can sense and act on the world, exploring shared autonomy (e.g., safer cars), charming us in our workplaces, schools, and homes.”
Milgram at Linxter agrees. “One important use that interests me is on the home automation front with robotics,” he says. “I think there’s a place for cloud robotics helping people in managing their household and all the tasks and chores that exist.”
Artificial intelligence applications such as voice and object recognition can also play a significant role in a cloud-based robotics model, industry observers say, since they benefit dramatically from cloud data and computation.
“Machine learning requires a massive set of examples to build and train its accuracy and understanding, and the cloud is already helping us perform processing in real time, from recognizing images to interpreting voice,” says Knight.
A robot could ask an online database for a match if it doesn’t recognize a new object, such as a cup, she says. Already, we have the ability to text people using voice recognition, which works if there is 3G or Wi-Fi, so the recognition is being done over the network.
“Off-board processing lets other computers do some of the heavy lifting” and can allow robots to be lighter, cheaper, and more agile, she says.
In addition, Google’s new Google Goggles service lets users take a picture of something and transmit the image to a cloud server, which compares it with millions of images indexed in Google’s databases and sends back more information.
Kuffner thinks it would be useful to extend the same capabilities to a robot. “So if you ask the robot to bring you a can of Coke, it would work really well because it scans its surroundings and delivers it to you. This matching of images to do that could be done in the cloud.”
If the robot gets something wrong, it could also be taught how to identify something, and that information could potentially be published in a cloud service, creating a shared knowledge base so that the next robot will not make the same mistake, he says.
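The shared knowledge base Kuffner describes amounts to publishing a human correction to a store that every robot reads. The sketch below is hypothetical, with the cloud database simulated as a shared dict: once one robot is taught a label, the whole fleet can use it immediately.

```python
# Stand-in for a cloud-hosted label database shared by the fleet;
# a real system would read and write it over the network.
shared_labels = {}

class FleetRobot:
    """Robot that identifies objects via the fleet-wide knowledge base."""

    def identify(self, image_signature):
        return shared_labels.get(image_signature, "unknown")

    def learn_from_human(self, image_signature, label):
        # Publish the correction so no other robot repeats the mistake.
        shared_labels[image_signature] = label

robot_a, robot_b = FleetRobot(), FleetRobot()
assert robot_b.identify("sig-42") == "unknown"

robot_a.learn_from_human("sig-42", "coffee cup")  # human corrects robot A
assert robot_b.identify("sig-42") == "coffee cup" # robot B benefits at once
```

This is the mechanism behind the fleet-wide upgrade Knight describes next: because the knowledge lives in one place, teaching one robot is teaching all of them.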
Knight agrees. “I have to believe there is the ability for cloud robotics to enable local networks of robots to upgrade their skills instantly,” she says, “just as cloud-based applications and upgrades can be downloaded to a company’s entire inventory of desktop PCs.”
The ability to reduce technical complexity and cost will enable more people to participate in the field of cloud robotics, Milgram says, since traditionally, in computing, as something is made easier and more accessible, more developers and engineers start to use it. That, in turn, leads to innovation.
“So as programming evolved from traditional assembly language to higher-level languages … more people entered the [computing] field and did more innovative things,” he says. “The same will happen with cloud robotics.”
Kuffner is equally effusive. “We’re scheduled at Google to either create something new that will have a hundred million users or make billions in revenue or something new that will change the world. I’m hoping if this cloud robotics initiative takes off, it will do all three.”