Company: Carnegie Robotics
Founders & Principals: John Bares, President and CEO; David LaRose, Director of Software Engineering; William Ross, Principal Engineer; Eric Meyhofer, Bryan Salesky, Andrew Bagnell, Alonzo Kelly, Daniel Beaven, CFO
Products: MultiSense-S7, MultiSense-SL, MultiSense-S21, and Smooth Pose (a GPS-denied positioning system).
What it is: Perception sensing system for out-of-the-factory applications
Market niche: Outdoor (and harsh environment) robotics applications, ranging from agriculture to mining to military applications
Industry Partnerships: RowBot, Carnegie Mellon National Robotics Engineering Center (NREC)
Contact: Email all inquiries to [email protected]
Why it was founded: During his 13-year tenure at the helm of the NREC, John Bares helped birth dozens of advanced prototypes out of NREC's labs. But he "always had an itch to go the next step" and build a commercial product. His dream was to push robotics out of the lab and the factory and into the world. In 2010, he saw an opportunity to do just that.
RBR’s Take: Carnegie is tight-lipped about its funding and partnerships, though it does disclose an ongoing strategic relationship with Bares' former employer, NREC. I originally began researching Carnegie on a hunch it was on the list of Google's acquisitions, and the company's work in dynamic, outdoor environments and self-driving vehicles, along with its focus on computationally intensive problems, makes it a good candidate. However, Bares denies that the company is working with the giant.
Still, I'm surprised. A system like Carnegie's would be a huge boon to any number of Google projects, including the company's self-driving cars initiative, Glass, and whatever else it has up its sleeve with the recent robotics acquisitions. Carnegie is likely to have more competitors in the space soon, but for now, it's leading the pack in high-speed, high-resolution computer vision outside the safety of four walls.
How Carnegie Got Started
Like many other robotics startups, Carnegie owes its start to the recent, rapid improvement in both the economics and availability of sensors and chips. Cameras, lasers, and other perceptual sensors have proliferated, shrinking in size, volume, and weight while rapidly improving in performance and energy use. That has made it possible for companies to build low-cost, high-performance systems for new markets.
In the case of Carnegie, that market is beyond-the-factory robotics applications. "Robotics have done incredible things in the factory," CEO and Founder John Bares said. But "since the late 80s, we've been pushing outside the factory, and it's hard. Conditions change – weather, lighting – and there are people."
To work in this environment, a robot needs a lot more information. In other words: access to more sensor-supplied data. The changes in the market for core hardware technologies created just the opportunity he'd been looking for.
Inside the MultiSense System
Carnegie's MultiSense system uses cameras, and sometimes lasers, along with chips that cost in the $200 range and draw just 10 watts of power, Bares said. The system processes 20 million range points per second directly on the vision system's chips (rather than relaying the data to an external computer for processing), using proprietary algorithms – its secret sauce – to crunch the data into very detailed 3D images.
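Carnegie has not published its algorithms, but the basic principle behind stereoscopic ranging is simple: the same point in the scene lands at slightly different pixel positions in the left and right cameras, and that offset (the disparity) determines its distance. A minimal sketch of the geometry, using made-up camera parameters rather than MultiSense specifications:

```python
# Illustrative stereo triangulation: depth from pixel disparity.
# The camera parameters below are hypothetical, not MultiSense specs.
FOCAL_LENGTH_PX = 800.0   # focal length expressed in pixels
BASELINE_M = 0.07         # separation between the two cameras, in meters

def depth_from_disparity(disparity_px: float) -> float:
    """Depth in meters for a pixel pair offset by `disparity_px` pixels."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a valid match")
    return FOCAL_LENGTH_PX * BASELINE_M / disparity_px

# Nearby objects produce large disparities; distant ones, small disparities.
print(depth_from_disparity(56.0))  # 1.0 (meters)
print(depth_from_disparity(7.0))   # 8.0 (meters)
```

The hard part at 20 million points per second is not this arithmetic but reliably matching pixels between the two images in changing outdoor lighting, which is presumably where Carnegie's proprietary on-chip algorithms do their work.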
Carnegie’s MultiSense-S7 is a compact and accurate 3D range sensor
Because Carnegie is using stereoscopic vision, rather than infrared detection (used by Microsoft's Kinect, OmniVision, and Industrial Perception), to create those images, Carnegie's system works outdoors as well as it does indoors. Laser-based vision systems, another competitive area, require the use of moving parts and take 15 times longer to produce a complete image than Carnegie's stereoscopic system.
But really, it's the rapid, on-board production of highly detailed images that sets Carnegie apart from others in the industry. By providing high-resolution, low-latency mapping of the physical environment, Carnegie's technology enables robots to begin working in far more dynamic environments. For example, Carnegie's customers may use the data to identify changing features of an environment (say, a human moving across a landscape) or to help a robot grasp and manipulate objects without knowing their size or shape ahead of time.
If you saw the DARPA Robotics Challenge Trials in December 2013, you got a glimpse of Carnegie's technology. Atlas, the Boston Dynamics-constructed robot supplied by DARPA to six of the competing teams, sported Carnegie's stereoscopic vision system. In the trials, Carnegie's system was what enabled competitors to identify and turn a valve – one of the challenge's assigned tasks.
"Our system is on the head of six of the eight top teams that finished; we're pretty proud of that," Bares said.
You'll notice, however, if you watch videos of the DARPA bots, that they move quite slowly. While Carnegie's system is pushing into real-time processing, these are computationally intensive tasks. "The raw data is there now, but the software is going to take some time," Bares said.
With improvements in the software, Bares said the possibilities for its system are huge; collecting 20 million points per second is enough, he said, that if you get your algorithms right, "you can do things like determine human intent." Is someone about to step into the path of the robot, based on their position and trajectory? It's a question Bares can't answer yet, but it's a direction he's eager to pursue.
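As a toy illustration of the kind of question Bares describes – will a person cross the robot's path? – one could linearly extrapolate an observed position and velocity over a short horizon. This is a deliberately simplistic sketch with invented geometry, not Carnegie's approach:

```python
# Toy pedestrian-intent check: linearly extrapolate an observed track and
# test whether it enters a corridor the robot is about to traverse.
# All geometry here is hypothetical, not drawn from Carnegie's system.

def will_cross_path(pos, vel, corridor_x, horizon_s=3.0, half_width=0.5):
    """True if a point at `pos` moving at `vel` (m/s) enters the vertical
    corridor centered at x = corridor_x within `horizon_s` seconds."""
    x, _y = pos
    vx, _vy = vel
    if abs(x - corridor_x) <= half_width:
        return True   # already inside the corridor
    if vx == 0:
        return False  # not moving toward the corridor in x
    # Time at which the track reaches the near edge of the corridor.
    edge = corridor_x - half_width if x < corridor_x else corridor_x + half_width
    t_enter = (edge - x) / vx
    return 0 <= t_enter <= horizon_s

# A pedestrian 2 m left of the robot's lane, walking toward it at 1 m/s.
print(will_cross_path(pos=(-2.0, 0.0), vel=(1.0, 0.0), corridor_x=0.0))   # True
# The same pedestrian walking away from the lane.
print(will_cross_path(pos=(-2.0, 0.0), vel=(-1.0, 0.0), corridor_x=0.0))  # False
```

A real system would have to estimate that position and velocity from noisy 3D range data in the first place, and would model far more than straight-line motion – which is presumably why Bares says the software "is going to take some time."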
Beyond the DARPA challenge, Bares said Carnegie is working on a variety of other applications where traditional positioning systems don't work. By relying on information in the physical environment, rather than GPS, to guide positioning, mining operations, for example, can make greater use of self-driving vehicles (of the mining sort, rather than the automobile sort) in dangerous environments.
Carnegie is also working on agricultural applications. Bares said its technology is being tested for sorting strawberry seedlings, and the company is partnering with Minnesota-based RowBot to enable highly targeted fertilizer application in cornfields. These are environments where fine environmental detail and fine motor skills have so far ensured that humans, not robots, do the job. Carnegie's systems could change that.