June 01, 2014      

If previous generations of robots were typified by durability, task repeatability and speed, the next generation, thanks to the addition of real-time embedded vision systems, will be known for nimbleness, flexibility, safety and greater precision.

This latest evolution is being ushered in by real-time smart cameras with three-dimensional and multispectral capabilities, brain-inspired algorithms, and powerful dedicated processors.

These exciting developments were examined from a practical perspective at the Embedded Vision Summit, held May 29, 2014 at the Santa Clara Convention Center in Santa Clara, CA.

The fourth such event, hosted by the Embedded Vision Alliance, opened to record attendance with 500 audience members, a 30 percent increase over last year. The goal of the Summits is to provide technical educational forums for product creators interested in incorporating visual intelligence into electronic systems and software. Leaders in the field tackled challenges and offered a generous peek at cutting-edge developments in this rapidly evolving industry.

According to Jeff Bier, president of Berkeley Design Technology, Inc. (BDTI) and founder of the Embedded Vision Alliance, the success of this year’s event and the breadth of industries represented are validation that embedded vision is on its way to becoming ubiquitous.

“The majority of the audience are either developing or planning to develop a system that utilizes embedded vision. And they are from just about every industry sector; it’s clearly a big wave,” Bier said.

The conference opened with a remarkable keynote on convolutional networks from Yann LeCun, Director of AI Research at Facebook and Silver Professor at New York University. ConvNets are a particular embodiment of the concept of “deep learning,” in which all the layers in a multi-layer architecture are subject to training.
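
To make the idea concrete, here is a minimal sketch in Python/NumPy of a single trainable convolutional layer, the kind of building block LeCun described. The shapes, names and random weights are illustrative assumptions, not material from the talk; in a real ConvNet a training loop would learn these weights along with every other layer’s.

    # Minimal sketch (illustrative, not from LeCun's talk): one convolutional
    # layer whose weights are ordinary trainable parameters.
    import numpy as np

    def conv2d(image, kernels, bias):
        """Valid 2-D convolution of a single-channel image with a filter bank.

        image:   (H, W) input
        kernels: (K, kh, kw) trainable filters
        bias:    (K,) trainable offsets
        returns: (K, H-kh+1, W-kw+1) feature maps after a ReLU
        """
        K, kh, kw = kernels.shape
        H, W = image.shape
        out = np.zeros((K, H - kh + 1, W - kw + 1))
        for k in range(K):
            for i in range(H - kh + 1):
                for j in range(W - kw + 1):
                    patch = image[i:i + kh, j:j + kw]
                    out[k, i, j] = np.sum(patch * kernels[k]) + bias[k]
        return np.maximum(out, 0)  # ReLU non-linearity

    # Toy usage: random weights stand in for values training would learn.
    rng = np.random.default_rng(0)
    maps = conv2d(rng.standard_normal((28, 28)),
                  0.1 * rng.standard_normal((8, 5, 5)),
                  np.zeros(8))
    print(maps.shape)  # (8, 24, 24)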

LeCun shared state-of-the-art research and applications driving everything from real-time 3D puppeteering to off-road mobile robotics. Facebook (which notably acquired facial recognition pioneer Face.com) and Google are both actively exploiting facial and object recognition technologies alongside broader machine learning capabilities, and they continue to represent the cutting edge in disruptive product development.

Bier sees embedded vision disrupting a broad range of industries, but those with fewer regulatory hurdles to overcome are advancing at a faster pace. Toys, mobile and other consumer devices are rapidly harnessing the power of embedded vision, what Bier refers to as the “software-defined sensor,” to make relatively low-cost devices featuring never-before-seen 3D vision and depth tracking synthesized with facial, gesture and voice recognition capabilities.

Vision-enabled Mobility

Image recognition combined with object detection and avoidance are the key capabilities tying embedded vision to the development of the robotics industry. More precise and reliable autonomous navigation continues to be a major force advancing new robotics applications.
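
For readers who want a concrete starting point, the sketch below uses OpenCV’s stock HOG-plus-SVM pedestrian detector, an off-the-shelf detection tool of this era, to flag obstacles in live camera frames. It is a minimal illustration under assumed conditions (a single webcam, people as the obstacle class), not the pipeline of any system shown at the Summit.

    # Minimal obstacle-detection sketch with OpenCV's built-in pedestrian
    # detector. Illustrative only; a deployed robot would fuse detections
    # with range sensors and run on dedicated embedded hardware.
    import cv2

    hog = cv2.HOGDescriptor()
    hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

    cap = cv2.VideoCapture(0)  # default camera; adjust the index as needed
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Each detection is a bounding box a motion planner could steer around.
        boxes, weights = hog.detectMultiScale(frame, winStride=(8, 8))
        for (x, y, w, h) in boxes:
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.imshow("obstacles", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()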

The global mobile robotics market was worth $6,249.6 million in 2012 and is expected to reach $14,202.2 million by 2019 due to increased usage of mobile robots in domestic, agricultural, medical and industrial applications. Although such technologies are key to developing safe navigation in warehouse environments, Bier suggests that the killer app may be driver safety.

Automotive manufacturers like BMW, Volkswagen and Toyota are already building vehicles with embedded vision and, as Bier points out, consumers are buying them.

“The automotive industry doesn’t dabble,” Bier says. “The technology has to be robust, reliable and low cost; the fact that they are integrating vision indicates the technology is there.”

Self-driving Vehicles: The Killer App?

The event’s second keynote presenter, Nathaniel Fairfield, Technical Lead of Google’s Self-Driving Car Team, placed the importance of embedded vision in robotic vehicles into context.

“Road accidents are the leading cause of death for Americans between the ages of four and 35. This is something we can fix,” Fairfield said.

Over 90 percent of those accidents are due to human error. Fairfield went on to describe how Google seeks to transform mobility by creating scalable virtual infrastructures rather than waiting for less flexible transportation infrastructures to be built.

By synchronizing precise, heavily annotated maps with lasers, radar, cameras and advanced planning software, the company aims to build a vehicle with full autonomy and no driver fall-back mode. A network of redundant systems under the hood will account for safety in cases of electrical malfunction.
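
As a toy illustration of that redundancy principle (the details of Google’s actual design are not public at this level, so everything here is hypothetical), the sketch below trusts a reading only when two independent sensor channels agree within a tolerance, and otherwise signals a fallback to a safe stop.

    # Hypothetical redundancy sketch; not Google's implementation.
    def fused_reading(primary, backup, tolerance):
        """Return an agreed value, or None to trigger a safe-stop fallback."""
        if primary is None or backup is None:
            return None                  # one channel has failed outright
        if abs(primary - backup) > tolerance:
            return None                  # channels disagree: trust neither
        return (primary + backup) / 2.0  # agreement: average the channels

    speed = fused_reading(primary=13.4, backup=13.6, tolerance=0.5)
    print("safe stop" if speed is None else "fused speed: %.1f" % speed)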

Fairfield noted that embedded vision is essential for capturing and identifying the characteristics these autonomous vehicles can’t sense with active sensors (e.g., the color of traffic lights or road signs).
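
A toy version of that color-identification task is sketched below using HSV thresholding in OpenCV on an already-localized traffic light crop. The threshold ranges, the file name and the assumption that the light has been cropped out of the frame are all illustrative; production systems are far more robust.

    # Illustrative sketch: classify the color of a pre-cropped traffic light
    # via HSV thresholds. Ranges and the crop step are assumptions.
    import cv2
    import numpy as np

    def light_color(bgr_crop):
        hsv = cv2.cvtColor(bgr_crop, cv2.COLOR_BGR2HSV)
        masks = {
            # Red hue wraps around both ends of OpenCV's 0-180 hue circle.
            "red": cv2.inRange(hsv, (0, 100, 100), (10, 255, 255))
                   | cv2.inRange(hsv, (160, 100, 100), (180, 255, 255)),
            "yellow": cv2.inRange(hsv, (15, 100, 100), (35, 255, 255)),
            "green": cv2.inRange(hsv, (40, 100, 100), (90, 255, 255)),
        }
        # Report whichever color lights up the most pixels in the crop.
        return max(masks, key=lambda c: int(np.count_nonzero(masks[c])))

    crop = cv2.imread("light_crop.png")  # hypothetical pre-localized crop
    if crop is not None:
        print("signal state:", light_color(crop))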

A whole new class of vehicle, not just cars with advanced functionality, is what’s needed to solve the critical global road safety issue. This is an industry where embedded vision will thrive, but regulatory requirements must be adjusted before growth on an even more monumental scale can take place.

(Read more about the Driverless Vehicle industry in RBR’s special Industry Report by analyst Jim Nash.)

Adoption Challenges

Bier described the greatest challenge to embedded vision adoption as knowledge and awareness. He said that, like most engineers when they first encounter computer vision, he once thought of it as a technology akin to nuclear fusion: “That’s really cool, but I’ll never have it.”

With the cost and size of cameras and processors having dropped dramatically in recent decades, machine vision is finally within reach.

“First you need to realize it’s possible, and second you need to feel comfortable,” Bier said.

The mission of the Embedded Vision Alliance, the organizational body behind the Embedded Vision Summits, is to inspire and empower embedded system providers to achieve that level of comfort with embedded vision technology. A key means of achieving this goal is providing system design engineers with the practical know-how they need in order to effectively incorporate embedded vision technology into their designs.

The May Summit accomplished this with streamlined tracks tackling interactivity and augmented reality; visual tracking and motion; object detection in images; and software and hardware development for next-generation vision systems.

“The extremely strong turnout this year validates the vision we had four years ago that this technology is ready for mainstream adoption,” Bier said.

For more analysis and market statistics, download RBR’s special Industry Report on vision systems here.