The public’s interest in news surrounding the development of autonomous vehicles (AVs) often feels insatiable. While Tesla may be averse to engaging with the traditional press, they have certainly stirred up plenty of conversation around the future of AVs and the technology behind this rapidly evolving field.
From LiDAR to cameras to Radar technology, leading OEMs are betting on a variety of different combinations of the best perception hardware on the market. Vehicular perception matters a great deal when it comes to safety, marketability, and cost, which is why it’s so compelling that there isn’t a clear consensus on which technology will make it to market first.
But hardware alone does not make an AV autonomous. Today we have the software capabilities necessary to enhance market-tested hardware, and I believe that AI-enhanced Radar technology will play a key role in getting passengers into AVs in the decade ahead.
The Challenges to Improving Perception in AVs and Getting Them to Market
Today there are four major challenges to getting autonomous vehicles into the hands of consumers and onto the road:

1. The cost to produce hardware-centric solutions remains too high. LiDAR sensors tend to cost hundreds of thousands of dollars, which pushes AVs out of an affordable price point for consumers. Multi-chip Radar arrays that leverage expensive field-programmable gate arrays (FPGAs) present the same issue.

2. There is no obvious hardware solution when it comes to tackling the weather. LiDAR performs poorly in heavy precipitation and fog, cameras struggle in low light and can be stymied by mud, and Radar on its own does not deliver the resolution required for AV applications.

3. The road to AVs must also account for the power budgets of the electric vehicles (EVs) of the near future. To this end, the technology that powers an AV’s perception must not require a large power budget.

4. In this emerging field, reliability and safety have yet to be proven for most hardware solutions. Among the top options available today, only Radar has a decades-long track record across a range of all-weather applications.
Why AI-Powered Radar Presents a Promising Solution
Because longer-wavelength radio waves can penetrate media that scatter or absorb the higher-frequency energy of LiDAR, Radar hardware offers a capability that LiDAR and cameras simply cannot match.
There is significant overlap in the situations in which LiDAR performance and camera vision are degraded or outright obscured. In dense fog, for example, both can fail at once, an environmental failure mode that will not meet safety standards for AVs.
Radar sensors perform well in many adverse weather conditions and have been field-tested for decades in strenuous conditions by militaries around the world. The major barrier to relying on Radar for AV perception, however, has been the challenge of improving its image resolution without simply adding more antennas, which in turn increases the cost of the vehicle, the size of the apparatus, and its power requirements.
So, is there another way to improve Radar’s image resolution? The answer is a software solution that eliminates the trade-off between cost and performance.
Adding software to an affordable hardware element that is already present in the vast majority of new vehicles is far more cost-effective than the alternatives. Moreover, there is no need to wait to field-test Radar: militaries across the world have already put Radar systems to the test under demanding conditions and in the harshest climates. Augmenting hardware with powerful software saves systems developers time and money, and the functionality of the software itself can be improved over time.
Improving Radar with AI
By combining machine learning with synthetic aperture techniques, a Radar sensor can transmit an adaptive phase-modulated waveform that changes dynamically in real time with the environment, effectively increasing the sensor’s angular resolution by up to a factor of 100 with no additional antennas required. This dramatically improves the resolution, increases the range, and widens the field of view without impacting the bill of materials or adding costs to the system.
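To see why a larger effective aperture matters, a back-of-the-envelope sketch helps. The snippet below is a generic array-resolution estimate, not the proprietary algorithm described above; the 77 GHz carrier, the 8-element physical array, and the 100x virtual-aperture factor are illustrative assumptions.

```python
import numpy as np

# Rayleigh-style rule of thumb: the angular resolution of a uniform linear
# array scales inversely with its aperture, so synthesizing a larger
# *virtual* aperture in software sharpens resolution without adding
# physical antennas.

C = 3e8             # speed of light, m/s
F = 77e9            # typical automotive Radar carrier frequency, Hz
WAVELENGTH = C / F  # ~3.9 mm

def angular_resolution_deg(n_elements, spacing=WAVELENGTH / 2):
    """Approximate angular resolution (degrees) of an N-element linear array."""
    aperture = n_elements * spacing
    return np.degrees(WAVELENGTH / aperture)

physical = angular_resolution_deg(8)       # modest physical antenna array
virtual = angular_resolution_deg(8 * 100)  # 100x larger synthesized aperture

print(f"physical array: {physical:.2f} degrees")
print(f"virtual array:  {virtual:.3f} degrees")
print(f"improvement:    {physical / virtual:.0f}x")
```

With these assumed numbers, the 8-element physical array resolves roughly 14.3 degrees, while the synthesized 100x aperture resolves about 0.14 degrees, the same 100x improvement in angular resolution without any change to the antenna hardware.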
Today, AI-enabled, ‘smart’ radar sensors are capable of generating images with tens of thousands of pixels per frame and tracking targets that are hundreds of meters away, which in turn enables AV systems to operate safely at high speeds. Perhaps most compelling of all, this approach can be tailored to support advanced driver-assistance systems (ADAS) or autonomous robotic applications, where low energy usage is critical.
Smart Radar Not Just for AV Applications
While the current focus of this autonomous navigation software is automotive perception, the size, power, and performance of these boosted Radar solutions may unlock new opportunities for robotics in other vertical markets.
As the automotive industry inspires advances in perception, we will see the capabilities of software-powered, ‘smart’ Radars increase dramatically because they are built on machine learning algorithms that will continue to improve over time. For OEMs, this means that cars will get much better at recognizing pedestrians, objects, and other vehicles; for scientists and engineers, these advances could be applied to myriad other projects.
While I hope to see the small size, low power requirements, and low cost of these new Radars enter auto designs in the near term, I am confident that they can help overcome challenges well beyond perception in AVs.
About the Author
Steven Hong is currently the VP / General Manager of Radar Technology at Ambarella (NASDAQ: AMBA). He joined Ambarella through its acquisition of Oculii, where he was the CEO / Co-Founder, growing the company to become the leading provider of AI Software for Radar Perception. Prior to founding Oculii, Hong was a partner at Kleiner Perkins where he invested in early stage (Seed/Series A) HardTech companies pioneering Autonomous Systems, AI + Machine Learning, IoT, 3D Printing, and Robotics. Before KP, he co-founded Kumu Networks, where he was responsible for product management, fundraising, IP strategy, business development, and marketing. Hong started his career as a management/strategy consultant at McKinsey and Uber, where he specialized in M&A diligence and expansion strategy. He holds a PhD and MS in Electrical Engineering from Stanford University, and a BS in Electrical Engineering from the University of Michigan.