January 05, 2014

The technologies needed to make autonomous vehicles a reality exist. They just need to be refined for flawless, near-continuous use in challenging settings. No small feat, but barriers are comparatively low.

The electronics required will be the easiest part, and because of that, they will likely provide modest returns. The work to be done — on lidars, for example — amounts to miniaturization, ruggedization and higher signal fidelity.

The real opportunities lie in software and information.

Security

Security will be more important for driverless vehicles than for other kinds of computing because an infected PC cannot ram other PCs. Effective encryption algorithms and physical shielding are already needed.

Increasingly common tire-pressure-warning systems, which wirelessly transmit pressure levels and unique IDs from sensors to vehicle computers, can be intercepted to secretly track individual vehicles. Much more information will be at risk as electronics take over driving tasks, record what happens in vehicles and — by design — share it with governments and marketers.
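The tracking risk comes from broadcasting a fixed identifier in the clear. A minimal sketch of the idea, with an invented sensor ID and key scheme, shows why a static ID is linkable across sightings while a keyed rotating pseudonym is not:

```python
import hashlib
import hmac

# A fixed sensor ID lets any eavesdropper link sightings of the same car.
FIXED_ID = "TPMS-4F2A91"  # invented example ID

def rotating_id(secret_key: bytes, counter: int) -> str:
    """Derive a fresh pseudonym per broadcast from a shared secret.
    Only a receiver holding the key can recompute and match it."""
    mac = hmac.new(secret_key, str(counter).encode(), hashlib.sha256)
    return mac.hexdigest()[:12]

key = b"vehicle-shared-secret"  # invented; real systems need proper key management

# Two broadcasts from the same sensor:
fixed = [FIXED_ID, FIXED_ID]                            # trivially linkable
rotating = [rotating_id(key, 1), rotating_id(key, 2)]   # unlinkable without the key

print(fixed[0] == fixed[1])        # True: an eavesdropper can track the car
print(rotating[0] == rotating[1])  # False: the pseudonym changes every broadcast

# The legitimate receiver, knowing the key and counter, still matches the sensor.
print(rotating_id(key, 2) == rotating[1])  # True
```

This is only an illustration of the linkability problem, not a description of any shipping tire-pressure system.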

Information sharing, in fact, will be a boon for companies adept at collecting, handling and analyzing massive stores of data. Driverless roads would present the biggest Big Data project ever.

More ominous are demonstrations of how some cars today can be hijacked outright by someone with a laptop in a following vehicle.

Among the few firms working on driverless-vehicle security are Security Innovation, Inc. and IOActive, Inc.

Machine Vision

Machine vision has been difficult to perfect. It calls for image-processing computers using sophisticated algorithms to interpret video feeds of their complex surroundings. Machine vision is paired with lidar in Google Inc.’s semi-autonomous cars. Lidar records 3D images, and machine vision tries to read signs that are blank to lidar.
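The division of labor between the two sensors can be sketched in miniature. All of the data below is invented; a real pipeline would run detection algorithms on live sensor feeds, but the principle is the same: lidar supplies geometry, the camera supplies meaning.

```python
# Toy fusion of lidar geometry with camera semantics.

# Lidar gives range and shape: obstacle cells in a tiny occupancy grid.
lidar_obstacles = {(2, 3), (2, 4), (5, 1)}   # (row, col) cells with returns

# The camera classifier gives identity to regions lidar sees only as flat panels.
camera_labels = {(2, 3): "stop sign", (5, 1): "pedestrian"}

def describe(cell):
    """Merge the two views: geometry from lidar, identity from the camera."""
    if cell not in lidar_obstacles:
        return "free space"
    return camera_labels.get(cell, "unclassified obstacle")

print(describe((2, 3)))  # stop sign
print(describe((2, 4)))  # unclassified obstacle: lidar sees it, camera cannot name it
print(describe((0, 0)))  # free space
```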

Mobileye Technologies Unlimited is making a name for itself with a camera-only driver-assist product that warns of imminent collisions. Iteris Inc. sells cameras, software and services for interpreting traffic, but its products likely could be fitted into vehicles.

Machine Learning

This involves software that learns behavior from data rather than being explicitly programmed for every situation. Google’s semi-autonomous cars use machine learning, and while the algorithms are extremely advanced, they have problematic limitations. The cars are not allowed to pilot themselves on a road until a person drives them twice on the route, teaching the systems about that stretch.

The process must be faster, if not instantaneous. Otherwise, mystified autonomous cars will be handing back control or pulling to the side for programming too often. This technology is growing rapidly as the principles of machine learning are needed to analyze Big Data, which itself is a juggernaut in information technology.
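The teach-by-driving approach described above resembles what researchers call behavior cloning. A toy sketch, with invented route points and steering angles, shows the idea: record what the human did at each point, then imitate the nearest demonstrated situation later.

```python
from math import dist  # Python 3.8+

demonstrations = []  # list of ((x, y), steering_angle_degrees)

def record(position, steering):
    """One 'teaching' pass: log what the human driver did at each point."""
    demonstrations.append((position, steering))

def predict(position):
    """Autonomous pass: imitate the closest demonstrated situation."""
    nearest = min(demonstrations, key=lambda d: dist(d[0], position))
    return nearest[1]

# A human drives the stretch once, turning gently left and then right.
for point, angle in [((0, 0), 0.0), ((10, 0), -5.0), ((20, 2), 8.0)]:
    record(point, angle)

print(predict((9, 1)))   # -5.0: near the left-turn demonstration
print(predict((21, 2)))  # 8.0: near the right-turn demonstration
```

The limitation the article describes falls out directly: with no demonstrations near a position, the system can only guess badly or hand back control.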

The most visible example of machine learning is Watson, the IBM Corp. supercomputer that competed on the TV game show “Jeopardy!”. IBM has partnered with auto-parts supplier Continental AG to develop products using Big Data.

System Interfaces

More of a science or art than a technology, interfaces will be surprisingly critical components. Familiar examples of interfaces include vehicle dashboards, keyboards, Web home pages and Apple Inc.’s Macintosh desktop.

Dash layouts differ, yet they are standard enough that most drivers facing an unfamiliar one can drive off safely. This is less true of computers. For instance, the Mac and Microsoft Corp. Windows operating systems differ enough that even simple tasks can be frustrating for the uninitiated. Differences like that could be deadly in semi-autonomous vehicles.

It is worth mentioning OpenXC, an open-source movement for information systems in vehicles. Google’s Android operating system is open-source, and would most directly benefit from any success OpenXC enjoys.

Here are three primary interfaces in need of standardization:

Voice

Voice-recognition or machine-voice systems have to understand and pick out commands obscured by wind, engine noise and audio-system sounds as well as cacophony from outside the vehicle. On top of that, voice applications have to differentiate between commands and conversation. It is a challenging task.
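Both challenges, filtering noise and separating commands from conversation, can be sketched in a toy command spotter. The confidence threshold, wake word and vocabulary below are all invented for illustration; production recognizers are far more sophisticated.

```python
WAKE_WORD = "car"                       # invented wake word
COMMANDS = {"stop", "park", "navigate"}  # invented vocabulary

def spot_command(words_with_confidence, threshold=0.6):
    """words_with_confidence: list of (word, recognizer_confidence) pairs."""
    # Step 1: drop words the recognizer could not hear clearly over the noise.
    clear = [w for w, c in words_with_confidence if c >= threshold]
    # Step 2: act only when the wake word immediately precedes a known command,
    # so ordinary conversation is not mistaken for an instruction.
    for prev, word in zip(clear, clear[1:]):
        if prev == WAKE_WORD and word in COMMANDS:
            return word
    return None

# "car stop" spoken clearly amid chatter -> command recognized.
print(spot_command([("the", 0.9), ("car", 0.8), ("stop", 0.9)]))         # stop
# "stop" said in conversation without the wake word -> ignored.
print(spot_command([("please", 0.9), ("stop", 0.9), ("talking", 0.8)]))  # None
# Wake word drowned out by wind noise -> ignored rather than guessed at.
print(spot_command([("car", 0.3), ("stop", 0.9)]))                       # None
```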

Machine-voice feedback is likewise flawed. Humans must do the same noise filtering and then decipher what often is a marble-mouthed artificial voice.

None of this has stopped some major companies from trying to advance voice interfaces, however.

Ford Motor Co.’s voice-driven entertainment system, Sync, is based on Microsoft’s Windows Embedded operating system, and has been installed in more than 5 million cars. Launched in 2007, Sync depends on third-party software, including that of Nuance Communications Inc. Sync continues to get mixed reviews, but it is hard to argue with the practical experience Microsoft has in this niche.

Nuance owns the popular Dragon speech-recognition software. The company claims its apps are in 20 million cars, including Ford, Audi, BMW, Chrysler, General Motors, Hyundai and Toyota vehicles.

Even Siri, Apple’s voice-recognition personal assistant, uses Nuance software (though Apple might be creating a new Siri without Nuance software). Apple is preparing to put its mobile operating system, iOS (with Siri), in a dozen or more 2015 model-year vehicles.

And Google Inc.’s voice search application, called Now, is a prominent part of its open-source Android operating system. In keeping with the open-source ethos, this feature is free to Android developers, something that could impede Nuance’s growth because that company licenses its software.

Gesture

Touch screens are impractical because cars jostle and drivers have to stare at them to interact. Concept screens that can simulate buttons are interesting but unproven.

Instead, Google, Apple and Microsoft have applied for gestural-interface patents, purchased gestural systems or both. Gestures are considered best, after voice, for interacting with vehicle systems. The problem here will be standardizing gestures.
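The standardization problem can be made concrete with two invented vendor gesture maps: the same motion means something different in each maker's car, which is exactly the dashboard-familiarity problem described above, but with higher stakes.

```python
# Both mappings are invented for illustration; no vendor's actual
# gesture vocabulary is public or standardized.
vendor_a = {"swipe_left": "previous_track", "palm_out": "pause_media"}
vendor_b = {"swipe_left": "decline_call", "palm_out": "mute_cabin"}

def conflicting_gestures(a, b):
    """Gestures both vendors recognize but interpret differently."""
    return sorted(g for g in a.keys() & b.keys() if a[g] != b[g])

print(conflicting_gestures(vendor_a, vendor_b))  # ['palm_out', 'swipe_left']
```

A driver switching between the two cars would issue familiar gestures and get unfamiliar results, which is tolerable for media controls and dangerous for anything safety-related.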

Google researchers have filed for a patent (20130261871) for gesture-based automotive controls. It calls for pantomime commands and allows for the use of American Sign Language.

Microsoft also has relevant U.S. patent applications, including 20130155237, which covers in-vehicle gestural systems that use a portable device, presumably a smartphone or tablet. Its Kinect device is based on 3D units made by PrimeSense Ltd., which also sells direct.

Apple is rumored to be buying PrimeSense. But Apple’s own U.S. patent portfolio includes 8,514,221, which covers manipulating virtual 3D objects with gestures. Objects could be dials, buttons, switches, map pages and so forth.

On the periphery, Samsung Group’s Galaxy S4 has limited gesture commands using infrared sensors. It is not clear if Samsung, which sold a majority share of its abortive car-making venture in the 1990s to Renault SA, wants in on autonomous vehicles. Indeed, Samsung executives occasionally state publicly that they want their company’s name removed from Renault Samsung Motors.

A little further afield, startup Elliptic Labs is pushing ultrasound systems to create gestural interfaces.

Heads-Up Displays

Using this technology, graphics and text are shown on the windshield or small glass panels. Heads-up displays, or HUDs, are an elegant way to inform drivers without requiring them to look away from the road. Yet vehicle makers have been surprisingly timid about developing HUDs.

BMW Group, which first put a head-up display in a production car in 2003, says it will install a new type of HUD in an unannounced future model of its Mini Cooper. It will be a small, clear panel on the dash immediately behind the steering wheel.

It will show information like vehicle speed but also navigation and warnings (including imminent collision). The information will be shown in such a way that it appears to be hanging in the air in front of the car, eliminating the need for drivers to refocus their eyes to see the data.

One startup, The NeXt Co., is crowd-funding an add-on system that integrates with a smartphone to show a smallish options screen on windshields. The HeadsUp understands a limited number of gestures and spoken commands.

While less sophisticated HUDs, including after-market GPS devices and smartphone apps, are on the market, there is at least one third-party product with robust capabilities.

Electronic-entertainment manufacturer Pioneer Corp. announced in August 2013 its Carrozzeria Cyber Navi HUD navigation system. Only available in Japan, the Cyber Navi uses a pico, or handheld, projector made by MicroVision Inc. It is billed as creating an augmented-reality experience for drivers, displaying routes, attractions and roadside services.