News Analysis: Uber's Self-Driving Fatality Prods Vehicle Safety, Foreign Policies

Uber self-driving Volvo (Credit: Dllu, via Wikimedia Commons)

March 22, 2018      

The fallout from this week’s fatal Uber self-driving accident goes beyond assigning fault to the company, the emergency driver, or the pedestrian. Municipal, state, and national governments are now scrambling to make sense of the vehicle safety regulations (or the lack thereof) that oversee this fast-moving technology.

More details continue to emerge from the March 18, 2018, accident in Tempe, Ariz., where 49-year-old Elaine Herzberg was struck by an Uber self-driving car.

“I suspect preliminarily it appears that the Uber would likely not be at fault in the accident, either,” said Tempe Police Chief Sylvia Moir, in an interview with the San Francisco Chronicle. Since those comments were made, the police department backed off a bit, issuing a statement saying that the “Tempe Police Department does not determine fault in vehicular collisions.”

However, other reports state that an examination of the technology on board the self-driving car found that the autonomous system “may not have realized it was detecting a person.”

In addition, a video shows that the human driver placed in the vehicle for emergency situations was looking down before the car hit the pedestrian.

Following the crash, Uber suspended testing of its self-driving vehicles, and other companies, such as Toyota, have done the same. Local governments are also stepping in: the city of Boston has asked nuTonomy and Optimus Ride to stop testing in the city, at least for now.

The Uber crash was the first known U.S. pedestrian fatality caused by a fully self-driving car (Tesla vehicles in “Autopilot” mode, a driver-assistance feature, have been involved in several accidents), but unfortunately, it’s unlikely to be the last. As self-driving cars, buses, trucks, and other vehicles are rolled out, governments must have policies around what happens when someone gets hurt or killed by the technology.

Public policy around vehicle safety, liability

Around the world, some governments have taken steps to create self-driving vehicle safety laws. The U.K. is conducting a three-year review into self-driving cars before they are allowed on British roads.

In Germany, the government unveiled the world’s first ethics guidelines for autonomous cars. India’s top transportation official wants to ban self-driving cars in an effort to protect jobs.

It’s unlikely that countries will completely ban self-driving vehicles, so governments and companies need to work together to understand who is liable when a self-driving car has an accident. Both Audi and Volvo have said they will accept liability for accidents that occur while their cars are driving autonomously.

California dropped a proposed rule that would have allowed automakers to walk away from liability if they suspected car owners had not maintained the vehicle properly. Creating policies around liability will force governments to think differently about who bears responsibility when no human is driving.

The more people think about liability for self-driving vehicles, the more questions arise. For example, is it fair to hold the makers of self-driving cars liable when human drivers cause the majority of accidents today? Are there circumstances in which nobody is liable, the so-called “act of God” clause?

Strict policies around business liability also need to be created. If a self-driving truck crashes and $500,000 worth of goods is destroyed, who is liable? The automaker, the insurance company, or the city (if poor road conditions or lighting are at fault)? These questions, and hundreds of others, must be answered by companies and governments.

Vehicle safety and cybersecurity prompt foreign policy questions

Policies around the security of self-driving cars also need to be created. Foreign governments and groups could, in theory, hijack self-driving cars and use them to incite terror. What happens if it’s revealed that a vehicle didn’t crash by accident, but instead was hacked and deliberately targeted the pedestrian or the vehicle’s passenger? What if it’s then linked to a foreign group or government, which did this to send a message?

As self-driving vehicles enter roads around the world, governments need to be clear on how they will approach the hacking of these vehicles and other “technological terrorism” events – especially if they come from foreign groups or governments.

Part of the U.K. examination into self-driving cars includes a look into cybersecurity and what to do with hackers who get caught. Shortly after the examination was announced, the government warned that Russia may launch a massive cyberattack against the British power grid, as tensions heated up over the poisoning of a former Russian spy in the U.K.

How would the U.K. respond to a Russian cyberattack, say in 2023, that involved hacking and crashing thousands of self-driving cars, buses, and trucks? How would the U.K. deal with terrorists hacking into such vehicles and remotely controlling them to run over pedestrians?

Will governments be liable in the next accident?

For the moment, Uber and other companies have halted their self-driving car tests while they figure out what happened and who is liable. Apologies will be sent, lawsuits may follow, and vehicle safety promises will be made to prevent future accidents. This behavior, in large part, will be shaped by what governments require.

Now, more than ever, governments need to start implementing concrete policies around self-driving cars before the next spate of accidents occurs. Otherwise, regulatory inaction will share the blame along with the self-driving car companies.