Hot on the heels of my CES Report is the release of the latest article from Chris Urmson on The View from the Front Seat of the Google Car. Chris heads engineering on the project (and until recently led the entire project).
Chris reports two interesting statistics. The first is “simulated contacts” – times when a safety driver intervened and the vehicle would have hit something without the intervention:
There were 13 [Simulated Contact] incidents in the DMV reporting period (though 2 involved traffic cones and 3 were caused by another driver’s reckless behavior). What we find encouraging is that 8 of these incidents took place in ~53,000 miles in ~3 months of 2014, but only 5 of them took place in ~370,000 miles in 11 months of 2015. (There were 69 safety disengages, of which 13 were determined to be likely to cause a “contact.”)
The second is detected system anomalies:
There were 272 instances in which the software detected an anomaly somewhere in the system that could have had possible safety implications; in these cases it immediately handed control of the vehicle to our test driver. We’ve recently been driving ~5300 autonomous miles between these events, which is a nearly 7-fold improvement since the start of the reporting period, when we logged only ~785 autonomous miles between them. We’re pleased.
Let’s look at these two numbers, why they are different, and how they compare to human performance.
The “simulated contacts” are events that would have been accidents in an unsupervised or unmanned vehicle, which makes them the serious number. With 5 of them in roughly 370,000 miles, Google is now seeing about one every 74,000 miles, though Urmson suggests this rate may not keep falling as they test the vehicles in new and more challenging environments. He also notes that a few were not the fault of the system. Indeed, for the full set of 69 safety disengagements, the rate is actually going up, with 29 of them in the last 5 months reported.
How does that number compare with humans? Regular drivers in the USA have about 6 million accidents per year reported to the police, against roughly 3 trillion vehicle miles driven, which works out to about one accident every 500,000 miles. But for some time, insurance companies have said the real number is about twice that, or one accident every 250,000 miles. Google’s own new research suggests even more incidents take place that go entirely unreported by anybody. For example, how often have you struck a curb, or had a minor touch in a parking lot that nobody else knew about? Many people would admit to that, and altogether there are suggestions the human number for a “contact” could be as bad as one per 100,000 miles.
That would put the Google cars close to that level, though the result comes from driving in simple environments, with no snow and easy California driving conditions. In other words, there is still some distance to go, but at least one plausible goal is within striking distance. Google even reports going 230,000 miles from April to November of last year without a simulated contact, a (cherry-picked) stretch that nonetheless matches human levels.
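To make the arithmetic concrete, here is a quick back-of-the-envelope sketch in Python. The figures are just the rough ones quoted above, and the human rates are estimates rather than official statistics, so treat it as illustrative only.

```python
# Back-of-the-envelope contact-rate comparison using the rough figures
# quoted in this post (illustrative only, not official statistics).

google_miles = 370_000        # ~11 months of testing in 2015
google_contacts = 5           # simulated contacts in that period

# Approximate human rates discussed above, in miles per incident.
human_miles_per_incident = {
    "police-reported accidents": 500_000,
    "insurance-industry estimate": 250_000,
    "including unreported dings": 100_000,
}

google_rate = google_miles / google_contacts
print(f"Google: one simulated contact per ~{google_rate:,.0f} miles")

for label, miles in human_miles_per_incident.items():
    print(f"Humans ({label}): one per {miles:,} miles "
          f"({miles / google_rate:.1f}x the Google interval)")
```

Running this puts the Google cars at about one contact per 74,000 miles – within sight of the pessimistic one-per-100,000-mile human estimate, but still several times short of the police-reported rate.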
For a while now, when people have asked me, “What is the biggest obstacle to robocar deployment – technology or regulation?” I have given an unexpected answer: that it’s testing. I’ve said we have to figure out just how to test these vehicles so we can know when a safety goal has been met. We also have to figure out what that safety goal is.
Various suggestions have been made for the goal: matching the safety record of average humans, matching good humans, or getting 2, 10, or even 100 times as good as humans. Those higher stretch goals will become good targets one day, but for now the first question is how to get to the level of humans.
One problem is that the way humans have accidents is quite different from the way robots probably will. Human accidents sometimes have a single cause (such as falling asleep at the wheel), but many arise because two or more things went wrong at once. Almost everybody I talk to will admit there has been a time when they looked away from the road to adjust the radio or even play with their phone, looked up to see traffic slowing ahead of them, and hit the brakes just in time, narrowly avoiding an accident. Accidents often happen when that kind of luck runs out. Robotic accidents will probably mostly come from a single flaw or error; a robot doing anything unsafe, even for a moment, will be cause for alarm, and the source of the error will be fixed as quickly as possible.
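To illustrate that structural difference (and nothing more – every number below is invented for the sake of the example), here is a toy sketch of the argument:

```python
# Toy model of the argument above: a typical human crash needs two lapses
# to coincide (e.g. glancing at the phone just as traffic slows), while a
# robotic crash can follow from one unhandled flaw. All probabilities are
# made up purely for illustration; they are not real crash statistics.

p_looking_away = 0.05   # assumed chance the driver is distracted at a given moment
p_sudden_hazard = 0.01  # assumed chance a hazard appears in that same moment

p_human_crash = p_looking_away * p_sudden_hazard  # both must happen together
p_robot_crash = 0.0001                            # a single latent flaw suffices

print(f"Human risk (two coincident failures): {p_human_crash:.4%} per moment")
print(f"Robot risk (one flaw is enough):      {p_robot_crash:.4%} per moment")
```

The point is not the made-up numbers but the structure: human risk is a product of coincidences, so luck usually saves the day, while robot risk concentrates in single failure modes, which is why each one gets hunted down and fixed as soon as it appears.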
Safety anomalies
This leads us to look at the other number – the safety anomalies. At first, this sounds more frightening. The reported anomalies range from 39 hardware issues and anomalies to 80 “software discrepancies,” which may include rarer full-on “blue screen” style crashes (if the cars ran Windows, which they don’t). People often wonder how we can trust robocars when they know how unreliable computers can be. (The most common detected fault is a perception discrepancy, with 119 instances. The report does not say, but I presume these include strange sensor data or serious disagreement between different sensors.)