Much press has been devoted to Jonathan Petit’s recent disclosure of an attack on some LIDAR systems used in robocars. I saw Petit’s presentation on this in July, but he asked me for confidentiality until they released their paper in October. Since he has now decided to disclose it, there’s been a lot of press, mixing truth and misconceptions.
There are many security aspects to robocars. By far the greatest concern would be compromise of the control computers by malicious software, and great efforts will be made to prevent that. Many of those efforts will involve having the cars not talk to any untrusted sources of code or data which might be malicious. The car’s sensors, however, must take in information from outside the vehicle, so they are another avenue of compromise.
There are ways to compromise many of the sensors on a robocar. GPS can be easily spoofed, and there are tools out there to do that now. (Fortunately, real robocars will only use GPS as one clue to their location.) Radar is also very easy to spoof (far easier than LIDAR, Petit agrees) but their goal was to see whether LIDAR is vulnerable.
The attack is a real one, but at the same time it’s not, in spite of the press, a particularly frightening one. It may cause a well-designed vehicle to believe there are “ghost” objects that don’t actually exist, so that it might brake for something that’s not there, or even swerve around it. It might also overwhelm the sensor, so that the car concludes the sensor has failed and goes into a failure mode, stopping or pulling off the road. This is not a good thing, of course, and it has some safety consequences, but it’s also a fairly unlikely attack. Essentially, there are far easier ways to accomplish these things that don’t involve the LIDAR, so it’s not too likely anybody would want to mount such an attack.
Indeed, to do these attacks, you need to be physically present, either in front of the car or perhaps at the side of the road, and you need a solid object that’s already in front of the car, such as the back of a truck it’s following. This is a higher bar than attacks which might be done remotely (such as computer intrusions) or via radio signals (such as with hypothetical vehicle-to-vehicle radio, should cars decide to use that tech).
Here’s how it works: LIDAR works by sending out a very short pulse of laser light, and then waiting for the light to reflect back. The pulse is a small dot, and the reflection is seen through a lens aimed tightly at the place the pulse was sent. The time it takes for the light to come back tells you how far away the target is, and the brightness tells you how reflective it is, like a black-and-white photo.
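To make that concrete, here’s a minimal sketch of the time-of-flight arithmetic, in Python, with an illustrative round-trip time I chose for the example:

```python
# Minimal sketch of the time-of-flight calculation a LIDAR performs.
C = 299_792_458.0  # speed of light, m/s

def range_from_round_trip(t_seconds: float) -> float:
    """The pulse travels out and back, so the target distance is
    half the total path length."""
    return C * t_seconds / 2.0

# A return arriving 200 nanoseconds after the pulse left implies a
# target about 30 metres away.
print(range_from_round_trip(200e-9))  # ~29.98 m
```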
To fool LIDAR, you must send another pulse that comes from, or appears to come from, the target spot, and it has to arrive at just the right time, before the real reflection from whatever is actually in front of the LIDAR.
The attack requires knowing the characteristics of the target LIDAR very well. You must know exactly when it is going to send its pulses before it sends them, and thus precisely (to the nanosecond) when a return reflection (“return”) would arrive from a hypothetical object in front of the LIDAR. Many LIDARs are quite predictable. They scan a scene with a rotating drum, and you can see the pulses coming out, and know when they will be sent.
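As a sketch of what that predictability buys the attacker: if you assume a steady rotation rate and observe one pulse, you can project every future pulse time at a given scan angle. The 10 Hz rotation rate here is an assumption for illustration, not any real unit’s spec:

```python
import math

# Hypothetical sketch: predicting when a rotating-drum LIDAR will next
# fire at a given azimuth, assuming a perfectly steady drum.
ROTATION_HZ = 10.0          # assumed rotation rate
PERIOD = 1.0 / ROTATION_HZ  # seconds per full 360-degree sweep

def next_fire_time(t_zero: float, azimuth_deg: float, now: float) -> float:
    """Next pulse time at `azimuth_deg`, given one observed pulse at
    azimuth 0 at time `t_zero`."""
    offset = (azimuth_deg / 360.0) * PERIOD
    revs = max(0, math.ceil((now - t_zero - offset) / PERIOD))
    return t_zero + offset + revs * PERIOD

# If we saw a pulse at t=0 and it is now t=0.25 s, the next pulse
# aimed at 90 degrees fires at t=0.325 s.
print(next_fire_time(0.0, 90.0, 0.25))
```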
[Image: Laser Pointer Tricks LIDAR. Shining a laser pointer at a self-driving car so that it is picked up by the LIDAR system could trick the car into thinking something is directly ahead of it, thus forcing it to slow down. Alternatively, a hacker could overwhelm it with spurious signals, forcing the car to remain stationary for fear of hitting phantom obstacles.]
In the simplest version of the attack, the LIDAR is scanning something like a wall in front of it. There are no such walls on the highway, but there are things like signs, bridges and the backs of trucks and some cars.
The attack laser sends a pulse of light at the wall or other object, timed so that it hits the wall in the right place but earlier than the real pulse from the target LIDAR. This pulse will then bounce back, enter the lens of the LIDAR, and make it appear that there is something closer than the wall. (The legitimate pulse will also bounce back and arrive later, but many LIDAR designs ignore such a second, later return.)
The attack pulse does not have to be bright; rather, it should be similar to the pulse from the LIDAR so that the reflection looks the same. It must be at the very same wavelength, and launched at just the right nanosecond.
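“Just the right nanosecond” is simple arithmetic once the pulse schedule is known. Here’s a hedged sketch, with distances I made up for illustration: an attacker near the wall, bouncing their pulse off it into the victim’s lens:

```python
# Sketch of the timing behind the ghost-object attack. All distances
# are illustrative assumptions.
C = 299_792_458.0  # speed of light, m/s

def attack_fire_time(t_pulse: float, d_wall: float, d_ghost: float,
                     d_attacker: float) -> float:
    """When the attacker must fire so their bounce off the wall arrives
    at the LIDAR as if reflected from a closer ghost object.
    t_pulse: when the victim LIDAR fires; d_wall: LIDAR-to-wall
    distance; d_ghost: where the fake object should appear;
    d_attacker: the attacker's distance from the wall."""
    arrival_wanted = t_pulse + 2.0 * d_ghost / C  # fake return's arrival
    travel = (d_attacker + d_wall) / C            # attacker -> wall -> LIDAR
    return arrival_wanted - travel

# Wall 40 m ahead, ghost at 15 m, attacker 5 m from the wall: the
# attack pulse must leave ~50 ns *before* the victim's own pulse does,
# which is why the pulse schedule must be known in advance.
print(attack_fire_time(0.0, 40.0, 15.0, 5.0))  # about -5e-8 s
```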
If you send out lots of pulses without good timing, you won’t create a fake object, but you can create noise. That noise would blind the LIDAR to the wall in front of it, but would be very obviously noise. Petit and his team tested this noise attack, and it was a success.
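A quick sketch of why untimed pulses read as noise rather than as an object (pure illustration, with an assumed listening window):

```python
import random

# Sketch of the untimed "noise" attack: pulses fired at random moments
# land at effectively random ranges in the victim's scan.
C = 299_792_458.0  # speed of light, m/s

def random_ghost_ranges(n: int, window_s: float) -> list[float]:
    """Apparent ranges of n attack pulses arriving at random offsets
    within one listening window; they smear across the scene."""
    return [C * random.uniform(0.0, window_s) / 2.0 for _ in range(n)]

# Ten pulses inside a 1-microsecond window scatter from 0 to ~150 m:
# obviously noise, but it hides the real wall behind it.
print(random_ghost_ranges(10, 1e-6))
```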
The fancier attack knows the timing of the target LIDAR perfectly, and paints a careful series of pulses so that it looks like a complex object is present, closer than the wall. Petit’s team was able to do this on a small scale.
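Conceptually, painting a shaped ghost is just the single-pulse timing repeated across scan angles. A hedged sketch combining the two calculations above, with the same assumed rotation rate and distances:

```python
C = 299_792_458.0      # speed of light, m/s
ROTATION_HZ = 10.0     # assumed drum rotation rate
PERIOD = 1.0 / ROTATION_HZ

def ghost_schedule(t_zero, shape, d_wall, d_attacker):
    """For each (azimuth_deg, ghost_distance_m) pair in `shape`, compute
    when the attacker fires so each spoofed return lands in the right
    scan slot at the right range. Azimuth 0 is assumed to fire at
    t_zero."""
    plan = []
    for az_deg, d_ghost in sorted(shape):
        t_pulse = t_zero + (az_deg / 360.0) * PERIOD
        arrival = t_pulse + 2.0 * d_ghost / C
        plan.append((az_deg, arrival - (d_attacker + d_wall) / C))
    return plan

# A crude flat "slab" ghost 15 m out, spanning a few degrees of scan,
# painted in front of a wall 40 m away by an attacker 5 m from it.
shape = [(az, 15.0) for az in (-4.0, -2.0, 0.0, 2.0, 4.0)]
print(ghost_schedule(0.0, shape, d_wall=40.0, d_attacker=5.0))
```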
This attack can only make the ghost object appear in front of another object, like another vehicle, or perhaps a road sign or bridge. The attack pulse could also be reflected off the road itself; the ghost object, being closer than the patch of road in question, would appear to sit above the road surface.
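That floating effect falls out of simple geometry: the spoofed return sits somewhere along the downward-angled beam, so a range shorter than the real road hit puts it above the asphalt. A rough sketch, assuming a flat road and a sensor height I picked for illustration:

```python
# Back-of-envelope geometry (flat road, small angles assumed): a ghost
# timed closer than the beam's real road hit sits above the surface.
def ghost_height(sensor_height: float, d_road: float, d_ghost: float) -> float:
    """Height above the road of a ghost at range d_ghost on a beam
    that really strikes the road at range d_road."""
    downward_slope = sensor_height / d_road  # sine of the beam's dip angle
    return (d_road - d_ghost) * downward_slope

# Sensor 1.5 m up, beam hits the road 30 m out, ghost timed at 20 m:
# the ghost floats about half a metre above the asphalt.
print(ghost_height(1.5, 30.0, 20.0))  # 0.5
```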