In a pre-print study, researchers from the United States and Japan were able to trick a self-driving car – the "victim vehicle," as they called it – into not seeing an obstacle in front of it by pointing a laser at its LIDAR. (Related: Self-driving cars are causing traffic incidents all over San Francisco.)
LIDAR stands for "Light Detection and Ranging." It is a sensor technology that creates a map of the environment around it. LIDAR sensors send out pulses of infrared light and measure the time it takes for the light to bounce off surrounding objects and return to the sensor, building a three-dimensional map from that data.
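To make the arithmetic concrete, here is a minimal sketch (in Python, for illustration only) of the time-of-flight calculation a LIDAR performs; the function name is ours, not from the study.

```python
# Minimal sketch of the time-of-flight arithmetic a LIDAR performs.
# The pulse travels to the object and back, so the one-way distance
# is half the round-trip time multiplied by the speed of light.

SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def distance_from_return_time(round_trip_seconds: float) -> float:
    """Convert a pulse's round-trip time into a one-way distance in meters."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# Example: a return arriving 100 nanoseconds after the pulse was fired
# corresponds to an object roughly 15 meters away.
print(distance_from_return_time(100e-9))  # ~14.99 m
```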
The "hack" works because a perfectly-timed laser aimed directly at a LIDAR can create a blind spot large enough for the infrared sensors to not see an object or a pedestrian in front of the autonomous vehicle.
"We mimic the LIDAR reflections with our laser to make the sensor discount other reflections that are coming in from genuine obstacles," noted University of Florida cybersecurity researcher and professor Sara Rampazzi. This "deletion" of data creates a false impression for the self-driving car that the road is safe to progress along, placing the car and the obstacle in a potentially dangerous collision course.
"The LIDAR is still receiving genuine data from the obstacle, but the data are automatically discarded because our fake reflections are the only ones perceived by the sensor," she continued.
In the test, Rampazzi and her colleagues carried out the "laser attack" from the side of the road, no closer than 15 feet from the vehicle, and could replicate the results from as far as 10 meters (about 33 feet) away. But Rampazzi noted that the device used in the hack had to be perfectly timed and had to keep pace with the moving car to hold the laser on the right spot.
It would be feasible to produce similar results from farther away using more sophisticated equipment than what the researchers deployed in the experiment.
The technology required for such an attack from a distance is "fairly basic," but the precise timing needed to make it succeed makes such an attack unlikely for now.
But if such an attack were to succeed, the consequences could be horrific, potentially resulting in the deaths of drivers, passengers and pedestrians.
The researchers have already approached autonomous vehicle manufacturers to warn them of this possibility and have suggested changes to the LIDAR software to minimize this problem.
"Revealing this liability allows us to build a more reliable system [for self-driving cars]," noted Yulong Cao, a computer scientist from the University of Michigan and the study's first author. "In our paper, we demonstrate that previous defense strategies aren't enough, and we propose modifications that should address this weakness.
One suggestion from the researchers is to drastically change how LIDAR interprets raw sensor data. Another is for manufacturers to teach their LIDAR software to recognize the telltale signatures of a laser attack and distinguish spoofed reflections from genuine ones.
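As a rough illustration of what such a telltale-signature check might look like, the Python sketch below flags range readings that too few neighboring beams corroborate; a real obstacle is typically seen by many adjacent beams at similar ranges, while a point injected by a single laser may lack that spatial support. This is an assumption-based sketch with hypothetical names and thresholds, not the defense the researchers proposed.

```python
# Hedged sketch of one plausible "telltale signature" check, not the
# researchers' proposed defense: flag range readings that too few
# neighboring beams corroborate. All names and thresholds are hypothetical.

def flag_suspicious_points(scan: list[float], window: int = 2,
                           tolerance_m: float = 0.5,
                           min_support: int = 1) -> list[bool]:
    """Flag each beam's range reading that too few neighbors corroborate.

    scan: per-beam range readings (meters) from one sweep.
    Returns a parallel list; True means the point looks isolated/spoofed.
    """
    flags = []
    for i, r in enumerate(scan):
        neighbors = scan[max(0, i - window): i] + scan[i + 1: i + 1 + window]
        support = sum(1 for n in neighbors if abs(n - r) <= tolerance_m)
        flags.append(support < min_support)
    return flags

# A lone 40 m reading amid ~12 m neighbors gets flagged for review
# rather than silently trusted.
print(flag_suspicious_points([12.1, 12.0, 40.0, 11.9, 12.2]))
# [False, False, True, False, False]
```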
Read more news about technology hiccups at Glitch.news.
Watch this clip from "The David Knight Show" discussing how the self-driving cars of companies like Tesla and Waymo are glitching dangerously and becoming a threat to drivers and pedestrians alike.
This video is from the channel The David Knight Show on Brighteon.com.
Laser technology used in self-driving cars can damage cameras and human eyes.
Autonomous vehicles to stop, roll down windows and unlock doors for law enforcement.
Waymo self-driving taxi goes rogue in Arizona, blocking traffic and escaping rescue crew.
Self-driving vehicle legislation held up by the question of who to blame in a crash.