How to hack self-driving cars with a laser pointer

A security researcher has explained how a simple, hand-held laser can skew self-driving car sensors and place passengers at risk.
Written by Charlie Osborne, Contributing Writer

A popular laser ranging system used by self-driving cars can be tampered with through little more than a handheld laser pointer, a researcher has claimed.

Self-driving and connected cars are the latest trend in the automotive industry. From driver-assistance projects such as automated motorway driving to rear cameras and sensors that warn you when you're about to hit the kerb, research and development in the area is wide-ranging.

However, the holy grail pursued by companies including Google, Nissan, Hyundai and Ford is the fully autonomous vehicle -- which relies on sensors to drive at the correct speeds and avoid obstacles without human control.

Security researcher Jonathan Petit from Security Innovation says the use of the LIDAR system in these vehicles makes computer systems controlling aspects of the car, such as braking and speed, vulnerable to outside influences.

Working on a similar principle to radar, but using light, LIDAR (Light Detection and Ranging) is a remote sensing method used to create a "map" of the environment. By detecting obstacles and measuring distances with pulses of laser light, LIDAR can build a 3D map and track changes in real time -- which has allowed the technology to be adopted in self-driving car research and development.
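As a rough illustration of the time-of-flight principle LIDAR relies on -- a minimal sketch with illustrative values, not code from any vendor's system:

```python
# Illustrative sketch: how a LIDAR unit turns a pulse's round-trip
# time into a distance. Values are hypothetical, for explanation only.

C = 299_792_458.0  # speed of light in m/s


def distance_from_round_trip(seconds: float) -> float:
    """Distance to an obstacle from a pulse's round-trip time."""
    # Divide by 2 because the pulse travels out to the obstacle and back.
    return C * seconds / 2


# A return after roughly 667 nanoseconds puts the obstacle about 100 m away.
print(round(distance_from_round_trip(667e-9), 1))  # ~100.0
```

The sensor repeats this measurement thousands of times per second across many angles, which is what produces the 3D point cloud the car navigates by.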

As an example, the LIDAR system -- comprising a laser, a scanner and a GPS receiver -- is reportedly used as part of Google's self-driving car project.

In a paper due to be presented at Black Hat Europe in November, Petit says a simple $60 "off-the-shelf" setup -- a laser pointer plus a pulse generator, or a Raspberry Pi should you prefer -- is all that's needed to send a self-driving car's sensors haywire. The researcher says the system could be used to trick the sensors into detecting obstacles that do not exist, forcing the vehicle to slow down, or to bombard the car with so many signals that it would freeze and come to a halt in order to avoid colliding with anything.

Speaking to IEEE Spectrum, Petit said:

"I can take echoes of a fake car and put them at any location I want [..] and I can do the same with a pedestrian or a wall."

The researcher recorded pulses from a commercial LIDAR unit and then fired the signals back at it, allowing him to create ghost obstacles -- including cars, walls and pedestrians -- at ranges of 20 to 350 meters, confusing the LIDAR system and distorting the sensor's readings. Petit explained:

"I can spoof thousands of objects and basically carry out a denial of service attack on the tracking system so it's not able to track real objects."

The attack works at distances of up to 100 meters, and while the researcher acknowledges it is currently limited to one specific unit, the demonstration does highlight that flaws in self-driving vehicles which could cause accidents are a critical issue. We need to start thinking about security in the field now, rather than face the constant patching problems that software vendors and application developers deal with on a daily basis.
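The timing behind the replay attack described above can be sketched as follows. Because the sensor converts round-trip time into distance, an attacker who replays a recorded pulse after a chosen delay controls where the ghost obstacle appears. This is an illustrative sketch, with hypothetical function names and values, not code from Petit's paper:

```python
# Sketch: the delay an attacker would add before replaying a recorded
# pulse so the sensor reports an obstacle at a chosen fake distance.
# Illustrative only -- not an implementation from the research.

C = 299_792_458.0  # speed of light in m/s


def replay_delay(fake_distance_m: float, attacker_distance_m: float) -> float:
    """Extra delay (seconds) before replaying the pulse.

    The sensor measures total round-trip time, so the apparent
    distance it computes is attacker_distance + C * delay / 2.
    Solving for delay gives the value below (only fake distances
    beyond the attacker's own position are reachable this way).
    """
    return 2 * (fake_distance_m - attacker_distance_m) / C


# An attacker 50 m from the sensor spoofing a "wall" at 350 m
# needs a delay of about 2 microseconds.
print(f"{replay_delay(350, 50) * 1e6:.1f} us")  # ~2.0 us
```

The microsecond-scale timing is well within reach of a cheap pulse generator, which is why the hardware cost of the demonstrated setup is so low.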

Ways to tamper with self-driving vehicle systems are the next evolution of security problems we are already seeing. This week, Fiat Chrysler issued a recall notice for roughly 8,000 SUVs which may be affected by a software flaw allowing car systems to be remotely controlled by hackers. The Jeeps, on top of 1.4 million vehicles already recalled, are thought to be impacted by a vulnerability within the Uconnect connected car system.
