
Feds give Google's autonomous vehicles a win: Now about those liability issues

Remember that IT circle of pain and finger-pointing when complex technology projects go awry? Just imagine what'll happen when your friendly AI driver is in a crash. Who takes the liability hit?
Written by Larry Dignan, Contributor

Google's self-driving vehicle system can be considered a driver, according to the National Highway Traffic Safety Administration. The move furthers the cause of autonomous vehicles, but it also opens up a series of questions at the intersection of the Internet of Things, business, and liability.

CNET's Roadshow has the details on the NHTSA's regulatory take, and the letter between Google and the feds is here. In a nutshell, here's what was worked out:

  • Google sought clarification on how the company's self-driving cars could meet Federal Motor Vehicle Safety Standards, which govern how automobiles are built.
  • The NHTSA determined that Google's self-driving system would be considered the driver for the purposes of those vehicle safety standards.
  • Brake pedal positioning and sensors were addressed in a way that conforms with federal safety standards.
  • For Google, the federal declaration is a win and could put autonomous vehicles on the road faster. Overall, it's a nice step for artificial intelligence that an autonomous system can be declared a driver.

But here's where things get sticky: What happens when there is an accident? You can't give an algorithm a ticket. You can't sue an algorithm. And pinpointing blame for any autonomous vehicle wreck is going to be the equivalent of an enterprise resource planning implementation disaster. The customer will point at the vendor. The vendor will point at the customer and the integrator. Every supplier in the technology chain will take its lumps. Everyone is to blame, but not really.

Welcome to the world of complex systems. Autonomous vehicles are going to be a complex technology, business, legal and regulatory system.

Richard Windsor, an analyst at Edison Investment Research, noted that 2020 is the deadline automakers have set for getting autonomous cars on the road. That timeline is likely to be pushed to 2030. Why? Windsor said:

Liability is the biggest problem that faces autonomous driving as sending an algorithm to prison is not a practical option. When an autonomous vehicle crashes - and they will - the question arises as to who is responsible for the crash.

If a driver is asleep and the AI is driving the car home, is that person to blame for an accident? If the human is to blame, then why would you bother with an autonomous vehicle?

What if the auto industry is liable for its systems? Maybe the algorithms are developed on the cheap. Perhaps the sensors are off. Windsor said that the automotive industry may become the riskiest in the world. Would Google take on that risk?

Should the suppliers be liable in the event of an autonomous vehicle accident? Windsor added:

If the liability is to fall upon the supplier, then it is almost certain to claim that the auto maker didn't install the software or component properly or otherwise made modifications that caused it to fail. This is one of the biggest problems when systems get complex: there is a combinatory explosion of possible outcomes in any one scenario.

It is clear that in any one fatal incident, the blame game has the potential to go on for years, and there are likely to be fatal incidents on a daily basis - 32,719 people died in road vehicle crashes in the US in 2013, roughly 90 a day.

Anyone who has been through the enterprise IT circle of pain knows what Windsor is talking about.

Assuming the technology and liability issues can be worked out, there are multiple business items to ponder. How will insurers react to autonomous vehicles? Will rates go up? And then there are the cultural issues. Bottom line: Google notched a nice milestone, but there's still a long way to go before an algorithm takes your kid to preschool.
