
Google's do no evil AI style likely to clash with open source approach

Google says its AI won't be used in weapons or for harm, but once you open source a project, the code is out there to be used for both good and evil (like any other tool, by the way).
Written by Larry Dignan, Contributor

Google outlined its artificial intelligence principles in a move to placate employees who were worried about their work and research winding up in U.S. weapons systems.

Guess what? It's already too late. There's no way that Google's open source approach and its headline principle to not allow its AI into weapons is going to mesh. Chances are fairly good that the technology already open sourced is in some fledgling weapon system somewhere. After all, TensorFlow and a bunch of other neural network tools are pretty damn handy.

In a blog post outlining Google's approach going forward--think 'do no evil, AI style'--CEO Sundar Pichai gave the company's open source efforts props high up. He said:

Beyond our products, we're using AI to help people tackle urgent problems. A pair of high school students are building AI-powered sensors to predict the risk of wildfires. Farmers are using it to monitor the health of their herds. Doctors are starting to use AI to help diagnose cancer and prevent blindness. These clear benefits are why Google invests heavily in AI research and development, and makes AI technologies widely available to others via our tools and open-source code.

And that's all true. It's also true that any technology can be used for good and evil. And that's the real pickle with Google's AI approach: it sounds good in theory, but carrying it out is going to create a few issues.


What happens when an AI approach that's good is open sourced and used for evil? And whose definition of evil is it anyway?

Google's seven principles go as follows:

  • Be socially beneficial
  • Avoid creating or reinforcing unfair bias
  • Be built and tested for safety
  • Be accountable to people
  • Incorporate privacy design principles
  • Uphold high standards of scientific excellence
  • Be made available for uses that accord with these principles

That last item may be the trickiest. Google is supposed to gauge how likely it is that its technology will be adapted for harm. Google's goal is worthwhile, but the bad guys can innovate well, too.


Google concludes by saying it won't pursue AI that can cause overall harm, be used in weapons, be used to spy on people, or violate human rights.

Of course, Google won't set out to do harm, but technologies are adapted for evil all the time. If Google really wants to keep a lid on AI for evil, it may want to reconsider open source. Once the code is released publicly, Google can't put the AI genie back into the bottle or force anyone to adhere to its principles.
