Google outlined its artificial intelligence principles in a move to placate employees who were worried about their work and research winding up in U.S. weapons systems.
Guess what? It's already too late. There's no way that Google's open source approach and its headline principle of keeping its AI out of weapons are going to mesh. Chances are fairly good that the technology already open sourced is in some fledgling weapon system somewhere. After all, TensorFlow and a bunch of other neural network tools are pretty damn handy.
As Google put it in its announcement:

> Beyond our products, we're using AI to help people tackle urgent problems. A pair of high school students are building AI-powered sensors to predict the risk of wildfires. Farmers are using it to monitor the health of their herds. Doctors are starting to use AI to help diagnose cancer and prevent blindness. These clear benefits are why Google invests heavily in AI research and development, and makes AI technologies widely available to others via our tools and open-source code.
And that's all true. It's also true that any technology can be used for good and evil. That's the real pickle in Google's AI approach: it sounds good in theory, but carrying it out is going to create a few issues.
What happens when an AI approach built for good is open sourced and then used for evil? And whose definition of evil applies, anyway?
Google's seven principles go as follows:
- Be socially beneficial
- Avoid creating or reinforcing unfair bias
- Be built and tested for safety
- Be accountable to people
- Incorporate privacy design principles
- Uphold high standards of scientific excellence
- Be made available for uses that accord with these principles
That last item may be the trickiest. Google is supposed to gauge how likely it is that its technology will be adapted for harm. Google's goal is worthwhile, but the bad guys can innovate well too.
Google concludes by stating it won't pursue AI that causes overall harm, is used in weapons, surveils people, or violates human rights.
Of course, Google won't set out to do harm, but technologies are adapted for evil all the time. If Google wants to really keep a lid on AI for evil it may want to reconsider open source. Once the code is released publicly, Google can't put the AI genie back into the bottle or force anyone to adhere to its principles.