
Google says it won't build AI for weapons

In a set of principles laid out to guide its development of artificial intelligence, Google also said it won't build AI for surveillance that violates "internationally accepted norms."
Written by Stephanie Condon, Senior Writer

Weeks after facing both internal and external blowback for its contract selling AI technology to the Pentagon for drone video analysis, Google on Thursday published a set of principles that explicitly states it will not design or deploy AI for "weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people."

Google committed to seven principles to guide its development of AI applications, and it laid out four specific areas for which it will not develop AI. In addition to weaponry, Google said it will not design or deploy AI for:

  • Technologies that cause or are likely to cause harm.
  • Technologies that gather or use information for surveillance violating internationally accepted norms.
  • Technologies whose purpose contravenes widely accepted principles of international law and human rights.

While Google is rejecting the use of its AI for weapons, "we will continue our work with governments and the military in many other areas," Google CEO Sundar Pichai wrote in a blog post. "These include cybersecurity, training, military recruitment, veterans' healthcare, and search and rescue. These collaborations are important and we'll actively look for more ways to augment the critical work of these organizations and keep service members and civilians safe."

Google's contract with the Defense Department came to light in March, after Gizmodo published details about a pilot project shared on an internal mailing list. Thousands of Google employees signed a petition against the contract, and some quit in protest. Google then reportedly told its staff it would not bid to renew the contract, for the Pentagon's Project Maven, after it expires in 2019.

In his blog post, Pichai said the seven principles laid out Thursday "are not theoretical concepts; they are concrete standards that will actively govern our research and product development and will impact our business decisions."

The seven principles state that AI should:

  • Be socially beneficial.
  • Avoid creating or reinforcing unfair bias.
  • Be built and tested for safety.
  • Be accountable to people.
  • Incorporate privacy design principles.
  • Uphold high standards of scientific excellence.
  • Be made available for uses that accord with these principles.

Google's work with the Pentagon may have drawn the most attention, but other major companies are also facing questions about the ethical principles guiding their AI development. Amazon, for instance, has been called out by the ACLU for providing facial recognition tools to law enforcement.
