Google will reportedly not renew its contract with the Pentagon to develop artificial intelligence for drone video analysis.
The decision follows objections by thousands of staff worried that Google's technology could be used to kill people.
Google Cloud, headed by Diane Greene, won a Pentagon contract dubbed Project Maven to develop AI that can help recognize people and objects captured in drone footage.
Greene told employees in a Friday staff meeting that it won't bid to renew that contract, which expires in 2019, according to Gizmodo's sources.
The contract has turned into a PR headache for Google, prompting criticism from thousands of employees who believe it betrays the company's famous 'Don't be evil' motto and will jeopardize its already uphill battle to maintain the public's trust.
Over 4,000 staff have signed a petition calling on Google to quit the work, and about a dozen employees have resigned over the company's involvement, despite Google's insistence that the technology was for "non-offensive uses only".
The contract was reportedly worth $9m. However, leaked emails show that senior executives at Google were keen on the project because it could help Google Cloud win much larger deals in the future.
The contract was crucial to Google Cloud Platform gaining a key US government FedRAMP authorization, which would set it up for future government work worth potentially billions of dollars. Project Maven also had the potential to expand to a $250m deal.
The change in direction also follows a New York Times report last week based on leaked emails from Fei-Fei Li, Google's chief scientist for AI at Google Cloud.
After winning Project Maven, Li warned staff involved in Google's defense and intelligence sales unit to be extremely cautious about how they communicated it publicly, urging them to convey it as a cloud infrastructure project.
"Avoid at ALL COSTS any mention or implication of AI. Weaponized AI is probably one of the most sensitized topics of AI -- if not THE most. This is red meat to the media to find all ways to damage Google," she wrote, concerned that the media would interpret the work as Google "secretly building AI weapons" for the defense industry.
However, Greene opted against publicizing the deal, which became public after staff protested the work on Google's internal message board.
While Google claimed only to be providing its open-source TensorFlow APIs to the Pentagon, emails seen by Gizmodo show that Google was planning to build a 'Google Earth-like' surveillance system for Pentagon analysts, enabling highly accurate, real-time views of people, vehicles, crowds, and land features across an entire city.
Early efforts by Google proved its system could detect vehicles that human analysts missed.
Google now also plans to release a new document next week outlining ethical principles for its use of AI.
Previous and related coverage
Google employees protest: 'Stop work on AI for Pentagon drone video analysis'
Google workers call on the company to cease developing AI for warfare.
Google employee protest: Now 'Googlers are quitting' over Pentagon drone project
The number of employees against Google's Project Maven role grows, and they're now backed by a big group of academics.
Google erases 'Don't be evil' from code of conduct after 18 years
Google quietly removes its famous 'Don't be evil' motto, which used to figure in the opening to its code of conduct.
Ex-Google CEO Schmidt's warfare warning: We need AI ground rules for Pentagon work
But Eric Schmidt urges DoD to do more AI programs like the one that sparked protests from Google employees.