
Google pulls plug on AI ethics group only a few weeks after inception

The current climate is to blame, fueled not by AI itself but by controversy over the council's members.
Written by Charlie Osborne, Contributing Writer

Google has axed a group established to debate the ethical implications of artificial intelligence (AI) only a few weeks after its creation.

The emergence of AI, machine learning (ML), and associated technologies, including image and facial recognition, voice-based Internet of Things (IoT) assistants in our homes, predictive analytics, and remote military applications, has raised questions about the ethical use of such technologies.

The Advanced Technology External Advisory Council (ATEAC) was formed less than two weeks ago by Google to debate these issues.

"This group will consider some of Google's most complex challenges that arise under our AI Principles, like facial recognition and fairness in machine learning, providing diverse perspectives to inform our work," the company said at the time.

ATEAC was intended as a means to expand on the company's AI Principles, published in 2018. Among these principles for ethical AI applications are the stipulations that AI be "socially beneficial," avoid unfair bias, be both secure and accountable, and be grounded in scientific principles.

Google added that it would not develop AI for weapons or applications that would cause harm, but also said that the company would "work with governments and the military in many other areas."

Also: CSIRO promotes ethical use of AI in Australia's future guidelines

In an updated blog post on ATEAC, Google's SVP of Global Affairs Kent Walker said it has become clear that "in the current environment, ATEAC can't function as we wanted."

"We're ending the council and going back to the drawing board," Google said. "We'll continue to be responsible in our work on the important issues that AI raises, and will find different ways of getting outside opinions on these topics."

The ethical quandaries AI has created are not at the root of the council's closure after such a short time. Instead, the cause is controversy surrounding two of the ethics committee's eight members.

TechRepublic: How to implement AI and machine learning

According to Vox, the ATEAC closure is in part due to a petition, signed by thousands of Google employees, calling for the removal of board member Kay Coles James. James is the Heritage Foundation's president and has courted controversy over her organization's views on climate change, as well as personal comments she has made relating to the transgender community.


Another member, Dyan Gibbens, is the CEO of drone company Trumbull Unmanned, and her appointment has reopened old wounds over how deeply Google's AI technologies should become entrenched in potential military applications.

Alessandro Acquisti, an information security professor at Carnegie Mellon University, resigned from the council, saying he did not believe it was the "right forum" for the work.


Google's first attempt at creating a board to debate the ethical implications of AI has failed, but this will hopefully not be the company's only try. As the technology evolves, there are important questions that must be both debated and answered to prevent privacy-destroying applications and unmanned warfare from becoming common features of the future.

CNET: Amazon, Google, AI and us: Are we too close for comfort?

In related news this week, Apple has reportedly poached one of Google's best AI experts, Ian Goodfellow. The AI specialist has joined Apple as a Director of Machine Learning in the Special Projects Group.


