
Can Alphabet put "AI first" while considering the ethical implications?

Alphabet-owned Google is making AI central to its mission; its sister company DeepMind has started a group to grapple with the ethical impact of AI.
Written by Stephanie Condon, Senior Writer

With the launch of the Pixel 2 and Pixel 2 XL phones, Google on Wednesday showcased just one way it's making artificial intelligence a key component of its products and services. Since 2016, Google has followed an "AI first" strategy, betting that artificial intelligence will matter far more than devices.

With this level of focus and investment in AI, Google's parent company Alphabet has plenty of incentive to avoid the backlash that can come from AI deployments gone awry.

Just a day earlier, DeepMind -- Alphabet's artificial intelligence division -- launched DeepMind Ethics & Society, a research unit intended "to explore and better understand the real-world impacts of AI."

"Technology is not value neutral, and technologists must take responsibility for the ethical and social impact of their work," DeepMind said in a blog post, penned by Verity Harding and Sean Legassick, the "co-leads" of the new research unit. "At DeepMind, we start from the premise that all AI applications should remain under meaningful human control, and be used for socially beneficial purposes."

The new group plans to publish original research, on its own and in collaboration with others. Its work will be guided by "key research themes" such as "economic impact" and "AI morality and values." The group also intends to follow five core principles: that AI should have social benefit; that its research should be rigorous and evidence-based; that the group should be transparent; that its work should be interdisciplinary and include diverse viewpoints; and that it should be collaborative and inclusive.

"Our research will be available online, and contribute to the evidence-base for meaningful debate about the real-world impacts of AI," the group says on its FAQ page.

The research unit's principles were developed with input from independent "fellows," who include academics like Nick Bostrom, founding director of Oxford University's Future of Humanity Institute.

This latest effort to explore the ethical implications of AI follows the launch of several other initiatives, some more independent than others. Last year, in fact, Google and DeepMind joined Amazon, Facebook, IBM, and Microsoft to form the Partnership on AI, a not-for-profit organization focused on building AI best practices and publishing research under an open license in areas such as ethics, fairness, and inclusivity. Additionally, Carnegie Mellon University last year announced the establishment of a new research center focused on the ethics of artificial intelligence; the university has ties to Google, as well as to other for-profit technology companies like Uber.

While DeepMind says the research unit is committed to transparency, the company has come under fire recently for some opaque behavior: the UK's Information Commissioner's Office ruled this past summer that London's Royal Free Hospital failed to comply with the Data Protection Act when it gave DeepMind access to personal data from 1.6 million patients. DeepMind's access to the data only came to light following a New Scientist report in 2016.
