
IEEE publishes draft report on 'ethically aligned' AI design

More than 100 experts in artificial intelligence (AI) and ethics are attempting to advance public discussion surrounding the ethical considerations of AI.
Written by Stephanie Condon, Senior Writer

As the tech world pushes forward with the development of artificial intelligence, the Institute of Electrical and Electronics Engineers (IEEE) is asking everyone to pause and consider the ethical ramifications. It has kicked off that process by publishing the first draft of a document that explores a range of ethical challenges posed by AI.

The document, "Ethically Aligned Design: A Vision for Prioritizing Human Wellbeing with Artificial Intelligence and Autonomous Systems (AI/AS)," was drafted by committees of the IEEE's Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems. More than 100 global thought leaders and experts on AI, ethics, and related issues are contributing to the initiative.

With the first draft published, the IEEE is inviting public comment on the document.

"We need to make sure that these technologies are aligned to humans in terms of our moral values and ethical principles," the report's executive summary says. "AI/AS have to behave in a way that is beneficial to people beyond reaching functional goals and addressing technical problems... By aligning the creation of AI/AS with the values of its users and society we can prioritize the increase of human wellbeing as our metric for progress in the algorithmic age."

The document includes eight sections, each drafted by a different committee of the IEEE Global Initiative. It begins with general principles articulated by a committee that considered the "high-level ethics concerns" that apply to all types of AI/AS. Those principles include "human benefit" (ensuring that AI does not infringe human rights), responsibility, transparency, and education and awareness.

The second section tackles the challenge of embedding values into autonomous intelligence systems (AIS). It acknowledges various challenges, such as the fact that values are not always universal and that AIS may be subject to multiple, conflicting values.

Next, the document explores methodologies to guide ethical research and design. The fourth section considers the safety and beneficence of artificial general intelligence and artificial superintelligence. This advanced level of AI "may have a transformative effect on the world on the scale of the agricultural or industrial revolutions, which could bring about unprecedented levels of global prosperity," the report says. At the same time, as AI systems become more capable, "unanticipated or unintended behavior becomes increasingly dangerous."

The next section addresses the fundamental need for people to define, access, and manage their personal data. After that, the report considers the challenges of reframing autonomous weapons systems, recommending practices such as audit trails that guarantee accountability over these systems.

The seventh section explores economic and humanitarian considerations, such as employment issues that arise with the development of AI. The last section considers the role of the law, such as legal ways to improve accountability for AI.

The IEEE is asking for input on the document by March 6, 2017. Meanwhile, other organizations are also broaching the ethical questions arising with autonomous systems. Carnegie Mellon University recently announced it's establishing a new research center focused on the ethics of AI. Additionally, Google, Facebook, Amazon, Microsoft, and IBM recently formed a not-for-profit organization to educate the public and open up dialogue about AI.
