Trustworthy AI: EU releases guidelines for ethical AI development

The European Union hopes to encourage AI applications that support human rights rather than strip them away.
Written by Charlie Osborne, Contributing Writer

The European Union has published a set of guidelines which the organization hopes will promote the creation of trustworthy and ethical artificial intelligence (AI) applications.

The guidelines, published on Monday, outline "essential steps" that developers should take. They are based on a draft proposal published in December 2018 and revised in response to more than 500 comments received during the feedback period.

The proposal enters the gray area of how far is too far when it comes to AI.

Computer vision, deep learning, face and object recognition, natural language processing -- all of the technologies grouped under the AI umbrella can be valuable in both consumer and enterprise applications, but their use in other areas has already proven to be a cause for concern.

Amazon has come under fire for offering its Rekognition platform to law enforcement and government entities, and Google closed its ethical AI panel after less than two weeks following controversy swirling around the committee's members.

Law enforcement, military, and government applications of AI which may result in unmanned warfare, an unacceptable level of surveillance, censorship, monitoring, and the biased recognition of criminal suspects have all become recent topics of discussion across business and academia alike.

In the hopes of steering AI towards beneficial usage rather than applications which strip or stifle human rights, the EU's revised standards have now been published and are outlined below:

Human agency and oversight: AI systems should enable equitable societies by supporting human agency and fundamental rights, and not decrease, limit or misguide human autonomy.

Robustness and safety: Trustworthy AI requires algorithms to be secure, reliable and robust enough to deal with errors or inconsistencies during all life cycle phases of AI systems.

Privacy and data governance: Citizens should have full control over their own data, while data concerning them will not be used to harm or discriminate against them.

Transparency: The traceability of AI systems should be ensured.

Diversity, non-discrimination, and fairness: AI systems should consider the whole range of human abilities, skills and requirements, and ensure accessibility.

Societal and environmental well-being: AI systems should be used to enhance positive social change and enhance sustainability and ecological responsibility.

Accountability: Mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes.

The proposal also suggests that particular attention should be paid to protect vulnerable individuals, potentially including the disabled, children, and the elderly. The EU further says that developers should accept that "while bringing substantial benefits to individuals and society, AI systems also pose certain risks and may have a negative impact, including impacts which may be difficult to anticipate."

The guidelines are far from legally binding, but if developers, academics, human rights groups, and businesses consider them acceptable, they may provide the foundation for future EU legislation.

A pilot program involving stakeholders will be launched later this year to review the proposal more thoroughly and provide feedback. The EU also wants businesses interested in participating to join the European AI Alliance.

"Today, we are taking an important step towards ethical and secure AI in the EU," said Mariya Gabriel, EU Commissioner for Digital Economy and Society. "We now have a solid foundation based on EU values and following an extensive and constructive engagement from many stakeholders including businesses, academia, and civil society. We will now put these requirements to practice and at the same time foster an international discussion on human-centric AI."
