
Can AI really be ethical and unbiased?

Artificial intelligence will change industries and our lives, and camps of every kind will be debating how to combine algorithms and ethics. The questions surrounding AI will only proliferate.
Written by Larry Dignan, Contributor

The Obama administration's report on the future of artificial intelligence mentions the word "ethics" 11 times and "bias" 23 times.

What's unclear about the future of artificial intelligence (AI) is whether you can put ethics into an algorithm and test it. It's also unclear whether you can eliminate bias, whether it's embedded in AI systems on purpose or by accident.

There's also a question about ethics and bias on the global stage. After all, whose ethics are we humans going to program? Welcome to what may be this century's biggest technological advance and all the societal questions that come with it across the public and private sectors.

Previously: Obama's report on the future of artificial intelligence: The main takeaways | We aren't getting ready for the AI revolution: That needs to change, and fast | What race is your AI? Obama discussion adds politics to tech

AI, which will inevitably cost (and create) jobs as it automates various tasks, is going to be a hot-button issue for decades to come. Introducing ethics and bias into the equation is going to make AI even trickier to implement.

Let's ponder two excerpts from the Obama report on AI:

Ethics:

Schools and universities should include ethics, and related topics in security, privacy, and safety, as an integral part of curricula on AI, machine learning, computer science, and data science.

Fair enough. These issues can be covered either in separate classes or woven throughout an AI-focused major. The report also touches on automation in the military and the reality that AI can be used for good or evil. Again, it's highly unlikely that ethics are going to be standard across countries or even companies.

Bias:

Commenters focused on the importance of AI being produced by and for diverse populations. Doing so helps to avoid the negative consequences of narrowly focused AI development, including the risk of biases in developing algorithms, by taking advantage of a broader spectrum of experience, backgrounds, and opinions. These topics were also covered extensively during the public workshops. There is some research on the effects of a lack of diversity in the AI workforce on AI technology design and on the societal impacts of AI. This rich body of research is growing but still lagging behind the literature on broader AI workforce development needs. More research would be beneficial...

The difficulty of understanding machine learning results is at odds with the common misconception that complex algorithms always do what their designers choose to have them do, and therefore that bias will creep into an algorithm if and only if its developers themselves suffer from conscious or unconscious bias. It is certainly true that a technology developer who wants to produce a biased algorithm can do so, and that unconscious bias may cause practitioners to apply insufficient effort to preventing bias. In practice, however, unbiased developers with the best intentions can inadvertently produce systems with biased results, because even the developers of an AI system may not understand it well enough to prevent unintended outcomes.
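That last point is worth making concrete. Below is a toy sketch in Python (the dataset, feature names, and rates are all invented for illustration) of how a model that never sees a protected attribute can still produce skewed outcomes, because a proxy feature in skewed historical data stands in for group membership:

```python
# Toy sketch: a "neutral" model inherits bias from skewed training data.
# All names and numbers (group, zip_code, hired, the rates) are invented.
import random

random.seed(0)

def make_record():
    group = random.choice(["A", "B"])
    # zip_code acts as a proxy: group B mostly lives in zip 2.
    zip_code = 2 if group == "B" and random.random() < 0.8 else 1
    # Historical hiring favored zip 1, so the labels are skewed too.
    hire_rate = 0.7 if zip_code == 1 else 0.2
    hired = 1 if random.random() < hire_rate else 0
    return group, zip_code, hired

data = [make_record() for _ in range(10_000)]

# A trivial "model": predict the majority historical outcome per zip code.
# Note that the protected attribute (group) is never used as a feature.
by_zip = {}
for _, zip_code, hired in data:
    by_zip.setdefault(zip_code, []).append(hired)
model = {z: sum(v) / len(v) >= 0.5 for z, v in by_zip.items()}

# Audit predictions by group: the disparity survives because zip_code
# is correlated with group membership in the training data.
for g in ("A", "B"):
    preds = [model[z] for grp, z, _ in data if grp == g]
    print(f"group {g}: predicted-hire rate = {sum(preds) / len(preds):.2f}")
```

No developer in this sketch chose to discriminate, and the model never sees group membership; the disparity comes entirely from the correlations baked into the training data.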

When you combine the two, it's easy to see how AI is going to go from something interesting, like Alexa, Google, Cortana, Facebook, and IBM's Watson, to a technology that becomes the focus of Washington in a hurry. AI will be hailed and vilified in equal doses. Rest assured there will be AI for good as well as evil.

The IEEE has formed an initiative to examine ethical considerations in the design of autonomous systems including robotics, artificial intelligence, computational intelligence, machine learning, deep learning, cognitive computing, affective computing, and algorithmically-based programs overall.

George Thiruvathukal, IEEE member and professor of computer science at Loyola University, said the ethics and bias discussion with AI largely depends on whether the system has become indistinguishable from a human. He added:

When it comes to understanding ethics and bias in AI, this is almost tantamount to asking whether the AI system in question is a human (known as the Turing Test, named for Alan Turing, one of the pioneers in computer science and AI, even before it was a discipline). So for a system to express bias would imply more human-like expression, not altogether different from other forms of expression like emotion, etc.

There are other ways, however, where bias could be unintentionally expressed. As many AI systems involve training datasets that may be inherently biased, it is possible for inferences to incorporate actual bias from source materials.
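Thiruvathukal's second point is straightforward to demonstrate. Here's a hypothetical Python sketch (the tiny corpus and the word associations in it are fabricated) in which an inference simply reads a skewed association back out of its source text:

```python
# Hypothetical sketch: inferences absorbing bias from source text.
# The tiny "corpus" and its word associations are fabricated.
from collections import Counter

corpus = [
    "the nurse said she would help",
    "the engineer said he fixed it",
    "the nurse said she was tired",
    "the engineer said he was busy",
]

# Count how often each profession co-occurs with a gendered pronoun.
pairs = Counter()
for sentence in corpus:
    words = set(sentence.split())
    for prof in ("nurse", "engineer"):
        for pron in ("he", "she"):
            if prof in words and pron in words:
                pairs[(prof, pron)] += 1

# A system that "infers" gender from profession just reads the skew
# back out of the data -- no malicious developer required.
for prof in ("nurse", "engineer"):
    likely = max(("he", "she"), key=lambda pron: pairs[(prof, pron)])
    counts = {p: pairs[(prof, p)] for p in ("he", "she")}
    print(f"{prof} -> {likely}  {counts}")
```

Scale the corpus up to millions of documents and a learned model will faithfully reproduce whatever patterns its source materials contain, biases included.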

The federal government's AI report appears to be focused on data bias for now.

The IEEE initiative launched in April and is starting to build momentum. After all, this conversation can't start soon enough across multiple disciplines.

ZDNet Monday Morning Opener

The Monday Morning Opener is our opening salvo for the week in tech. Since we run a global site, this editorial publishes on Monday at 8:00am AEST in Sydney, Australia, which is 6:00pm Eastern Time on Sunday in the US. It is written by a member of ZDNet's global editorial board, which comprises our lead editors across Asia, Australia, Europe, and the US.
