
What AI developers need to know about artificial intelligence ethics

Some tools and platforms promise fair and balanced AI, but tools and platforms alone won't deliver ethical AI solutions.
Written by Joe McKendrick, Contributing Writer

If only there were tools that could build ethics into artificial intelligence applications.

Developers and IT teams are under a lot of pressure to build AI capabilities into their company's touchpoints and decision-making systems. At the same time, there is a growing outcry that the AI being delivered is loaded with bias and built-in violations of privacy rights. In other words, it's fertile lawsuit territory. 

There may be some compelling tools and platforms that promise fair and balanced AI, but tools and platforms alone won't deliver ethical AI solutions, says Reid Blackman, who charts ways through thorny AI ethics issues in his upcoming book, Ethical Machines: Your Concise Guide to Totally Unbiased, Transparent and Respectful AI (Harvard Business Review Press). He advises developers working with AI because, in his own words, "tools are efficiently and effectively wielded when their users are equipped with the requisite knowledge, concepts, and training." To that end, Blackman offers several insights that development and IT teams need in order to deliver ethical AI.
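Neither McKendrick nor Blackman names a specific platform, so purely as a hypothetical illustration of the kind of check such fairness tools automate, here is a minimal demographic-parity test. The function names, the two-group data, and the example itself are all invented for this sketch; a real audit would use an established fairness toolkit and far richer metrics.

```python
# Hypothetical illustration of an automated bias check: compare approval
# rates across demographic groups. Names and data are assumptions for
# this sketch, not any specific product's API.
from collections import defaultdict

def selection_rates(decisions, groups):
    """Approval rate per group, given parallel lists of binary model
    decisions (1 = approve) and group labels."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for decision, group in zip(decisions, groups):
        totals[group] += 1
        approvals[group] += decision
    return {g: approvals[g] / totals[g] for g in totals}

def demographic_parity_gap(decisions, groups):
    """Largest difference in approval rates between any two groups.
    A large gap flags potential disparate impact for human review."""
    rates = selection_rates(decisions, groups)
    return max(rates.values()) - min(rates.values())

# Example: approval decisions for two hypothetical applicant groups.
decisions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
print(f"Parity gap: {demographic_parity_gap(decisions, groups):.2f}")  # 0.20
```

A gap of zero means every group is approved at the same rate. Blackman's larger point still stands: deciding how big a gap is tolerable, and for which use case, is an ethical judgment no tool can make for you.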

Don't worry about dredging up your Philosophy 101 class notes 

Considering prevailing ethical and moral theories and applying them to AI work "is a terrible way to build ethically sound AI," Blackman says. Instead, work collaboratively with teams on practical approaches. "What matters for the case at hand is what [your team members] think is an ethical risk that needs to be mitigated and then you can get to work collaboratively identifying and executing on risk-mitigation strategies."

Don't obsess about "harm" 

It's reasonable to be concerned about the harm AI may unintentionally bring to customers or employees, but ethical thinking must be broader. The proper context, Blackman believes, is to think in terms of avoiding the "wronging" of people. This includes "what's ethically permissible, what rights might be violated, and what obligations may be defaulted on."

Bring in an ethicist 

Ethicists, Blackman notes, are "able to spot ethical problems much faster than designers, engineers, and data scientists -- just as the latter can spot bad design, faulty engineering, and flawed mathematical analyses."

Consider the five ethical issues in anything you propose to create or procure

These consist of 1) what you create, 2) how you create it, 3) what people do with it, 4) what impacts it has, and 5) what to do about these impacts. 

AI products "are a bit like circus tigers," Blackman says. "You raise them like they're your own, you train them carefully, they perform beautifully in show after show after show, and then one day they bite your head off." The ability to tame AI depends on "how we trained it, how it behaves in the wild, how we continue to train it with more data, and how it interacts with the various environments it's embedded in." But changing variables -- such as pandemics or political environments -- "can make AI ethically riskier than it was on the day you deployed it."
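The article doesn't prescribe a monitoring technique, but as a minimal sketch of how a team might watch for the "changing variables" Blackman describes, here is a population stability index (PSI) check, a common drift statistic: it compares the distribution of a feature at deploy time against what the model sees in production. The feature values, bin count, and the conventional 0.2 alert threshold are illustrative assumptions.

```python
# A minimal sketch of post-deployment drift monitoring, assuming you retain
# a reference sample of feature values from deploy time. The data and the
# 0.2 threshold are assumptions for illustration.
import math

def psi(reference, current, bins=10):
    """Population stability index between a deploy-time reference sample
    and a current production sample of one numeric feature."""
    lo, hi = min(reference), max(reference)
    width = (hi - lo) / bins or 1.0
    def histogram(values):
        counts = [0] * bins
        for v in values:
            i = min(int((v - lo) / width), bins - 1)
            counts[max(i, 0)] += 1
        # Smooth empty bins so the log term below stays defined.
        return [(c + 1e-6) / len(values) for c in counts]
    ref, cur = histogram(reference), histogram(current)
    return sum((c - r) * math.log(c / r) for r, c in zip(ref, cur))

# If the input distribution shifts (say, a pandemic changes applicant
# incomes), PSI rises and the model is due for ethical re-review.
reference = [30, 35, 40, 45, 50, 55, 60, 65, 70, 75]
current   = [55, 60, 65, 70, 75, 80, 85, 90, 95, 99]
if psi(reference, current) > 0.2:
    print("Significant drift detected: re-validate the model's behavior.")
```

A check like this only signals that the world has moved; whether the model's behavior in the new environment wrongs anyone is, again, a question for the humans reviewing it.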
