IBM launches tools to detect AI fairness, bias and open sources some code

A lack of bias detection and black box transparency is limiting AI deployments at scale, argues IBM.
Written by Larry Dignan, Contributor

IBM said it will launch cloud software designed to manage artificial intelligence deployments, detect bias in models, mitigate its impact, and monitor decisions across multiple frameworks.

The move by IBM highlights how AI management is becoming more of an issue as companies deploy machine learning and various models to make decisions. Executives are likely to have trouble understanding models and the data science under the hood.

IBM said its technology will monitor AI so enterprises can comply with regulations. In addition, IBM's software works with models built on machine learning frameworks such as Watson, TensorFlow, SparkML, AWS SageMaker, and AzureML.

Meanwhile, IBM said it will open source IBM Research's bias detection tools via what it calls its AI Fairness 360 toolkit. The toolkit will provide a library of novel algorithms, code and tutorials. The hope is that academics, researchers and data scientists will integrate bias detection into their models. IBM's AI bias detection tools are on GitHub.
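
To give a sense of what a check with the toolkit might look like, here is a minimal sketch using the open source aif360 Python package. The applicant data, column names and group definitions below are made up for illustration; this is not IBM's own example.

```python
# Hypothetical sketch: measuring group bias with the open source AI Fairness 360 toolkit.
# Assumes `pip install aif360 pandas`; the data below is made up for illustration.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Toy loan decisions: 1 = approved, 0 = rejected; 'sex' is the protected attribute (1 = privileged).
df = pd.DataFrame({
    "sex":   [1, 1, 1, 1, 0, 0, 0, 0],
    "age":   [35, 52, 41, 29, 33, 48, 27, 56],
    "label": [1, 1, 1, 0, 1, 0, 0, 0],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["label"],
    protected_attribute_names=["sex"],
    favorable_label=1,
    unfavorable_label=0,
)

metric = BinaryLabelDatasetMetric(
    dataset,
    privileged_groups=[{"sex": 1}],
    unprivileged_groups=[{"sex": 0}],
)

# Statistical parity difference: approval rate of the unprivileged group minus the privileged group.
# A value near zero suggests parity; a strongly negative value flags potential bias.
print("Statistical parity difference:", metric.statistical_parity_difference())
print("Disparate impact ratio:", metric.disparate_impact())
```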

Ritika Gunnar, vice president of IBM Watson Data and AI, said in an interview that the lack of trust and transparency in AI models is holding back enterprise deployments at scale and in production. Simply put, models stay on the shelf due to concerns about how real-time decision making could harm a business. "It's a real problem and trust is one of the most important things preventing AI at scale in production environments," she said.

Strategically, IBM's move makes sense. IBM hopes not only to provide Watson AI, but also to manage AI and machine learning deployments overall. It's just a matter of time before AI management becomes an acronym among technology vendors. IBM said it plans to provide explanations that show how factors were weighted, along with the confidence in recommendations and the accuracy, performance, fairness and lineage of AI systems.

There is little transparency into the models being sold, their inherent biases, or the fine print behind them. IBM Research recently proposed an effort to add the equivalent of a UL rating to AI services.

IBM said it will also offer services for enterprises looking to better manage AI and avoid black box thinking.

Big Blue's research unit recently penned a white paper outlining its take on AI bias and how to prevent it. IBM's Institute for Business Value found that 82 percent of enterprises are considering AI deployments, but 60 percent fear liability issues.

Gunnar noted that AI bias goes well beyond factors such as gender and race. One scenario could revolve around an insurance claims process and an adjuster deciding whether to approve or reject a claim. Items such as how long a policy has been held, the value of a vehicle, age and zip code could play into what Gunnar called "non-societal bias."
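
To illustrate the idea (this is not IBM's code), a few lines of plain Python can surface that kind of skew by comparing approval rates across a non-protected attribute such as zip code. The claims data and the 0.8 threshold below are hypothetical.

```python
# Hypothetical sketch: flagging "non-societal" bias in claim approvals by zip code.
# The data, column names and 0.8 threshold are illustrative assumptions, not IBM's.
import pandas as pd

claims = pd.DataFrame({
    "zip_code": ["10001", "10001", "10001", "60629", "60629", "60629"],
    "approved": [1, 1, 0, 0, 0, 1],
})

# Approval rate per zip code, and the ratio of the lowest to the highest rate.
approval_rates = claims.groupby("zip_code")["approved"].mean()
ratio = approval_rates.min() / approval_rates.max()

print(approval_rates)
print(f"Approval-rate ratio across zip codes: {ratio:.2f}")

# Borrowing the common "four-fifths" rule of thumb: a ratio below 0.8 warrants a closer look.
if ratio < 0.8:
    print("Potential non-societal bias: decisions skew sharply by zip code.")
```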

IBM's AI Fairness 360 open source tools include tutorials on AI bias in credit scoring, medical expenses, and gender bias in facial images.
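
Those tutorials pair detection with mitigation. Continuing the toy loan example above, here is a hedged sketch of applying one of the toolkit's pre-processing algorithms, Reweighing, which reweights training examples so that favorable outcomes are balanced across groups; again, this is an illustration rather than IBM's tutorial code.

```python
# Hypothetical sketch: mitigating dataset bias with the toolkit's Reweighing pre-processor.
# Reuses the toy `dataset` and group definitions from the detection sketch above.
from aif360.algorithms.preprocessing import Reweighing
from aif360.metrics import BinaryLabelDatasetMetric

privileged = [{"sex": 1}]
unprivileged = [{"sex": 0}]

rw = Reweighing(unprivileged_groups=unprivileged, privileged_groups=privileged)
dataset_reweighed = rw.fit_transform(dataset)  # assigns instance weights, leaves labels untouched

metric_after = BinaryLabelDatasetMetric(
    dataset_reweighed,
    privileged_groups=privileged,
    unprivileged_groups=unprivileged,
)

# With the learned instance weights applied, the weighted approval-rate gap should move toward zero.
print("Mean difference after reweighing:", metric_after.mean_difference())
```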

Previous and related coverage:

What is AI? Everything you need to know

An executive guide to artificial intelligence, from machine learning and general AI to neural networks.

What is deep learning? Everything you need to know

The lowdown on deep learning: from how it relates to the wider field of machine learning through to how to get started with it.

What is machine learning? Everything you need to know

This guide explains what machine learning is, how it is related to artificial intelligence, how it works and why it matters.

What is cloud computing? Everything you need to know

An introduction to cloud computing right from the basics up to IaaS and PaaS, hybrid, public, and private cloud.
