IBM is rolling out recommended Watson OpenScale bias monitors for artificial intelligence and machine learning models.
These monitors automatically identify attributes such as sex, ethnicity, marital status and age and flag them for monitoring. By surfacing attributes up front, IBM removes the need for users to manually select which attributes to monitor.
Watson OpenScale's recommended bias monitors can be edited by users, according to an IBM blog.
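The idea behind such a monitor can be sketched in a few lines. The following is a hypothetical illustration, not the OpenScale API: it compares favorable-outcome rates across groups of a flagged attribute using the disparate impact ratio, a common fairness metric. All names and the toy data are assumptions for the example.

```python
def favorable_rate(records, attribute, group, outcome_key="approved"):
    """Share of records in `group` that received the favorable outcome."""
    subset = [r for r in records if r[attribute] == group]
    if not subset:
        return 0.0
    return sum(1 for r in subset if r[outcome_key]) / len(subset)

def disparate_impact(records, attribute, monitored_group, reference_group):
    """Ratio of favorable-outcome rates, monitored group vs. reference group.
    Values well below 1.0 suggest the model disfavors the monitored group."""
    ref = favorable_rate(records, attribute, reference_group)
    mon = favorable_rate(records, attribute, monitored_group)
    return mon / ref if ref else float("inf")

# Toy predictions from a hypothetical loan-approval model.
predictions = [
    {"sex": "female", "approved": True},
    {"sex": "female", "approved": False},
    {"sex": "female", "approved": False},
    {"sex": "female", "approved": False},
    {"sex": "male", "approved": True},
    {"sex": "male", "approved": True},
    {"sex": "male", "approved": True},
    {"sex": "male", "approved": False},
]

ratio = disparate_impact(predictions, "sex", "female", "male")
print(round(ratio, 2))  # 0.25 / 0.75 -> 0.33
```

A monitoring service would compute a metric like this continuously over live scoring data and alert when the ratio crosses a threshold; the automation IBM describes is in choosing which attributes to watch in the first place.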
The company has been building out its Watson suite of products to run on multiple clouds, make data preparation easier and scale algorithms in enterprises. Bias has become a key issue for companies as algorithms scale. One issue is that while an individual model may not exhibit bias, problems can develop when algorithms are combined. Large technology vendors are starting to address algorithmic bias via automation, software and education.
IBM added that it is working with Promontory to expand the list of attributes it covers in order to address regulatory requirements. IBM is trying to get ahead of algorithmic bias since it is likely to face more regulation in the future. Companies like IBM have led the charge on AI governance, job impact, transparency and bias.
- Salesforce adds AI bias module to Trailhead
- Recognizing the need to check for bias in algorithms
- Big data bias: Making metrics more science and less alchemy
- Google says it will address AI, machine learning model bias with technology called TCAV
- How College Board's Environmental Context Dashboard highlights algorithm transparency vs. explainability issue
- CIO Jury: 92 percent of tech leaders have no policy for ethically using AI