IBM on Monday announced it's donating a series of open-source toolkits for building trusted AI to the LF AI Foundation, a Linux Foundation project. As real-world AI deployments increase, IBM says the contributions can help ensure they're fair, secure and trustworthy.
"Donation of these projects to LFAI will further the mission of creating responsible AI-powered technologies and enable the larger community to come forward and co-create these tools under the governance of Linux Foundation," IBM said in a blog post, penned by Todd Moore, Sriram Raghavan and Aleksandra Mojsilovic.
Specifically, IBM is contributing the AI Fairness 360 Toolkit, the Adversarial Robustness 360 Toolbox and the AI Explainability 360 Toolkit. The AI Fairness 360 Toolkit allows developers and data scientists to detect and mitigate unwanted bias in machine learning models and datasets. Along with other resources, it provides around 70 metrics to test for biases and 11 algorithms to mitigate bias in datasets and models. The Adversarial Robustness 360 Toolbox is an open-source library that helps researchers and developers defend deep neural networks from adversarial attacks. Meanwhile, the AI Explainability 360 Toolkit provides a set of algorithms, code, guides, tutorials, and demos to support the interpretability and explainability of machine learning models.
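Many of the fairness metrics in toolkits like AI Fairness 360 boil down to comparing outcome rates across groups. As a minimal, self-contained sketch of one such metric, here is the widely used disparate impact ratio computed in plain Python. The function name and the hiring data are hypothetical illustrations, and this is not the AIF360 API itself:

```python
def disparate_impact(outcomes, groups, privileged, favorable=1):
    """Ratio of favorable-outcome rates: unprivileged / privileged.

    A value near 1.0 suggests parity; the common "80% rule"
    flags values below 0.8 as potentially biased.
    """
    priv = [o for o, g in zip(outcomes, groups) if g == privileged]
    unpriv = [o for o, g in zip(outcomes, groups) if g != privileged]
    rate = lambda xs: sum(1 for o in xs if o == favorable) / len(xs)
    return rate(unpriv) / rate(priv)

# Hypothetical hiring data: outcome 1 = hired, group "A" is privileged.
outcomes = [1, 1, 0, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
# Group A is hired at a 0.75 rate, group B at 0.25,
# so the ratio is 0.33 -- well below the 0.8 threshold.
print(round(disparate_impact(outcomes, groups, privileged="A"), 2))  # 0.33
```

The real toolkit goes much further, pairing metrics like this with mitigation algorithms that adjust datasets or models to reduce the measured bias.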
The LFAI's Technical Advisory Committee voted earlier this month to host and incubate the projects, and IBM is currently working with the foundation to formally move them under its governance.
As a Linux Foundation project, the LFAI provides a vendor-neutral space for the promotion of Artificial Intelligence (AI), Machine Learning (ML) and Deep Learning (DL) open-source projects. It's backed by major organizations like AT&T, Baidu, Ericsson, Nokia, Tencent and Huawei.
IBM joined the LFAI last year and helped establish its Trusted AI Committee, which is working towards defining and implementing principles of trust in AI deployments.
In the announcement of its LFAI contribution, IBM noted that "technology is only one part of the equation" when it comes to building trustworthy and equitable AI. "On this mission, contributions from social science, policy and legislation, and diverse perspectives play equally important role as the technology itself," the blog post said.
Governments around the world are addressing the issue. For instance, the European Commission in February released a white paper on the regulation of AI. Meanwhile, Canada, France, Australia, Germany, India, Italy, Japan, Mexico, New Zealand, Korea, Singapore, Slovenia, the UK, the US, and the European Union are forming the Global Partnership on Artificial Intelligence (GPAI).
As part of its broader efforts to encourage responsible technology deployments, IBM also on Monday announced the latest recipient of its Open Source Community Grant, which aims to create new tech opportunities for underrepresented communities. The latest grant is going to PionerasDev, a Colombia-based nonprofit that helps women and girls learn how to code.
The group has grown from five members to more than 1,200 in three years, with more than 80 percent of its membership coming from lower-income areas. The grant includes a cash award of $25,000 and a technology award valued at $25,000 to directly support education and career development activities.