Salesforce adds AI bias module to Trailhead

The idea behind Salesforce's AI bias education effort is that far more than data scientists will ultimately be creating algorithms and modules.
Written by Larry Dignan, Contributor

Salesforce is creating modules on its Trailhead developer education platform for responsible artificial intelligence.

The bet is that low-code developer tools will mean that more technology and business professionals will be able to put together algorithms and they'll need to understand how to do it responsibly.

AI ethics, bias, and transparency have become hot topics across the technology sector as a bevy of leading vendors tackle the issue. The biggest concern is that individual algorithms may not be biased on their own, but can acquire bias when combined with other models.

Kathy Baxter, architect of ethical AI practice at Salesforce, said the Trailhead modules are designed to address employees, customers and partners who will be working with Einstein and using various models.

Aside from educating the Salesforce ecosystem, the Trailhead modules may also inform future features to build into Einstein, which already has some AI anti-bias tools built in, with more in pilots. "We're trying to help customers understand what's in a model, explain what's being done, and communicate how it is working," said Baxter.

She added that there will be multiple additions to Trailhead focused on AI ethics and preventing bias.

Bias is a critical concept in AI, and some academics have called for more self-governance and regulation. In addition, industry players such as IBM have pushed for more transparency and a layer of software that monitors algorithms to see how they work together to produce bias. Meanwhile, enterprises are striving for explainable AI. Google says it will address bias in AI and machine learning models with a technology called TCAV.

In a blog post, Salesforce's Baxter noted:

As access to AI widens and the depth of its impact begins to reveal itself, we are faced with new questions about how to ensure the future of AI is responsible, accountable, and fair — even when it's built by people without technical training or AI expertise. It's critical that anyone building AI consider the domino effect that AI-driven outcomes can have on people and society, intended or not. The first step in building responsible AI solutions is to understand the biases that can lurk within its models and training data, a big challenge in its own right.

The Trailhead module, dubbed Responsible Creation of Artificial Intelligence, will cover:

  • Ethical and human use of technology;
  • Understanding AI;
  • Recognizing bias in AI;
  • Removing exclusion from data and algorithms.

What's interesting about Salesforce's module is that the AI crash course is designed to reach far beyond data scientists to creative professionals and line-of-business managers.


Salesforce's argument is that you can't democratize AI without thinking through the responsibilities associated with it.

The module covers the challenges of bias, what it takes to make fair decisions, and what happens when biased datasets scale. Salesforce's module also covers the various flavors of bias, such as association, automation, confirmation, societal, and interaction bias.
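To make the idea of recognizing bias concrete, here is a minimal, hypothetical sketch (not from Salesforce's module) of one common check: comparing a model's selection rates across groups. The data, group labels, and the 0.8 "four-fifths" threshold are illustrative assumptions.

```python
from collections import defaultdict

def selection_rates(records):
    """Return the selection rate (fraction approved) per group.

    records: iterable of (group, approved) pairs, where approved is 0 or 1.
    """
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in records:
        totals[group] += 1
        approved[group] += ok
    return {g: approved[g] / totals[g] for g in totals}

# Hypothetical loan-approval outcomes from a model.
data = [("A", 1)] * 80 + [("A", 0)] * 20 + [("B", 1)] * 50 + [("B", 0)] * 50

rates = selection_rates(data)
# Disparate-impact ratio: lowest rate over highest.
# A ratio below 0.8 is a commonly used red flag for review.
ratio = min(rates.values()) / max(rates.values())
print(rates, f"ratio={ratio:.2f}")
```

Here group A is approved 80% of the time and group B only 50%, giving a ratio of 0.625 and flagging the model for a closer look.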

Here's a simple example of survival bias: the classic World War II case in which analysts proposed armoring returning aircraft where they showed bullet holes, overlooking that planes hit in other spots never made it back, so the data only described the survivors.