The bet is that low-code development tools will let more technology and business professionals build algorithms, and those people will need to understand how to do so responsibly.
AI ethics, bias, and transparency have become hot topics around the technology sector as a bevy of leading vendors tackle the issue. The biggest concern is that individual algorithms may not be biased on their own but can acquire bias when combined with other models.
Kathy Baxter, architect of ethical AI practice at Salesforce, said the Trailhead modules are designed for employees, customers, and partners who will be working with Einstein and using various models.
Aside from educating the Salesforce ecosystem, the Trailhead modules may also inform future features in Einstein, which already has some anti-bias tools built in, with more in pilot. "We're trying to help customers understand what's in a model, explain what's being done and communicate how it is working," said Baxter.
She added that there will be multiple additions to Trailhead focused on AI ethics and preventing bias.
As access to AI widens and the depth of its impact begins to reveal itself, we are faced with new questions about how to ensure the future of AI is responsible, accountable, and fair — even when it's built by people without technical training or AI expertise. It's critical that anyone building AI consider the domino effect that AI-driven outcomes can have on people and society, intended or not. The first step in building responsible AI solutions is to understand the biases that can lurk within its models and training data, a big challenge in its own right.
The Trailhead module, dubbed Responsible Creation of Artificial Intelligence, will cover:
Ethical and human use of technology;
Recognizing bias in AI;
Removing exclusion from data and algorithms.
What's interesting about Salesforce's module is that the AI crash course is designed to reach far beyond data scientists to creative professionals and line-of-business managers.
Salesforce's argument is that you can't democratize AI without thinking through the responsibilities associated with it.
The module covers the challenges of bias and fair decision-making, and what happens when large datasets scale. It also covers various flavors of bias, such as association, automation, confirmation, societal, and interaction bias.
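One simple, quantifiable form of the bias discussed above can be made concrete with the "disparate impact" ratio (the four-fifths rule used in US employment-discrimination analysis). The sketch below is illustrative only: the `disparate_impact` function and the loan-approval data are invented for this example and are not part of Salesforce's module or Einstein's tooling.

```python
# Minimal sketch of a disparate-impact check, a common first-pass test
# for bias in model outcomes. All names and data here are hypothetical.

def disparate_impact(outcomes, protected_group, reference_group):
    """Ratio of favorable-outcome rates: protected group vs. reference group.

    outcomes: dict mapping person id -> True if the decision was favorable
    A ratio below 0.8 (the four-fifths rule) suggests potential adverse impact.
    """
    protected_rate = sum(outcomes[p] for p in protected_group) / len(protected_group)
    reference_rate = sum(outcomes[p] for p in reference_group) / len(reference_group)
    return protected_rate / reference_rate

# Invented loan-approval results for two demographic groups.
outcomes = {1: True, 2: False, 3: False, 4: False,   # protected group: 1/4 approved
            5: True, 6: True, 7: False, 8: True}     # reference group: 3/4 approved

ratio = disparate_impact(outcomes, [1, 2, 3, 4], [5, 6, 7, 8])
print(f"disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("potential adverse impact -- investigate the model and its training data")
```

A check like this catches only one narrow, measurable symptom; the subtler flavors the module names (association, confirmation, interaction bias) require examining how the data was collected and how people use the system, not just the outcome rates.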