While artificial intelligence-based initiatives are delivering positive results, executives are still taking a cautious approach to the technology. One area of concern is the ethics and potential business value of AI -- seven in 10 enterprises even provide special ethics training for IT professionals working with the technology.
Ethics, confidence and business value in AI were front and center in a recent survey of 305 enterprises, conducted by Forbes Insights in partnership with SAS, Accenture and Intel. Among only those that reported having deployed AI, 51 percent say the impact of AI-based technologies on their operations has been "successful" or "highly successful." (I helped design and implement the survey as part of my work as an independent research analyst.)
The survey report defines AI as "the science of training systems to emulate human tasks through learning and automation." AI adopters indicated relatively strong ethical processes in place today, with 63 percent affirming that they "have an ethics committee that reviews the use of AI," and 70 percent indicating they "conduct ethics training for their technologists." Among companies that report that they have achieved real success from their deployment of AI, 92 percent say they conduct ethics training for their technologists, compared with 48 percent of those that have not achieved real success yet from deploying AI.
Still, as Thomas Davenport put it in the current issue of MIT Sloan Management Review, there is a need for greater transparency with AI. He advises AI implementers to tread cautiously: "Whether the use of cognitive technologies is internal or external, it's best to under-promise and over-deliver," he says.
The key is to roll AI out gently and openly. "Introduce new capabilities as beta offerings and communicate the goal of learning about the use of the technology," Davenport continues. "Don't eliminate alternative -- usually human -- approaches to solving employees' or customers' problems. Over time, as the technology matures and the AI solution improves its capabilities, both the machine and the communications describing its functions can become more confident."
Davenport even raises the specter of certification of AI algorithms by trusted third parties, just as "the FDA certifies drug efficacy, auditors certify financial processes, and the Underwriters Laboratory certifies the safety of products." It's time to independently certify "the reliability, replicability, and accuracy of AI algorithms," he urges.
Ultimately, the success of AI is tied to trust -- by internal and external stakeholders. Overall, the results of the Forbes Insights-SAS-Accenture-Intel survey suggest "that at minimum, business and government leaders do. But many are rightfully concerned about their employees' lack of trust in AI, manifested in their concerns about its impact on their jobs. And there are even signs that some of these leaders harbor pockets of doubt about the extent to which they can trust AI outputs. Trust will play perhaps a larger role in the evolution of AI than it has for any technology in recent memory."
Achieving the necessary trust "will require education, transparency, clear ethical guidelines -- and patience."