That's the conclusion from a recent survey of 1,240 executives published by MIT Sloan Management Review and Boston Consulting Group (MIT SMR and BCG), which looked at the progress of responsible AI initiatives and the adoption of both internally built and externally sourced AI tools - with the unsanctioned use of the latter being what the researchers call "shadow AI".
The promise of AI comes with consequences, suggest the study's authors, Elizabeth Renieris (Oxford's Institute for Ethics in AI), David Kiron (MIT SMR), and Steven Mills (BCG): "For instance, generative AI has proven unwieldy, posing unpredictable risks to organizations unprepared for its wide range of use cases."
Many companies "were caught off guard by the spread of shadow AI use across the enterprise," Renieris and her co-authors observe. What's more, the rapid pace of AI advancements "is making it harder to use AI responsibly and is putting pressure on responsible AI programs to keep up."
They warn that the risks posed by ever-expanding shadow AI are increasing, too. For example, companies' growing dependence on a burgeoning supply of third-party AI tools, along with the rapid adoption of generative AI - algorithms (such as ChatGPT, DALL-E 2, and Midjourney) that use training data to generate realistic or seemingly factual text, images, or audio - exposes them to new commercial, legal, and reputational risks that are difficult to track.
The researchers refer to the importance of responsible AI, which they define as "a framework with principles, policies, tools, and processes to ensure that AI systems are developed and operated in the service of good for individuals and society while still achieving transformative business impact."
Another difficulty stems from the fact that a number of companies "appear to be scaling back internal resources devoted to responsible AI as part of a broader trend in industry layoffs," the researchers caution. "These reductions in responsible AI investments are happening, arguably, when they are most needed."
For example, widespread employee use of the ChatGPT chatbot has caught many organizations by surprise, and could have security implications. The researchers say responsible AI frameworks were not written to "deal with the sudden, unimaginable number of risks that generative AI tools are introducing."
The research suggests that 78% of organizations access, buy, license, or otherwise use third-party AI tools, including commercial APIs, pretrained models, and data. More than half (53%) rely exclusively on third-party AI tools and have no internally designed or developed AI technologies of their own.
Responsible AI programs "should cover both internally built and third-party AI tools," Renieris and her co-authors urge. "The same ethical principles must apply, no matter where the AI system comes from. Ultimately, if something were to go wrong, it wouldn't matter to the person being negatively affected if the tool was built or bought."
While the co-authors caution that "there is no silver bullet for mitigating third-party AI risks, or any type of AI risk for that matter," they urge a multipronged approach to ensuring responsible AI in today's wide-open environment.