
CIO Jury: 83% of tech leaders have no policy for ethically using AI

More organizations are using artificial intelligence, but only a few have policies in place to address ethical issues such as bias and governance.
Written by Teena Maddox, Contributor

Ethical questions around artificial intelligence (AI) have become part of the conversation in business as more organizations add machine learning (ML) to their toolkit. However, only a few have policies in place to make sure that AI is used ethically, according to a TechRepublic CIO Jury poll.  

When asked, "Does your company have a policy for ethically using AI or machine learning?" 10 out of 12 tech leaders said no, while just two said yes. That means only 17% of tech leaders in TechRepublic's informal poll have an ethics policy in place for AI and ML. 

Exactly 12 months ago, TechRepublic asked its CIO Jury the same question, and at that point, only one tech leader had a policy in place for ethically using AI or machine learning. Adoption is growing, but slowly.

Those weighing in on the "yes" side include Michael Ringman, CIO of TELUS International. Ringman said, "Technologies like AI and machine learning play a key role in TELUS International's ability to deliver seamless, consistent CX wherever and whenever our clients' customers want it. But core to our people-first culture is a belief that AI and machine learning can be leveraged to enhance, not replace, the capabilities of our frontline team members. Yes, it's ethical, but it also makes good business sense. Customers increasingly crave effortless, anticipatory, personalized experiences, and AI can enhance that when used as part of a BI strategy to provide a 360-degree view of the customer."

SEE: Managing AI and ML in the enterprise 2020: Tech leaders increase project development and implementation (TechRepublic Premium)  

Clément Stenac, CTO of Dataiku, said having an ethics policy is essential. "In fact, our platform (Dataiku) democratizes access to data and enables enterprises to build their own path to AI in a human-centric way. We promote ethical AI for customers by making data easy to understand for all teams across organizations, regardless of technical expertise. The ethics of our company are defined based on this idea of human-centric AI. People who develop and deploy models must be aware of their potential shortcomings and bear the responsibility for their faults. At Dataiku, we offer extensive training on this subject to our employees. It's critical for the creators of AI to recognize the importance of successfully empowering teams both with training and tools to build ethical AI algorithms."

SEE: Artificial Intelligence Ethics Policy (TechRepublic Premium)

The other ten jury members all voted "no," but some were on the fence, including Steven Page, vice president of IT for marketing and digital banking at Safe America. Page said his company doesn't have an ethics policy at this time, but "we are watching the trend in the use of AI."

At Multiquip, Michael Hanken, vice president of IT, said there is no policy in place, but he's considering one for a recently launched pilot project.

At Ntirety, CEO Emil Sayegh said, "We do not have an ethical use policy on AI or machine learning yet, but we definitely should consider a company policy to protect our customers' IT infrastructures, as we are an early user of both machine learning and AI. A 'do no harm' approach and privacy boundaries around our customers' behavioral patterns should be in place. This is complicated by the fact that our customers count on us to parse legitimate from nefarious traffic. AI and machine learning are so powerful that they could uncover trends in legitimate traffic that violate confidential usage patterns from our customers. Furthermore, access to the usage and behavioral data trends uncovered by AI and machine learning could legally or illegally fall into the hands of, or be shared with, state or governmental institutions, further compromising privacy."

Other jury participants don't have a policy and have no plans to put one in place, including Eric Shashoua, founder and CEO of Kiwi for GSuite. Shashoua said: "While we don't use AI or machine learning directly at present, ethics in our context would relate to our users and their data. Since we're a company with a product built for communication, we have a strict policy of not collecting user data, and of being in strict agreement with GDPR and even the spirit of that law."

Here are this month's CIO Jury participants: 

John Gracyalny, vice president of digital member services, Coast Central Credit Union
Craig Lurey, CTO and co-founder, Keeper Security
Michael Hanken, vice president of IT, Multiquip
Dan Gallivan, director of information technology, Payette
Emil Sayegh, CEO, Ntirety
Kris Seeburn, independent IT consultant, evangelist, and researcher
Michael Ringman, CIO, TELUS International
Clément Stenac, CTO, Dataiku
Michael R. Belote, CTO, Mercer University
Steven Page, vice president of IT for marketing and digital banking, Safe America
Eric Shashoua, founder and CEO, Kiwi for GSuite
Joel Robertson, chief information officer, King University

Want to be part of TechRepublic's CIO Jury and have your say on the top issues for IT decision makers? If you are a CIO, CTO, IT director, or equivalent at a large or small company, working in the private sector or in government, and you want to join TechRepublic's CIO Jury pool, email teena dot maddox at cbsinteractive dot com, and send your name, title, company, location, and email address.