CIO Jury: 92 percent of tech leaders have no policy for ethically using AI

More organizations are exploring artificial intelligence and machine learning tools, but few have any policies around ethical issues like bias and governance.
Written by Alison DeNisco Rayome, Managing Editor

As more organizations explore artificial intelligence (AI) and machine learning tools, some are beginning to grapple with ethical questions that may arise around bias, interpretability, robustness, security, and governance. However, very few have policies in place to ensure that AI is used ethically, according to a TechRepublic CIO Jury poll.

When asked, "Does your company have a policy for ethically using AI or machine learning?" 11 out of 12 tech leaders said no, while just one said yes.

However, most of those who answered no do not expect that to remain the case for long.

SEE: IT leader's guide to the future of artificial intelligence (Tech Pro Research)

"'Not yet' would be more accurate long-term," said John C. Gracyalny, vice president of digital member services at Coast Central Credit Union.

Dan Gallivan, director of information technology at Payette, agreed. "Something tells me we should be adding it to our current IT policies!" he said.

For some, including Michael Hanken, vice president of IT, Multiquip Inc., it's simply "too early in the game," but policies will likely come in the future.

While Power Home Remodeling does not have an ethical AI policy, it does have policies for the ethical use of technology in general, which extends to AI, said CIO Timothy Wenhold.

"That said, after embarking on multiple machine learning [projects], I believe that there will be a need for companies to monitor the application of AI within their organizations and processes," Wenhold said. "Through this practice we will gain the necessary knowledge to craft updated policies that will allow our organizations to govern the use of AI and remain good corporate citizens."

The topic will continue to be relevant in the coming years, said Greg Carter, CTO of GlobalTranz.

"In logistics, AI and machine learning are becoming increasingly important to how logistics services providers manage the flow of goods and materials through the supply chain," Carter said. "For example, one area we are exploring now is using AI to model the behavior of specific elements of the supply chain -- including drivers. We are essentially creating a digital persona of drivers in an effort to understand their preferred routes and load types. This will allow us to book the ideal driver for multiple loads in advance. Knowing this much about a driver and targeting using AI requires a governance, policy, and security framework to make sure this information is not misused."

AI and its related technologies are already impacting how users interact with the internet, said Kris Seeburn, an independent IT consultant, evangelist, and researcher.

"AI for us brings greatly the potential to vastly change the way that humans/stakeholders and staff interact, not only with the digital world, but also with each other, through their work and through other socioeconomic institutions -- for better or for worse," Seeburn said. "We want to ensure that the impact of artificial intelligence will be in a positive way, and that we do recognize the essentials that all stakeholders participate in the use and adoption surrounding AI and machine learning principles."

Organizations should implement policies for ethically using AI and machine learning in the near future, because the long-term effects of not doing so could be damaging for the business, said Christopher Hazard, CTO of Diveplane, who was not a member of the CIO Jury.

"Implementing AI without interpretability can lead to loss of tacit knowledge in the organization, leaving the business unable to adapt to changing circumstances due to the inability to understand those circumstances," Hazard said. "Lack of AI robustness can exacerbate the inability to adapt, as well as potentially lead to the business being vulnerable to exploitations by customers, employees, or competitors. The company should also document the trade-offs they are willing and prepared to make with regard to removing bias from their AI deployments to ensure that bias is properly prioritized and addressed throughout the organization."

This month's CIO Jury included:

Lance Taylor-Warren, CIO, Community Health Alliance
Michael Hanken, vice president of IT, Multiquip Inc.
John C. Gracyalny, vice president of digital member services, Coast Central Credit Union
Dan Gallivan, director of information technology, Payette
Timothy Wenhold, CIO, Power Home Remodeling
Kris Seeburn, independent IT consultant, evangelist, and researcher
Joel Robertson, CIO, King University
Jeff Focke, director of IT, Shealy Electrical Wholesalers
Jeff Kopp, technology director, Christ the King Catholic School
Eric Carrasquilla, senior vice president of product, Apttus
David Wilson, director of IT services, VectorCSP
Greg Carter, CTO, GlobalTranz

Want to be part of TechRepublic's CIO Jury and have your say on the top issues for IT decision makers? If you are a CIO, CTO, IT director, or equivalent at a large or small company, working in the private sector or in government, and you want to join TechRepublic's CIO Jury pool, email alison dot rayome at cbsinteractive dot com, and send your name, title, company, location, and email address.

Also see

Machine learning: A cheat sheet (TechRepublic)

Artificial intelligence: A business leader's guide (TechRepublic download)

IT leader's guide to deep learning (Tech Pro Research)

What is AI? Everything you need to know about Artificial Intelligence (ZDNet)

6 ways to delete yourself from the internet (CNET)

Artificial Intelligence: More must-read coverage (TechRepublic on Flipboard)
