Artificial intelligence: Everyone wants it, but not everyone is ready

Business is bullish on AI, but it takes a well-developed understanding to deliver visible business benefits.

Artificial intelligence technologies have reached impressive levels of adoption and are seen as a competitive differentiator. But there comes a point when a technology becomes so ubiquitous that it no longer differentiates -- think of the cloud. Going forward, the organizations that succeed with AI will be those that apply human innovation and business sense to their AI foundations.


Such is the challenge identified in a study released by RELX, which finds that the use of AI technologies, at least in the United States, has reached 81% of enterprises, up 33 percentage points from the 48% recorded in a previous RELX survey in 2018. Executives are also bullish on AI delivering the goods -- 93% report that AI makes their business more competitive. This ubiquity may be why 95% also report that finding the skills to build out their AI systems is a challenge. And these systems could be flawed: 75% worry that AI may introduce the risk of bias into the workplace, and 65% admit their systems are already biased.

So there's still much work to be done. It comes down to the people who can make AI happen, and who can make it as fair and accurate as possible.

"While many AI and machine learning deployments fail, in most cases, it's less of a problem with the actual technology and more about the environment around it," says Harish Doddi, CEO of Datatron. Moving to AI "requires the right skills, resources, and systems." 

It takes a well-developed understanding of AI and ML to deliver visible benefits to the business. While AI and ML have been around for many years, "we are still barely scratching the surface of uncovering their true capabilities," says Usman Shuja, general manager of connected buildings for Honeywell. "That said, there are many valuable lessons to be gleaned from others' missteps. While it's arguably true that AI can add significant value to practically any department across any business, one of the biggest mistakes a business can make is to implement AI for the sake of implementing AI, without a clear understanding of the business value they hope to achieve."

In addition, AI requires adroit change management, Shuja continues. "You can install the most cutting-edge AI solutions available, but if your employees can't or won't change their behaviors to adapt to a new way of doing things, you will see no value."

Another challenge is bias, as expressed by many executives in the RELX survey. "Algorithms can easily become biased based on the people who write them and the data they are providing, and bias can happen more with ML as it can be built in the base code," says Shuja. "While large amounts of data can ensure accuracy, it's virtually impossible to have enough data to mimic real-world use cases."

For example, he illustrates, "if I was looking into recruiting collegiate athletes for my professional lacrosse team, and I discovered that most of the players I am hearing about are Texas Longhorns, that might lead me to conclude that the best lacrosse players attend the University of Texas. However, this could just be because the algorithm has received too much data from one university, thus creating a bias."

The way the data is set up and who sets it up "can inadvertently sneak bias into the algorithms," Shuja says. "Companies that are not yet thinking through these implications need to put this to the forefront of their AI and ML technology efforts to build integrity into their solutions."
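Shuja's lacrosse scenario is a classic case of sampling bias, and it can be sketched in a few lines. In this hypothetical simulation (the school names and scores are illustrative, not from the survey), every school's players are drawn from the same skill distribution, but 80% of the scouting reports cover one school -- so a naive "who dominates the top 10?" ranking flatters that school, while a per-school average reveals there is no real difference:

```python
import random

random.seed(0)

# Hypothetical scouting data: every school's players share the same true
# skill distribution, but 80% of the reports we receive cover one school.
schools = ["Texas"] * 80 + ["Other"] * 20
reports = [(school, random.gauss(70, 10)) for school in schools]

# Naive ranking: count how many "top 10" players each school supplies.
top10 = sorted(reports, key=lambda r: r[1], reverse=True)[:10]
counts = {}
for school, _ in top10:
    counts[school] = counts.get(school, 0) + 1
print(counts)  # the over-sampled school dominates the top 10

# Comparing the *average* score per school removes the sampling artifact:
# both averages land near 70, because the underlying skill is identical.
for school in ("Texas", "Other"):
    scores = [skill for s, skill in reports if s == school]
    print(school, round(sum(scores) / len(scores), 1))
```

The fix here is not more data in total, but data whose composition reflects the real-world population -- exactly the point Shuja makes about who sets the data up.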

Another issue is that AI and ML models simply become outdated too soon, as many companies discovered, and continue to discover, as a result of Covid and supply chain disruptions. "Having good documentation that shows the model lifecycle helps, but it's still insufficient when models become unreliable," says Doddi. "AI model governance helps bring accountability and traceability to machine learning models by having practitioners ask questions such as 'What were the previous versions like?' and 'What input variables are coming into the model?'"
  
Governance is key. During development, Doddi explains, "ML models are bound by certain assumptions, rules, and expectations. Once deployed into production, they can produce results that differ significantly from those seen in development environments. This is where governance is critical once a model is operationalized. There needs to be a way to keep track of various models and versions."
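The two governance questions Doddi raises -- "What were the previous versions like?" and "What input variables are coming into the model?" -- amount to keeping an auditable record per model. Here is a minimal sketch of such a registry (a hypothetical illustration, not any specific governance product; the model name and variables are invented):

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ModelVersion:
    """One registered version of a model: what feeds it, when, and why."""
    version: int
    input_variables: list
    trained_at: str
    notes: str = ""

class ModelRegistry:
    """Tracks every version of every model so production deployments
    stay accountable and traceable."""
    def __init__(self):
        self._models = {}

    def register(self, name, input_variables, notes=""):
        versions = self._models.setdefault(name, [])
        mv = ModelVersion(
            version=len(versions) + 1,
            input_variables=list(input_variables),
            trained_at=datetime.now(timezone.utc).isoformat(),
            notes=notes,
        )
        versions.append(mv)
        return mv

    def history(self, name):
        return self._models.get(name, [])

# Hypothetical usage: a churn model retrained with an extra input variable.
registry = ModelRegistry()
registry.register("churn", ["tenure", "monthly_spend"], notes="baseline")
registry.register("churn", ["tenure", "monthly_spend", "support_tickets"],
                  notes="retrained after pandemic-era drift")
for v in registry.history("churn"):
    print(v.version, v.input_variables, v.notes)
```

In practice a registry like this would also record training data snapshots and evaluation metrics, but even this skeleton answers both of Doddi's questions at a glance.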

In some cases with AI, "less is more," says Shuja. "AI tends to be most successful when it is paired with mature, well-formatted data. This is mostly within the realm of IT/enterprise data, such as CRM, ERP, and marketing. However, when we move into areas where the data is less cohesive, such as with operational technology data, achieving AI success becomes a bit more challenging. There is a tremendous need for scalable AI within an industrial environment -- for example, using AI to reduce energy consumption in a building or industrial plant, an area of great potential. One day soon, entire businesses -- from the factory floor to the board room -- will be connected, constantly learning and improving from the data they process. This will be the next major milestone for AI in the enterprise."