
Companies aren't spending big on AI. Here's why that cautious approach makes sense

Everyone everywhere is talking about AI, but new research suggests it's going to be a while before large-scale AI projects move into production.
Written by Mark Samuels, Contributor

Generative AI has captured the general public's attention, but that sense of excitement doesn't mean executives believe it's ready to be deployed in the business.

Just one in ten technology leaders globally report having large-scale implementations of AI, according to Nash Squared's annual Digital Leadership Report, which is the world's largest and longest-running annual survey of technology chiefs.

What's more, the hype surrounding generative AI has done little to encourage further investment in artificial intelligence -- Nash Squared reports that the one-in-ten proportion spending big on AI hasn't changed in five years.

Also: 4 ways to detect generative AI hype from reality

From the outside looking in, it seems the bark of AI is a lot louder than the bite. While everyone's talking about generative AI and machine learning, very few companies are investing in large-scale AI implementations.

However, Bev White, CEO of digital transformation and recruitment specialist Nash Squared, says in an interview with ZDNET that it's important to place these headline figures in context.

Yes, few businesses are spending big on AI right now, but lots of organizations are starting to investigate emerging technology.

"What we are seeing is actually quite an uptake," says White, who notes that interest in AI is at the research rather than the production stage.

Around half of companies (49%) are piloting or conducting a small-scale implementation of AI, and a third are exploring generative AI.

Also: AI at the edge: Fast times ahead for 5G and the Internet of Things

"And that's exactly what we saw when cloud started to really take off," says White, comparing the rise of AI to the initial move to the cloud over a decade ago.

"It was, 'let's dip our toe in the water, let's understand what all the implications are for policies, for data, for privacy, and for training,'" she says.

"Businesses were creating their own use cases by doing small but meaningful pilots. That's what happened last time, and I'm not surprised that's what's happening this time." 

In fact, White says the hesitancy to spend big on AI makes a lot of sense for two key reasons.

First, cash is tight in many organizations due to heavy investment in IT during and immediately after the COVID-19 pandemic. 

"Digital leaders are trying to balance the books -- they're thinking 'what's going to give me the greatest return for investment right now,'" she says. 

"Small, careful, well-planned pilots -- while you're still doing some of the punchier digital transformation projects -- will make a big difference to your organization." 

Also: As developers learn the ins and outs of generative AI, non-developers will follow

Second, a lot of emerging technology -- particularly generative AI -- remains at a nascent stage of development. Each new iteration of a well-known large language model, such as the GPT models that power OpenAI's ChatGPT, brings new developments and opportunities, but also risks, says White.

"You're accountable as a CIO or CTO of a big enterprise. You want to be sure about what you're doing with AI," she says. "There's such a big risk here that you need to think about your exposure -- what do you need to protect the people that work for your business? What policies do you want to have?" 

White talks about the importance of AI security and privacy, particularly when it comes to the potential for staff to train models using data that's owned by someone else, which could open the door to litigation.

"There's a big risk that people can cut and paste," she says. "I'm not saying generative AI isn't good. I'm really a fan. But I am saying that you've got to be very consciously aware of the sources of data and the decisions you make off the back of that information."

Also: Organizations are fighting for the ethical adoption of AI. Here's how you can help

Given these concerns about emerging technology, it might seem strange that Nash Squared reports that only 15% of digital leaders feel prepared for the demands of generative AI.

However, White says this lack of preparedness is understandable given the lack of clarity around both how to implement AI safely and securely today, and the potential for sudden changes in direction in the not-so-distant future.

"If you're accountable for the security, safety, and the reputation of using this technology inside your business, you'd better make sure you've thought everything through, and also that you take your board with you and educate them along the way," she says.

"A lot of chief executives know that they've got to have AI somewhere in their mix, because it's going to provide a competitive advantage, but they don't know where yet. It's a discovery phase, really."

White says the focus on exploration and investigation also helps to explain why just 21% of global organizations have an AI policy in place, and more than a third (36%) have no plans to create such a policy. 

Also: The ethics of generative AI: How we can harness this powerful technology

"How many innovative projects do you know that started with people thinking about potential gates and failure points?" she says.

"Mostly you start with, 'Wow, where could I go with this?' And then you figure out what gates you need to close around you to keep your project and data safe and contained." 

However, while professionals want to live a little when it comes to exploring the opportunities of AI, the research -- which surveyed more than 2,000 digital leaders globally -- suggests CIOs aren't oblivious to the need for strong governance in this fast-moving area.

In most cases, digital leaders are looking for regulations to help their organizations investigate AI safely and securely.

Yet they're also unconvinced that rules for AI from industry or government bodies will be effective.

While 88% of digital leaders believe heavier AI regulation is essential, as many as 61% say tighter regulation won't solve all the issues and risks that come with emerging technology. 

Also: Worried about AI gobbling up your job? Start doing these 3 things now

"You'll always need a straw man to push back at. And it's good to have guidance from industry bodies and from governments that you can push your own thinking up against," says White. "But you won't necessarily like it. If it's carried through and put into law, then suddenly you've got to adhere to it and find a way of keeping within those guidelines. So, regulation can be a blessing and a curse." 

Even if regulations are slow to emerge in the fast-moving area of AI, White says that's no excuse for complacency for the companies who are looking to investigate the technology.

Digital leaders, particularly security chiefs, should be thinking right now about their own guardrails for the use of AI within the enterprise.

And that's something that's happening within her own organization.

Also: Your AI experiments will fail if you don't focus on this special ingredient

"Our CISO has been thinking about generative AI and how it can be a real gift to cyber criminals. It can open doors innocently to important, big chunks of data. It could mean access to your secret sauce. You have to weigh up the risks alongside the benefits," she says.

With that balance in mind, White issues a word of warning to professionals -- get ready for some high-profile AI incidents. 

Just as a cybersecurity incident that affects a few people can help to show the risks to many others, AI incidents -- such as data leaks, hallucinations, and litigation -- will cause senior professionals to pause and reflect as they explore emerging technology.

"As leaders, we need to be concerned, but we also need to be curious. We need to lean in and get involved, so that we can see the opportunities that are out there," she says. 
