
Artificial intelligence projects grew tenfold over the past year, survey says

At issue are talent shortages, integration challenges, and governance requirements, a recent survey of 700 IT managers and executives finds.
Written by Joe McKendrick, Contributing Writer

We know artificial intelligence has been riding the crest of the hype wave, but we don't hear enough about the problems and headaches it brings with it. A new survey estimates that the number of complete or nearly complete AI projects has grown tenfold over the past 12 months. Great news, but it means IT teams are scrambling to keep up. Companies need more people with the right skills to put it all together, and executives and managers must ensure that the AI securely delivers what the business needs.

At issue are talent shortages, integration challenges, and governance requirements, a recent survey of 700 IT managers and executives published by Juniper Networks finds. 

The most eyebrow-raising sound bite: completed or nearly completed AI implementations grew from 6% one year ago to 63% today. In addition, there is growing enthusiasm for full AI adoption, versus the narrower use cases that dominated last year's survey. The percentage of IT leaders who say they're looking to deploy fully enabled AI use cases with widespread adoption jumped from 11% to 27%.

The longstanding build-or-buy conundrum has surfaced with AI projects. Companies are split between off-the-shelf AI solutions and ones built in-house. Nearly 4 in 10 executives (39%) indicate that their organizations mix off-the-shelf AI solutions with ones they build fully in-house, while 3 in 10 say they rely exclusively on either off-the-shelf or fully in-house built solutions.

Building AI solutions in-house brings its own set of challenges. More than half (53%) of IT leaders surveyed say the reliability of these in-house AI applications is a top challenge, followed by integration with existing systems (46%), finding new AI-capable talent (44%), and development time (44%).

Finding or nurturing the right talent to develop, operate, and leverage AI is a challenge for many of today's IT leaders. The survey identifies three areas tied as the top investment priorities (21% each): hiring the right people to operate and develop AI capabilities, further training the AI models, and expanding the capabilities of current AI tools into new business units.

At least 42% say their existing data and analytics groups are incorporating and taking the lead with AI technology. A similar number have established an AI center of excellence.

Top steps to enable workforce adoption of AI include providing tools and opportunities to apply newly acquired AI skills (43%), updating performance metrics to include AI (40%), developing a workforce plan that identifies new skills and roles (39%), and changing learning and development frameworks (39%).

Another 39% of IT leaders address this challenge by implementing and using AI-enabled low-code and no-code development tools. About a third (34%) are adopting AI modeling automation tools.

In 2021, companies faced the AI-related challenges of developing models and standardizing data. In 2022, those challenges remain, but ones related to creating governance policies (35%) and maintaining AI systems (34%) have risen in importance.

With great AI capability comes great responsibility. Only 9% of IT leaders consider their AI governance and policies, such as establishing a company-wide AI leader or responsible AI standards and processes, to be fully mature. More leaders see AI governance as a priority: 95% agree that having proper AI governance is key to staying ahead of future legislation, up from 87% in 2021. Almost half of respondents (48%) think more action needs to be taken on effective AI governance.

Close to half (44%) report they have established AI ethics and responsible AI standards and processes. The same percentage have appointed company-wide AI leaders who oversee AI strategy and governance.

IT leaders indicate that the top risks of inadequate AI oversight are accelerated hacking, or what the survey's authors call "AI terrorism" (55%), and privacy, cited by an equal 55%. Regulatory compliance (49%) and loss of human agency (48%) round out the top risks.
