
80% of enterprises will have incorporated generative AI by 2026, according to a Gartner report

Think AI has a lot of hype now? It's going to accelerate in the next two years -- especially in the enterprise.
Written by Sabrina Ortiz, Editor

Since the release of ChatGPT almost a year ago, generative AI has been on the rise, with companies developing or adopting new AI models every day. A new report from Gartner suggests that growth will only keep trending upward in the years ahead. 

The research firm predicts that 80% of enterprises will have used generative AI APIs (application programming interfaces) or models, or developed their own by 2026. 

Also: Ransomware victims continue to pay up, while also bracing for AI-enhanced attacks

That means the share of businesses adopting or creating generative AI models will have grown sixteenfold in just three years: according to Gartner data, fewer than 5% of enterprises had done so as of 2023. 

"Generative AI has become a top priority for the C-suite and has sparked tremendous innovation in new tools beyond foundation models," said Arun Chandrasekaran, distinguished VP analyst at Gartner. 

The research firm delineated some of the innovations that are projected to have a massive impact on organizations over the next ten years, including generative AI-enabled applications, foundation models, and AI trust, risk, and security management (AI TRiSM).

Generative AI-enabled applications simply refer to applications that leverage generative AI to complete a specific task. ChatGPT is an example of a generative AI-enabled application, as it uses AI to interpret your text prompts and generate a response. 

Organizations can adopt these applications to streamline internal work for employees or to build customer-facing experiences that improve their services.

"The most common pattern for GenAI-embedded capabilities today is text-to-X, which democratizes access for workers, to what used to be specialized tasks, via prompt engineering using natural language," said Chandrasekaran in the report. 

Also: How Adobe is leveraging generative AI in customer experience upgrades

A prime example is the growing number of consulting firms that are either adopting or developing their own AI models to make it easier for clients to find the resources they need from those firms' vast databases.

A challenge with these applications is that they are prone to hallucinations and inaccurate responses that make their reliability questionable. 

Foundation models refer to the machine learning models that underlie generative AI applications, as GPT does for ChatGPT. 

These foundation models are trained on large amounts of data and are used to power different applications that can complete a wide variety of tasks. 
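
As a rough illustration of that one-model, many-tasks relationship, the sketch below reuses a single model for two different natural language tasks simply by changing the instruction; the OpenAI SDK, the model name, and the prompts are stand-in assumptions rather than anything specified in the report.

```python
# A sketch of one foundation model backing several different tasks purely
# by changing the prompt. The SDK, model name, and prompts are illustrative
# assumptions.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"  # assumed; the reuse pattern is the point, not the model

def run(task_instruction: str, text: str) -> str:
    """Frame a task as an instruction plus input text and send it to the model."""
    response = client.chat.completions.create(
        model=MODEL,
        messages=[
            {"role": "system", "content": task_instruction},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content

doc = "Customer reports the app crashes on login after the latest update."

# Same underlying model, two very different natural language tasks:
print(run("Summarize the text in one sentence.", doc))
print(run("Classify the text as 'bug report', 'feature request', or 'praise'.", doc))
```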

Gartner placed foundation models at the Peak of Inflated Expectations on the Hype Cycle, predicting that by 2027, they will underpin 60% of natural language processing (NLP) use cases. 

[Chart: Hype Cycle for Generative AI, 2023. Source: Gartner]

"Technology leaders should start with models with high accuracy in performance leaderboards, ones that have superior ecosystem support and have adequate enterprise guardrails around security and privacy," said Chandrasekaran.

Also: Firefox is getting an AI-powered fake review detector for your shopping needs

Lastly, AI TRiSM refers to the set of solutions that can address the issues that surround generative AI models and ensure their successful deployment. 

Among the risks that plague generative AI models are unreliable outputs, misinformation, bias, and threats to privacy and fairness. 

If not properly addressed, these issues can be particularly harmful for organizations, since they risk leaking sensitive data and spreading misinformation throughout the organization. 

"Organizations that do not consistently manage AI risks are exponentially inclined to experience adverse outcomes, such as project failures and breaches," said Chandrasekaran.

AI TRiSM is therefore crucial for organizations to minimize those risks and protect the members of their organization.
