
AI is coming to a business near you. But let's sort these problems first

Enterprises are weighing ChatGPT-style AI tools that draw only on their own internal information. Here's what that technology could mean for you and your organization.
Written by Joe McKendrick, Contributing Writer

While most of you will be familiar with ChatGPT, a generative artificial intelligence (AI) tool built on a large language model (LLM) that provides relatively intelligent responses to questions, few of you will be using it at work. ChatGPT is generally not considered safe for serious business work and is mainly used for tinkering at this point.

Now, efforts are underway to package language models for enterprise environments, focused on the data that resides within the enterprise. But at the same time, AI practitioners and experts are urging caution in the development of AI and LLMs.
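To picture what a language model focused on resident enterprise data might look like, here is a minimal, hypothetical Python sketch of the common retrieval-augmented pattern: the system retrieves internal documents first, then asks the model to answer only from that context. The document store, the retrieve function and the call_llm placeholder are all illustrative assumptions, not any specific vendor's product.

# A minimal, hypothetical sketch of grounding a language model on
# resident enterprise data (retrieval-augmented generation).
# The documents, retrieval logic and call_llm placeholder are
# illustrative assumptions, not a specific product or API.

DOCUMENTS = {
    "refund-policy": "Refunds are issued within 14 days of purchase for unused licenses.",
    "vpn-setup": "To configure the corporate VPN, install the approved client and sign in with SSO.",
}

def retrieve(question, k=1):
    """Naive keyword-overlap retrieval; a real system would use embeddings."""
    q_words = set(question.lower().split())
    ranked = sorted(
        DOCUMENTS.values(),
        key=lambda text: len(q_words & set(text.lower().split())),
        reverse=True,
    )
    return ranked[:k]

def call_llm(prompt):
    """Placeholder for whichever internally hosted or vendor model the enterprise uses."""
    return "[model answer constrained to the supplied context]"

def answer(question):
    """Build a prompt that confines the model to retrieved enterprise text."""
    context = "\n".join(retrieve(question))
    prompt = (
        "Answer using ONLY the context below. If the answer is not in the "
        "context, say you don't know.\n\n"
        "Context:\n" + context + "\n\nQuestion: " + question
    )
    return call_llm(prompt)

print(answer("How long do refunds take?"))

The point of the pattern is the constraint, not the plumbing: the model is only shown text the enterprise already controls, which is why these efforts focus on resident data rather than the public web.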

Also: How to use ChatGPT: Everything you need to know

These are the findings from a survey of 300 AI practitioners and experts released by expert.ai. "Enterprise-specific language models are the future," the report's authors state. "Business and technical executives are being asked by their boards and increasingly by shareholders how they plan to leverage this new dawn of AI and the promise it provides to unlock language to solve problems."

The research suggests more than one-third (37%) of enterprises are already considering building enterprise-specific language models.

At the same time, AI practitioners recognize that building and maintaining a language model is a non-trivial task. A majority of enterprises (79%) realize that the effort required to train a usable and accurate enterprise-specific language model is "a major undertaking."

Also: The best AI chatbots: ChatGPT and alternatives to try

Nevertheless, efforts are underway -- teams are already budgeting for LLM adoption and training projects, with 17% having budget allocated this year, another 18% planning to allocate budget, and 40% discussing budgets for next year.

"This makes sense, as most of the public domain data used to train LLMs like ChatGPT is not enterprise-grade or domain-specific data," the expert.ai authors state. "Even if a language model has been trained on different domains, it is not likely representative of what is used in most complex enterprise use cases, whether vertical domains like financial services, insurance, life sciences and healthcare, or highly specific use cases like contract review, medical claims, risk assessment, fraud detection and cyber policy review. Training effort will be required to have quality and consistent performance within highly specific domain use cases."

For enterprise AI advocates in the survey, the top concern with generative AI is security, cited by 73%. Lack of truthfulness is another issue, cited by 70%. More than half (59%) express concern about intellectual property and copyright protection -- particularly with LLMs such as GPT, "trained on wide swaths of information, some of which is copyright protected, and because it comes from publicly available internet data," the report's authors maintain. "It has a fundamental garbage-in, garbage-out issue."

Also: How to use ChatGPT to write code

AI might reduce the need for human resources in specific tasks but, ironically, it is going to require even more people to build and sustain it. More than four in ten (41%) AI advocates express concern about a shortage of skilled professionals with expertise to develop and implement enterprise generative AI. 

More than a third (38%) of survey respondents express concern about the amount of computational resources required to run LLMs. Infrastructure, such as powerful servers or cloud computing services, is needed to support the large-scale deployment of language models, the report's authors state.

Enterprise adoption of language models requires careful planning and consideration for a range of factors, including data privacy and security, infrastructure and resource requirements, integration with existing systems, ethical and legal considerations, and skill and knowledge gaps.

Also: AI is more likely to cause world doom than climate change

As with any emerging technology, successful adoption depends on use cases that demonstrate a significant leap over previous methods. There are some solid use cases for generative AI, as explored in the survey: 

  • Human-computer interaction: Enterprise language models will provide end users and customers "with quick and easy access to information and support, such as product details, troubleshooting guides and frequently asked questions." The most prevalent use cases at this stage are chatbots (54%), question answering (53%), and customer care (23%).
  • Language generation: "Generative AI can write new content, create realistic images, generate marketing copy, compose music and even generate programming code." The two most popular examples at this time are content summarization (51%) and content generation (45%).
  • Information extraction: The top use cases here are knowledge mining (49%), content classification and metadata creation (38%). Content categorization for routing (27%) and entity extraction (20%) are also mentioned.
  • Search: General search (39%), semantic search (31%) and recommendations (29%) are seen as "important tools for helping people find the information they need quickly and accurately, without having to look through lots of irrelevant results."
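As a rough illustration of the search use case above, here is a small, hypothetical Python sketch that ranks documents by similarity to a query. It uses simple bag-of-words cosine similarity as a stand-in for the embedding models a real semantic search system would use; the documents and function names are invented for the example.

import math
from collections import Counter

# Toy "enterprise" documents; real deployments would index far larger corpora.
DOCS = [
    "How to reset your corporate password",
    "Submitting quarterly expense reports",
    "Configuring the office VPN for remote work",
]

def vectorize(text):
    """Bag-of-words term counts; a real system would use dense embeddings."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two term-count vectors."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def search(query, top_k=2):
    """Return the documents most similar to the query."""
    qv = vectorize(query)
    return sorted(DOCS, key=lambda d: cosine(qv, vectorize(d)), reverse=True)[:top_k]

print(search("set up vpn for remote access"))

Swapping the word-count vectors for model-generated embeddings is what turns this kind of ranking into the semantic search respondents describe.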

While many enterprises may be seeking to adopt enterprise LLMs, most AI advocates in the survey advise proceeding with caution. Almost three-quarters (71%) agree that government regulations are needed immediately to address both legitimate commercial AI use and malicious use. AI and LLMs "can have significant ethical and legal implications, particularly around issues of bias, fairness and truthfulness," the report's authors warn.
