At IBM AI conference, definition of what AI is and isn't in flux

At its user conference in Florida, IBM offered to demystify the term AI. The effort seemed only mildly successful, but the company nevertheless conveyed a lot of energy and drive behind its various offerings that pertain to machine learning and that may be enough to excite its customers.

Rob Thomas wants to "demystify" artificial intelligence. The general manager of IBM's Watson AI technology, and a 20-year veteran of IBM, Thomas has lately written numerous blog posts on the matter, as well as articles in popular publications such as Forbes, and a white paper, The Ladder of AI, which has been praised by none other than tech luminary Tim O'Reilly.

This past Tuesday in Miami was the capper. In front of more than 1,700 attendees, many of them IBM customers, Thomas took the stage at the Intercontinental Hotel, where he proposed to lay out "what is AI, really."

The reason for all the explaining is that few companies are doing anything with AI. Research from Gartner says that only eight to fourteen percent of companies are "getting familiar" with AI, a figure Thomas cited frequently. Those numbers suggest most companies may be left behind out of sheer ignorance as the world moves to an important new class of technology, he suggested.

Thomas made a valiant attempt to unpack the "black box," as he put it, but it's unlikely attendees left with much of an idea what AI is. The references during the day were rather muddled, vague, and at times contradictory. A session in the morning, "Everything you wanted to know about AI but were afraid to ask," featured breezy, jokey references to terminology that probably did more to confuse than to clarify. One slide had a cartoon of a rat in a maze trying to find cheese as the definition of "reinforcement learning." That probably did little to communicate the notion of conducting a search through state-action pairs and forming both a value and a policy network, the key concepts in RL. 
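For readers the cartoon left behind, the rat-in-a-maze metaphor does map onto the textbook formulation. What follows is a purely illustrative sketch, assuming nothing about IBM's tooling: tabular Q-learning on a toy one-dimensional "maze," where the agent learns a value for every state-action pair. Deep RL systems replace this table with value and policy networks.

```python
import random

# Illustrative only: the "rat" starts at cell 0, the "cheese" sits at cell 4.
N_STATES = 5          # cells 0..4; cheese (reward) at cell 4
ACTIONS = [-1, +1]    # move left or right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

# Q-table: estimated future reward for each (state, action) pair
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Environment dynamics: reward 1.0 only on reaching the cheese."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0)

random.seed(0)
for _ in range(500):                      # training episodes
    s = 0
    while s != N_STATES - 1:
        # epsilon-greedy: mostly exploit the best-known action, sometimes explore
        if random.random() < EPSILON:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        nxt, r = step(s, a)
        best_next = max(Q[(nxt, act)] for act in ACTIONS)
        # the Q-learning update rule
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = nxt

# After training, the greedy policy always moves right, toward the cheese.
policy = {s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)}
print(policy)
```

The search through state-action pairs mentioned above is exactly the loop here: try an action, observe the reward, and nudge the table toward a better estimate.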

Different definitions of AI may be a "distinction without a difference," says IBM's Rob Thomas. What's important is for companies to get moving on AI or get left behind, he says. 

(Image: Tiernan Ray for ZDNet)

Thomas's own references to AI, such as "making predictions," were broad and general, and could include many things that are mere statistics rather than artificial intelligence. 

It may not entirely matter, because for IBM's purposes, Thomas and his team probably succeeded in conveying that they have quite a bit in the way of AI-related offerings that may help businesses. They may have piqued customers' curiosity, and that's what matters. 

Also: IBM is working on an alternative to AIOps

The most important announcements included IBM saying it has completed certification of Watson, its collection of AI applications, on top of OpenShift, a version of the Kubernetes application container management platform developed and supported by Red Hat, a company IBM acquired this year. 

Certification is a major milestone. It means IBM customers can be guaranteed the ability to run programs such as the Watson Assistant, which can interact in text conversations, across multiple cloud computing platforms including those of IBM competitors Amazon, Google, and Microsoft. That means IBM can make an argument that it is the only vendor providing a suite of tools and applications pertaining to AI and big data that will operate on all clouds, not just different data centers. 

Thomas was upbeat about IBM's position in an interview with ZDNet, saying the company has more than twice as many reference-able customers using its AI technology as its next closest competitor. "IBM forged a path in enterprise AI with Watson a couple of years ago, and now we are entering a level of maturity" with the technology, he said. IBM has "a lot of momentum we don't always get credit for."

Asked about the somewhat mixed definitions of AI at the conference, Thomas conceded there will be differences of opinion over what's included in the definition. But he also was vigorous in defending the company's product offerings as making substantial use of AI. 

Slide from an IBM talk, "Everything you always wanted to know about AI but were afraid to ask," meant to illustrate the AI phenomenon of reinforcement learning as a rat in a maze.

(Image: Tiernan Ray for ZDNet)

For example, when ZDNet suggested the Watson Assistant is merely a chatbot, and therefore something many would not consider AI, Thomas took exception to that assertion. 

"It bothers me when people use the phrase chatbot, it's not a chatbot," he said. "You and I could launch a chatbot next month, it's just a rules-based interaction model." In contrast, "Watson is a virtual agent at the core of which is an intent classification model that uses machine learning to come to understand intent."

The more data you feed Watson Assistant, Thomas pointed out, "the better it gets. If you feed it data and it gets better, that's the core of AI."
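Thomas's distinction between a rules-based chatbot and a learned intent classifier can be sketched in a few lines. To be clear, this is not Watson Assistant's implementation; it is a minimal naive-Bayes illustration, with made-up training data, of the idea that the model learns a mapping from utterances to intents, and that more labeled examples improve it.

```python
import math
from collections import Counter, defaultdict

# Hypothetical labeled utterances; a real assistant would have thousands.
TRAIN = [
    ("what is my account balance", "check_balance"),
    ("how much money do i have", "check_balance"),
    ("show my balance please", "check_balance"),
    ("i want to reset my password", "reset_password"),
    ("forgot my password help", "reset_password"),
    ("change my login password", "reset_password"),
    ("talk to a human agent", "escalate"),
    ("connect me with support", "escalate"),
    ("i need a real person", "escalate"),
]

word_counts = defaultdict(Counter)   # intent -> word frequencies
intent_counts = Counter()
vocab = set()
for text, intent in TRAIN:
    words = text.split()
    word_counts[intent].update(words)
    intent_counts[intent] += 1
    vocab.update(words)

def classify(text):
    """Pick the intent with the highest log-posterior, Laplace-smoothed."""
    best, best_score = None, -math.inf
    for intent in intent_counts:
        total = sum(word_counts[intent].values())
        score = math.log(intent_counts[intent] / len(TRAIN))
        for w in text.lower().split():
            score += math.log((word_counts[intent][w] + 1) / (total + len(vocab)))
        if score > best_score:
            best, best_score = intent, score
    return best

print(classify("please reset my password"))
```

Unlike a rules-based bot, nothing here matches hand-written patterns; the intent falls out of word statistics learned from the examples, which is why feeding the system more data makes it better.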

Also: IBM offers explainable AI toolkit, but it's open to interpretation

When all is said and done, the nuances of defining AI might be "a distinction without a difference," he said. Any company that wants to get started can simply grab case studies of corporate projects in AI and just copy them. "The best things in the world happen when you go and copy stuff."

The important thing was to get CEOs and their teams to start along the path of working with the technology, given how few companies currently do. 

"I'm not sure CEOs need to know what it is in order to inspire creativity," said Thomas, referring to knowledge of AI. "A CEO can challenge their staff to do a hackathon," he suggested, just to get people working on projects in AI. "Do a hundred projects of five to seven people for six weeks," he offered. "Get someone from the tech team, and get someone from the business team, and pair them together."

"There are very few topics that are CEO- and board-level," Thomas observed. "Security was one five years ago, cloud has kind of been one."

But "AI is that, it is existential," said Thomas with grave reflection. 

Thomas's deputies did a good job of hammering home the various strengths of the Watson tools in a meeting with reporters on Tuesday afternoon. 

Daniel Hernandez, a vice president for data and AI, said the ability to run on OpenShift would be a big deal for clients who may not have lots of data scientists working for them. "If you had to train them all on multiple tools on multiple platforms, things become much more complex." That presumes, of course, that customers are rational and see the economic sense of being cross-platform. 

"Customers have proven to be rational in our case," said Hernandez with a bit of a smirk, citing big names who are clients of IBM's AI, such as KLM and KPMG.

Another deputy, Wes Chung, pointed out that IBM has continually revised its Watson applications as it has seen how they're used. For example, with Watson Assistant, the program has now been extended so that it can engage in dialogue with a person not only via typed queries but also as part of an interactive voice response system. "You don't have one system for chat and another for voice, you just ask your question," Chung explained. 

IBM has added the ability to mine a client's logs of interactions for what's relevant to Assistant, said Chung, because, as he put it, "a lot of customers were training for their domain, and they spent time with the same examples that are variants of the same questions" that other clients already asked. Hence, IBM can help cut down on duplication in implementing technology such as Assistant. 

Also: IBM AI researchers say 'what is the question' is the real question

One of the big announcements of the week was a new feature called "drift detection." Drift detection is the ability to see if a machine learning model built by a company is encountering radically different data once put into production versus the data on which the system was trained. The name is IBM's term, but it seems to be a version of what's classically referred to as "covariate shift," a common occurrence in both ML and plain-old statistics. 

An IBM diagnostic tool, called OpenScale, can now send an alert to a data scientist when such drift crosses a pre-set threshold. The point is to reduce things such as bias, by being mindful of where ML may be out of step with the data. 
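For readers wondering what such a drift check might look like, here is a minimal sketch; it is not IBM's OpenScale implementation. It compares a feature's training-time distribution against what the model sees in production using the population stability index, a common drift statistic, and raises an alert past a pre-set threshold.

```python
import math

def psi(train_values, live_values, n_bins=10):
    """Population stability index between two samples of one feature."""
    lo = min(min(train_values), min(live_values))
    hi = max(max(train_values), max(live_values))
    width = (hi - lo) / n_bins or 1.0

    def bin_fractions(values):
        counts = [0] * n_bins
        for v in values:
            idx = min(int((v - lo) / width), n_bins - 1)
            counts[idx] += 1
        # small floor avoids log(0) for empty bins
        return [max(c / len(values), 1e-4) for c in counts]

    p, q = bin_fractions(train_values), bin_fractions(live_values)
    return sum((pi - qi) * math.log(pi / qi) for pi, qi in zip(p, q))

THRESHOLD = 0.2   # a common rule of thumb: PSI above 0.2 signals major drift

train = [i / 100 for i in range(1000)]          # roughly uniform on [0, 10)
drifted = [5 + i / 200 for i in range(1000)]    # production data shifted to [5, 10)

score = psi(train, drifted)
if score > THRESHOLD:
    print(f"ALERT: drift score {score:.2f} exceeds {THRESHOLD}")
```

The same comparison run against an undrifted sample scores near zero, so the alert only fires when production data genuinely departs from what the model was trained on.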

ZDNet pointed out to Rohan Vaidyanathan, an executive who runs OpenScale, that the problem of bias is a deep one: IBM has done lots of research on the topic but doesn't have concrete answers to eliminate it. Vaidyanathan concurred, but offered a counterpoint: with at least some transparency, companies may actually put ML into production faster. It's rather like adding airbags to cars: they won't eliminate crashes, but they are a minimum requirement to get on the road safely. 

"They [customers] are able to get an idea of what bias there may be faster, so they can get to production faster," said Vaidyanathan, "instead of, you spent money on the model, and now you can't deploy it at all because you don't know in what ways it may be biased."

Also: Can IBM possibly tame AI for enterprises?

All this adds up to a very astute sense of practical issues on the part of Thomas and his deputies. There was a feeling of energy and drive at the conference, a sense that the product leads are genuinely excited about the chance to take market share from competitors in areas such as natural language processing and statistical model building.

Does it matter that the definition of AI this week was all a bit fuzzy? It matters in the sense that some clients coming back home from Miami's gathering may still have no idea what AI is. They may be setting off rather blindly on a quest to get started with a technology rubric they barely understand. And that could be the AI version of tilting at windmills, a quixotic journey with a lot of wrong turns. 

On the other hand, as the technology of AI itself is in flux, it may be better as Thomas suggests to jump in, start some projects, and see if some work and some fail. 

Let a thousand AI flowers bloom, in other words: that's more or less IBM's AI message these days, whatever it may mean to your CIO. And maybe they're right.