Virtually every enterprise software vendor is making noise in the market about artificial intelligence. Unfortunately, much of that marketing buzz offers little substance and leaves customers confused about what is real. Amid this fear, uncertainty, and doubt, the challenge for business leaders is deciding where to invest.
Although market confusion is an issue, the underlying reality is that achieving results with AI requires different strategies, skills, and goals than deploying traditional process automation software.
With traditional software like ERP or CRM, for example, managers re-engineer processes like customer service or manufacturing to find repeatable improvements and efficiencies. Although implementation is often complicated, the benefits and risks are well known.
In contrast, investments in AI demand a different kind of analysis than with traditional enterprise software. Not only is AI technology new to most managers, but getting the desired results depends on having sufficiently large and relevant data sets to feed the AI machine.
Because AI can create results that go far beyond process improvement and efficiency, defining investment outcomes and goals can be far more complex than with traditional process automation software.
Making successful investments in AI, therefore, requires experts across a range of disciplines to think in terms of frameworks and models. The activities include:
Analyzing the impact on current and future business models
Selecting processes and operations in which to invest
Examining machine intelligence technology
Rigorously applying data science to proposed solutions and outcomes
These skills and activities differ significantly from those needed when buying and implementing traditional enterprise software.
Given the importance, complexity, and risk around AI investment, I invited James Cham, one of the most experienced AI investors in the world, to be a guest on Episode 220 of the CXOTALK series of conversations with innovators.
I asked Cham to give enterprise leaders advice on how to invest in AI. During our discussion, he addresses points such as:
Avoiding significant waste on AI projects that offer little value or benefit
Creating a useful economic framework for investing in AI
Understanding the shift from being data-centric to model-centric
Building, managing, and testing model-centric AI applications
You can watch the conversation in the video embedded above and read the complete transcript on the CXOTALK site. You can also download the podcast on iTunes. Below is an edited portion of important points from the discussion.
How should business leaders think about the economic, organizational, and managerial aspects of AI?
We see innovation and advancement on the technical side. And what's lagging is clear thought and understanding on the economic and managerial side.
I think that the biggest risk for most of us right now around machine intelligence is less that the machines will take over and you will no longer have a job.
The biggest risk is that we as managers will make really bad decisions about where to invest, and we'll end up wasting billions of dollars on stupid projects that nobody ends up caring about. I think that, in some ways, is the immediate, interesting, obvious question ahead of us for the next 5-10 years. This is still a poorly understood and badly researched part of the question.
For the last couple of years, I've been asking various economists: "Tell me what is the right microeconomic framework for thinking about how to invest in machine learning or around AI?"
I think in general, most economists and most business school types are still more focused on the large-scale economic implications. But, those larger scale economic implications don't matter unless we make good decisions at a micro level.
There were three guys out of the University of Toronto, in their business school, who came up with what I think is the best framework for thinking about machine learning in general. For most organizations, the right way to think about machine learning is in terms of the cost of prediction. It's analogous to abstracting computation at a certain level: the history of computation is about reducing the cost of arithmetic. And when you make it cheap to add and subtract at a certain scale, you end up with digital cameras and whatnot.
And if you think of AI or machine intelligence as reducing the cost of prediction, then you can apply the same mental framework as in normal economic analysis: "If the cost of prediction goes down, then what are the complements and substitutes for me? And what are the ways I could change my organization at its core?" That's the microeconomic way of thinking about it.
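Cham's cost-of-prediction framing can be made concrete with a small back-of-the-envelope calculation. This is my own sketch, not something from the discussion, and every number in it is hypothetical: compare the cost of having people make each routine decision against the cost of automated predictions plus the expected cost of their mistakes.

```python
# Toy break-even analysis for the "cost of prediction" framing.
# All numbers are hypothetical illustrations, not figures from the interview.

def annual_cost_human(items_per_year, minutes_per_item, hourly_wage):
    """Cost of having people make every prediction/decision manually."""
    return items_per_year * (minutes_per_item / 60) * hourly_wage

def annual_cost_model(items_per_year, cost_per_prediction, error_rate, cost_per_error):
    """Cost of automated predictions, plus the expected cost of their errors."""
    return items_per_year * (cost_per_prediction + error_rate * cost_per_error)

items = 500_000  # e.g., routine decisions made per year
human = annual_cost_human(items, minutes_per_item=2, hourly_wage=30)
model = annual_cost_model(items, cost_per_prediction=0.01,
                          error_rate=0.02, cost_per_error=5)

print(f"human review: ${human:,.0f}")  # 500,000 * (2/60) * 30  = $500,000
print(f"model review: ${model:,.0f}")  # 500,000 * (0.01 + 0.1) =  $55,000
```

As the cost per prediction falls, tasks that were never worth a person's time cross this break-even line, which is exactly the "complements and substitutes" question Cham raises.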
It's fine to have a data-centric organization. But if you have all this data and don't know what to do with it, it's useless. It's good to have better workflows, but if the workflows just generally help you do the same thing over and over again, that's not that useful.
On the other hand, if you as an IT organization thought of yourself as model-centric, then you would consider all the processes inside the organization and ask, "Which processes are valuable enough that I would want to make predictions and decisions without people involved on a day-to-day basis?"
Those models are going to pervade the entire enterprise. That's the exciting part. [However,] the scary part is we have no idea how to build and manage them because these models are different than applications.
Building software is difficult, but at least I have some idea how to QA and test it and deploy it in some consistent way. As a culture, we figured out how to do that. On the other hand, we don't really understand models. For some of these newer models, we don't understand how to think about or introspect on them.
We don't really understand how to test them because, even theoretically, if the model were totally testable, you wouldn't need a model. And then we don't know how to deploy them in a consistent way.
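Cham's contrast between testing software and testing models can be illustrated with a toy sketch of my own (the classifier and data below are invented): a deterministic function admits exact assertions, while a model can only be accepted statistically, against a threshold on held-out data.

```python
import random

# A deterministic function can be tested with exact assertions:
def add(a, b):
    return a + b

assert add(2, 2) == 4  # pass/fail is unambiguous

# A model can only be tested statistically: we check aggregate behavior
# on held-out data against an acceptance threshold, not exact outputs.
random.seed(0)

def toy_model(x):
    """Stand-in for a learned classifier: predicts 1 when x > 0.5."""
    return 1 if x > 0.5 else 0

# Held-out examples (input, true label), with ~10% label noise so the
# model cannot be 100% correct -- mirroring real evaluation data.
holdout = [(x,
            (1 if x > 0.5 else 0) if random.random() > 0.1
            else (0 if x > 0.5 else 1))
           for x in (random.random() for _ in range(1000))]

accuracy = sum(toy_model(x) == y for x, y in holdout) / len(holdout)

# The "test" is a threshold, not an exact answer -- and choosing that
# threshold is a judgment call in a way that `add(2, 2) == 4` is not.
assert accuracy >= 0.85, f"model regressed: accuracy={accuracy:.2%}"
```

The point of the sketch: if you could write an exact assertion for every input, you would not need the model in the first place, which is Cham's observation about testability.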
Most organizations will need to understand where to build, invest, and manage these models.
What are the most interesting AI use cases that you see right now?
I try very hard as an investor not to get either too visionary or too optimistic about things.
It hits everything from things as mundane as looking through people's expenses to capture examples of lack of compliance. I'm an investor in this company called AppZen, which does this.
On the one hand, you'd say, "Gosh, James! This is a boring problem! Who cares about this?" I said that to the founder at first. But the moment you look at how many cases of noncompliance show up in expense reports, it's tens of millions of dollars!
It's just like this little problem sitting on the floor that was not practical to deal with before because you'd have to hire lots of people or outsource it, which would be complicated.
But now, the little bots scrape through all the data, so the cost of prediction goes down dramatically. Suddenly, one of those nagging little things in the back of your mind becomes something to solve in the immediate present.
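To make the expense-screening idea tangible, here is a deliberately crude sketch of my own: it is not how AppZen's product works, and the records, the `flag_outliers` helper, and the z-score rule are all invented for illustration. It simply flags expenses that sit far from their category's norm so that only those go to a human reviewer.

```python
from statistics import mean, stdev

# Hypothetical expense records -- a toy illustration of automated
# compliance screening, not AppZen's actual approach.
expenses = [
    {"employee": "a", "category": "meals",  "amount": 42.50},
    {"employee": "b", "category": "meals",  "amount": 38.00},
    {"employee": "c", "category": "meals",  "amount": 51.25},
    {"employee": "d", "category": "meals",  "amount": 45.00},
    {"employee": "e", "category": "meals",  "amount": 40.00},
    {"employee": "f", "category": "meals",  "amount": 310.00},  # far above typical
    {"employee": "g", "category": "travel", "amount": 220.00},
]

def flag_outliers(records, category, z_threshold=2.0):
    """Flag expenses far from the category mean -- a crude stand-in for a
    learned prediction of which reports deserve human review."""
    amounts = [r["amount"] for r in records if r["category"] == category]
    mu, sigma = mean(amounts), stdev(amounts)
    return [r for r in records
            if r["category"] == category
            and abs(r["amount"] - mu) > z_threshold * sigma]

flagged = flag_outliers(expenses, "meals")
print([r["amount"] for r in flagged])  # [310.0] -- only the $310 meal stands out
```

A real system would learn what "normal" looks like from far richer signals, but even this crude rule shows why cheap prediction turns a previously uneconomical review task into an automated one.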
The hard part is that we don't know, or we don't have good ways yet of predicting, how much these models, or these bots, will help the organization. We don't have good intuition around, "If I go after this problem, maybe I'll save this much money."
[But then, we can solve problems we were not even aware of] or thought were unsolvable. That's the exciting part.
In other words, business people need to gain a better understanding of data?
Yeah, and we're also in this migration from a data world to a model world. The companies that do that best, or figure it out sooner, are going to be all the buzzwords you love: "agile," "dynamic," whatever, all those good things.
The ones that are model-centric, and smart about being model-centric, are going to be the successful ones.
Thanks to Christopher Michel for introducing me to James Cham and to my colleague, Lisbeth Shaw, for assistance with this column.