Don't be alarmed, but you're probably using the term AI wrong

It's more than semantics. Understanding what AI is and what it isn't will help you predict how your industry (and the world) is changing.
Written by Greg Nichols, Contributing Writer


AI is a term that gets bandied about a lot these days. It's the capability du jour, the follow-up hit to "big data." But what does it really mean?

Luis Perez-Breva is a lecturer and research scientist at MIT's School of Engineering and the originator and lead instructor of the MIT Innovation Teams Program. He's the author of Innovating: A Doer's Manifesto for Starting from a Hunch, Prototyping Problems, Scaling Up, and Learning to Be Productively Wrong.

He knows a lot about what AI is and how it will impact our lives going forward. He also knows a lot about what AI isn't. I recently got a chance to pick his brain, and hopefully clear a few things up.

GN: Let's start with what AI is not. In other words, where do you see folks marketing something that has AI, when in fact there's no intelligence in the product?

LPB: I see a rush to over-market as AI anything that involves computation with data. That just sows a lot of confusion. Things advertised as "AI" today have no intelligence of their own.

No matter how useful, these systems just use the AI toolkit. Just as a wrench you use to fix your car isn't a car, the tools for AI aren't themselves intelligence. The same with robotics. You have muscles and bones. But it isn't your muscles that make you intelligent; dinosaurs had them, too. Why, then, should a robotic arm that moves be considered intelligent? It's the very human ability to extend our capabilities with tools (e.g., a hammer, a car, a computer, ...) that is intelligent and makes us capable of innovation without the need for a firmware upgrade.

The conflation of the AI toolkit with intelligence dates to AI's origins. Many thought then that only an intelligent machine could ever beat a chess master. Now we know we were naïve. Teams of highly skilled engineers have used machine learning to program and train computers capable of defeating the world chess and Go champions. But as the field has progressed, we've come to appreciate that intelligence is the ability to interpret and explain the narrative behind a game, not mastery at playing it.

Emphasis on the AI toolkit may have created another problem: a misplaced hunger for data of any kind. But data is not intelligence, so it's absurd to think that the more data you have the more intelligent the system. Data can only guide or misguide. Meanwhile, many industries and businesses have bought into this data frenzy and place a disproportionate importance on data gathering. For example, the pharma industry has long treated the data they gather as some sort of immensely valuable magic and has created several data initiatives. A number of businesses now advertise systems as "AI" simply because they use a lot of data. But absent a deliberate approach to thinking through the problems we really mean to solve and the data we need, most systems advertised as "AI" are rather ineffective, often make problems look overcomplicated, and on occasion require humans to act robotically (not our strength) so data is preserved.

GN: Okay, so what is AI really?

LPB: Imagine a conversation with a machine to figure out a new way to solve a problem you care about, the way Tony Stark interacts with Jarvis and becomes the superhero Iron Man. AI must help us reach further.

I'm inspired to create the kind of AI in which you work together with the computer to sort through the messy nature of a problem--which requires getting to the right questions. In a partnership that allows you to look at our surroundings differently, the AI and the human get there while learning from each other. As a result, you--the human--solve new problems and become more intelligent. New kinds of work and new jobs follow.

Today, to achieve that partnership you do all the work: learning to ask the right questions, learning about the problem and the disciplines it spans, and perhaps eventually scaling up a new technology. You can only address one narrow problem at a time. And you need to know the tools--machine learning, big data, data science, robotics, analytics, and so on. The computer just computes. All the confusion stems from a simple fact: the tools have progressed enormously, computers do impressive things, but the intelligence isn't really "artificial" yet, because humans input it all.

But AI already helps us solve problems we could not solve any other way. For example, 20 years ago we set out to solve the problem that emergency response systems couldn't find you when you called from a cellphone. Science says that in a world without buildings you can triangulate the location, but that doesn't work in our built-up world. Traditional thinking would be to make the physics fit the problem and build increasingly complex models to account for everything that makes our world not "ideal"--and drown in data along the way. But the problem was in the question. The tools for AI gave us a way out: your phone sees a different environment based on where you are. That's all a learning system needed. It works, it needs surprisingly small amounts of data, you get to use it, and it created new jobs.
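The idea in that example can be sketched in miniature: a learning system records the "signal environment" a phone sees at known locations, then locates a new caller by matching their reading against those fingerprints. This is a hypothetical, simplified illustration (the tower names, locations, and signal values are invented, and this is not Perez-Breva's actual system), using plain nearest-neighbor matching:

```python
# Hypothetical sketch: fingerprint-based localization. A phone "sees a
# different environment based on where you are"; here that environment is
# a tuple of signal strengths (dBm) from three cell towers, recorded at
# known locations, and we locate a new reading by nearest neighbor.

import math

# Learned fingerprints at known locations (all values invented).
FINGERPRINTS = {
    "Main St & 1st Ave": (-60, -85, -90),
    "Main St & 5th Ave": (-75, -62, -88),
    "Riverside Park":    (-92, -80, -58),
}

def locate(reading):
    """Return the known location whose fingerprint best matches `reading`."""
    def dist_to(loc):
        # Euclidean distance in signal space between the stored
        # fingerprint and the new reading.
        return math.dist(FINGERPRINTS[loc], reading)
    return min(FINGERPRINTS, key=dist_to)

print(locate((-74, -64, -87)))  # prints "Main St & 5th Ave"
```

Note how little data this needs: a handful of labeled readings, not a physics model of every building in the city, which is the point of the example above.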

With AI, you get to use knowledge a very different way to solve real-world problems.

If anything, I'd say today's AI toolkit proves how wrong we were when the field started. There's more to intelligence than computers playing strategy games. AI helps us get to the core of problems unimpeded by biases or the limits of our perception.

GN: So are we seeing any examples of actual AI? Where will we see it in years ahead?

LPB: We benefit from the AI toolkit every day. Search engines allow us to search in ways once unimaginable; Siri and Alexa take dictation; hum that pesky song and your phone will identify it (finally!); Netflix recommends movies based on algorithms that have never watched the movies; new chip architectures make vision and pattern recognition techniques broadly available. You benefit from the AI toolkit without even noticing it when you ask Waze to direct you, or when your health monitor beeps a certain way. Machine learning is a pillar of Google's strategy and without the AI toolkit you could not have become an Uber or Lyft user or driver.

All these are impressive but severely limited. You can search for information in myriad ways, but the "AI-system" mostly compares data other humans loaded. The "AI" understands nothing; it just communicates back a list of guesses. The computer acts as an intermediary. It can't answer why.

Still, I'm excited because these systems may have helped popularize the AI toolkit just enough that people long for AI that offers a new way to frame and solve problems, like we did for locating cellphones. That's when the AI toolkit is at its best.

To see the difference, consider how investors typically use "AI" as a crystal ball to predict movements in the market or as a glorified calculator with "data analytics" to fit a worldview. Now imagine that you could engage the computer in a slightly different conversation that addresses the questions you actually care about: How might I actually make money? Create jobs? And imagine that the conversation got past the simple statistical or traditional investment strategies to reveal new strategies unconstrained by traditional thinking.

I see a paradigm shift emerging that opens the AI toolkit to a broad range of industries that lack the kind of ready-made questions that may seem obvious in other data-rich environments. That will be AI at its best: a conversation with the computer to solve a real-world problem. Imagine AIs that help engineer resistant crops without genetic manipulation; suggest opportunities for new drugs from inspection of all past clinical trials; help choreograph distributed power generation that would make it easier to restore power after natural disasters, and maybe even allow for making money from saving power; or suggest you recombine old intellectual property to solve a new problem.

GN: There are lots of high-profile people sounding the alarm about AI. Concerns range from job security to the outright obliteration of humanity. Condensing a response into a Q&A won't be easy, but can you speak briefly to the jobs question first?

LPB: We humans are better at building futures than we are at predicting them, so I take these "alarms" as an invitation to look for real-world problems to solve and create a different future than in these dreadful forecasts.

What if we're just getting a glimpse of how the minds of these high-profile voices work--the thinking that moves them to invest time and resources in a different kind of AI than the one they predict?

The discussion about lost jobs isn't really about AI. I think these high-profile people conflate AI with the way management has used automation. AI is not automation. We may have given modern managers more tools to get rid of humans than to seize new technologies and help advance their workforces. Letting go of lots of humans after automating kills jobs and may even kill companies--the U.S. Postal Service famously automated and cut jobs in the 1990s, only to find itself unable to compete with UPS when the golden era of parcel delivery kicked in a decade later--but it is not AI that costs those jobs. Perhaps it was data-driven and process-driven management that cost those jobs.

It is up to innovators to discover how AI can change our daily lives and create new kinds of jobs. In doing so, they'll blaze a path for businesspersons and entrepreneurs to follow.

Perhaps we could replace shortsighted managers with today's "AI". Perhaps giving business leaders tools for real-world problem solving and innovating with technology is a better idea altogether.

GN: How about the notion that AI systems will become independent and could threaten our survival?

LPB: That seems like the plot of a sci-fi movie we've already seen. Admittedly, there is an overabundance of dystopian movies in which the bad guy is a robot. But why choose the Terminator over R2-D2?

I think people are projecting their fears about other things: perhaps the creep toward process- and data-driven management (human managers acting robotically?), the byproducts of rampant social media (incivility and opinions swayed through misuse), and so on.

Still, the notion of a warmongering AI seems far-fetched. It would require Artificial Intelligence, Artificial Awareness, Artificial Consciousness, Artificial Instinct, and--as biology suggests--reproductive ability, self-sustenance, survival instinct, a critical mass population of evil robots, and, in essence, the development of artificial life that's intelligent and feels we humans pose a threat to it, with all that entails. Philosophizing about new ways in which the end of the world may be nigh just because a robot won at Go doesn't make any of it real. And remember, your iPhone runs out of charge in about a day. It takes your brain less energy than that to take you through an entire day. So sit tight, because even though the killer robot may be hiding in plain sight, it may just run out of juice.

But maybe I'm wrong. Maybe AlphaGo Master is secretly plotting to wipe out humans and spend the rest of its days playing Go against itself while the bison get to thrive once again on the American plains.

Personally, I'm most worried about humans misusing the tools of AI: prioritizing quick wins; enacting cost-saving measures that hurt in the long term; or tampering with social media and, more broadly, society. Meanwhile, over-marketing any computational tool as AI distracts us from what we can accomplish, just as a decade praising startups has distracted us from the objective of building companies that actually survive.

I do spend significant time exploring ways to solve real-world problems with AI because new kinds of jobs will follow from that, not just from building new companies or from over-marketing.

GN: You're an expert on the concept of innovation, which is relevant to the AI conversation. How will AI affect the pace of innovation, and what will that mean for us citizens of the world?

LPB: I think AI is going to revolutionize how we, humans, innovate.

Looking ahead, AI will offer us a new way to interrogate our surroundings. We've spent centuries making sense of reality with a way of thinking born out of pencil and paper and trying to make increasingly complex predictions based on that approach, which required that we become skilled at some disciplines and learn the models before we could start building off of them. That has also constrained us.

What if there was another way? What if you could work backwards from a problem you care about and let your understanding of the problem guide what you need to learn so you can figure out a solution that works at scale? It is a different way to acquire an education empowered by your desire to make a problem go away. AI is poised to enable that. AI gives us ways to work directly on the questions we want to address, regardless of whether we have or can even come up with a model for it by traditional means.

AI is going to allow us to think about our education in a more fluid way--as learning that we acquire continuously to work on the next thing. It will not replace teachers; rather, it will guide you to understand what you need to learn next, just as the results your search engine returns trigger ideas for what to search for next.

That's what I've been working towards. Early in my career I became aware that furthering the AI toolkit alone wasn't enough; to fulfill the aspiration for computers that interact with us more "intelligently" we also needed to learn how to define real-world problems meaningfully. The outcome has been a book but also the realization that in the medium to long term AI can level the playing field by making it easier for anyone to innovate, problem-solve, or invent their job starting with what they have.

GN: Will artificially intelligent machines (beings?) approach innovation the same way humans do? In other words, are we giving AI the innovation roadmap that we humans follow, the step-by-step guide, or will AI develop new tools to innovate that we humans don't have?

LPB: We're not building a new species. We're evolving our own the same way the prehistoric axe got our species metaphorically out of the woods. You'll be able to address problems way beyond your current skills, supported by a computer, and learn about the problem as you go, the skills required, and how to solve it.

You'll build a narrative with the computer for how to go about solving real-world problems. Here's how it may work. Imagine your search engine was capable of answering questions more elaborate than keywords allow for. Imagine it answered with actual information and meaningful suggestions you can follow up on either with actions or more questions. That's how you build a narrative to solve a problem and figure out what else you need to learn.

AI is not a replacement for humans. Innovating is about real-world problem solving. In the future, it will still be up to us humans to innovate. The challenge ahead is to figure out ways to allow everyone who wants to innovate to do it beginning with whatever they have while also using the most advanced knowledge we have so they can succeed. AI can become a way to make that possible. But it may take shaking away a few preconceived notions: if you, a human, believe the road to innovation is paved with a step-by-step guide, a recipe, a stage-gate process, or guessing a product before you even begin, chances are you aren't truly innovating but rather posing as a robot. The results of that sort of "innovation," whether by humans or by machines, are rather underwhelming.

GN: Who should read your book? Why?

LPB: You wake up one morning and decide you don't want to wait and see whether those made-up "AI" futures will happen. You resolve to turn those "alarms" into a hunch for how to build a better future. You check your standard innovation library, which tells you that your to-do list should say: "Have an exponential idea that's about a product, known to be disruptive and scalable before I start, and talk to 100 users before hacking a barely viable something together. Calculate market size."

You ponder whether robots may have succeeded at just about anything well before you're done completing that to-do list.

Read my book if you'd rather try and build the future instead of hypothesizing whether your product idea is good enough or second guessing whether you even wanted to do a product in the first place.

You may want to take a look at my book if you believe innovating can be a skill that applies everywhere (from policy to commercialization), that you can get better at through practice, and that you can start practicing with what you have and know today.

Readers tell me that whether you're a manager trying to solve the innovating conundrum in a large corporation or just a curious mind considering taking an entrepreneurial leap of faith, chapters 1 and 6 will speak to you.

Oh, and by the way, my book is beautiful. It has gorgeous artistic illustrations that alone are worth more than the cover price of the book. It's written in straightforward English, without jargon, for doers. And yet, it has also passed peer review.

Most important, it offers you a very different way to think: innovating as real-world problem solving.
