AI augments and amplifies human cognition

"AI's role is to augment," says IBM Watson's CTO, Rob High. It assists and amplifies. "We need to be thinking seriously about whether AIs are really being deployed that way."
Written by Tonya Hall, Contributor

Video: Ethics and responsibilities: Should limits be placed on tech?

Watch the video interview above or read the full transcript below.


Tonya Hall: Amplifying strength and extending reach. Hi, I'm Tonya Hall for ZDNet, and joining me is Rob High, Chief Technology Officer for IBM Watson. Welcome, Rob.

Rob High: Thank you Tonya, appreciate that.

Tonya Hall: What is your role exactly with IBM Watson?

Rob High: I have been the CTO for IBM Watson. As a result of that, my responsibility has been to drive the technology strategy. Of course, I do some evangelism, as we're seeing here today, but I'm also looking after the technical vitality of our skills and making sure we've got the right people on board who will facilitate the creation of this thing we call AI.

Tonya Hall: You just spoke at Mobile World Congress in Barcelona on the topic of "AI Everywhere, Ethics and Responsibilities." What does that mean? Talk about your presentation.

Rob High: Well, if I can preface this with just a short assertion about what I think is the role of AI. And that is that AI is really about augmenting and amplifying human cognition. And what that means to me is kind of picking up where we, as humans, leave off.

I mean, there are certain things that we're really good at, as humans, and certain things that we fail at. We're not really good at reading large quantities of literature in a day. We can't really assimilate all of that and remember or see the patterns of information that are meaningful to us.

So, you know, if these AIs are gonna be useful to us, it's because they're helping us make better business decisions, or helping us see different perspectives, or helping us see through our own biases, and, from that, generate better ideas.

So, AI's role is to augment. It's to assist us and amplify us. And so we need to be thinking seriously about whether AIs are really being deployed that way. Whether, in order to do that, they're making use of information about us that is relevant to the context of our discussion but that could also have the potential of being siphoned off. People are concerned about that: users are concerned about their data being used in inappropriate ways, and businesses are concerned about their data, or the data of their clients, being hijacked and used in inappropriate ways. There is also a larger dystopian view that we see out there sometimes, where people think of AIs as something that might rise up and take over. All of these are things that it's never too early to start thinking about.

We need to understand both how we enable this technology to be useful, to be used for good, and how to discourage it from being used for bad things, where people are being exploited or their information is being abused, and to make sure we have a good sense of what this technology is useful for.

Tonya Hall: So, is AI so good now then that the Turing Test is a thing of the past?

Rob High: I think the Turing Test kind of begs the wrong question in some ways. Because the Turing Test was all about measuring whether the AI was able to fool other people into believing that the AI was another human being. In other words, it's really a test of whether the AI is replicating the human mind.

And, in many ways, AI is not about replicating the human mind. Frankly, we've got plenty of human minds out there already, and from an economic standpoint, replicating the human mind is probably not that useful, and it's certainly nowhere near plausible with the current technology.

So rather than focusing on that, what we ought to be thinking about is what AIs can do to augment us. I like to call it Augmented Intelligence, not Artificial Intelligence. It's intelligence in a form that picks up where we leave off, but really focuses narrowly on specific areas of skill.

And so when we think about it that way, the Turing Test almost becomes irrelevant. What we need to be thinking about is: is it in fact benefiting people in the decisions that we need to make? Is it helping us through the mundane tasks of our jobs, so that we can really perform the rest of our jobs better?

Tonya Hall: So are there limits then, whether it's Artificial Intelligence or Augmented Intelligence, to what it should be allowed to do?

Rob High: Well, everything we know about AI today is limited. We should not presume it is anywhere near the generalized intelligence that we would normally associate with the science fiction of AI. AIs have no sense of self, they have no imagination, they have no way of even questioning themselves. They have none of the characteristics that we would normally associate with humans. That's not just a statement of whether we need to limit them; that's just a fact of where we are with the technology. And, to some extent, economically it's not that interesting or useful to go build AIs that do something on a broader scale around general intelligence.

I mean, I kinda put it the same way that we think about every other technology. If you go back through the entire history of the human species, what you're gonna find is that all the technologies we've created have been formed as tools that essentially had one or two characteristics: either they amplify our strength or they extend our reach. You know, everything. Hammers, screwdrivers, shovels, hydraulics. They all have the property of amplifying our strength or extending our reach, and that's really been the nature of what is economically interesting about all those tools.

And the same thing is true here. We gotta be thinking about AI as a tool that amplifies our intelligence, or extends the reach of our intelligence, to benefit what we do, to benefit how we think. And that's not just a function of what's possible or plausible about the technology, it's really a function of what's economically viable.

Tonya Hall: I remember when you introduced IBM Watson on "Jeopardy." What was it, about six years ago? And six years in technology terms is a long time. Are we, as humans, adapting to and embracing the technology as quickly as technologists expected we would?

Rob High: Before I answer your question: "Jeopardy" actually aired live on TV in February 2011, so seven years ago. And it was August 2011 that we realized there was something to that technology that warranted creating a business value proposition. But to answer your question, yes, people have adapted to AI.

AI is now starting to surface in the form that we think about it: interpreting and recognizing what I call the human experience. Interpreting the things that we say into words, identifying the objects in the images that we see, interpreting and recognizing our intention as we express something. What was it that we were really trying to say?

Those characteristics, those examples of AI, are really quite ubiquitous, and they're actually much more common than we're often aware of. Whether that is in the form of products you may be familiar with, like Siri and Google Home and, more recently, Apple's HomePod. All of those are doing voice recognition. It's actually even more common than that: many of the times when we call into a call center and hear that recording saying "This call is being recorded for security, for improving our service," what's happening on the back end is that those recordings are being transcribed automatically.

So, to some extent, we've already sort of assimilated the application, the adaptation, of these AIs into things that we do without really being aware of it in some ways. Or in some ways we're aware of it, but have gotten accustomed to it.

So in that sense, yeah. I think where there is more room for us to adapt and for us to get accustomed is in figuring out when and how to apply these AIs to our own decisions. And this we don't get exposed to as much. Yes, there are voice assistants out there that we can use to ask questions like: What's the tallest mountain in the world? Or please turn on my lights. Or order me new dog food. But those aren't really affecting our decisions. They're giving us information, but I like to say ... You may have heard me say this before in other situations.

But if I say something like, what's my account balance? That may be something that I need to know, but that's not really my problem. My problem is I'm getting ready to buy something, or I'm trying to figure out how to save up for my kids' education, or something else behind the question, and AIs have the potential to engage us there through what we call "conversational agents." I use that term to distinguish them from chatbots, which are more like the simple things that we see today.

Conversational agents, I think, have the greatest utility for us when they're able to interact with us to get behind that first question and realize that there's something deeper we're trying to solve, something they can facilitate in the process of engaging us. When we start to see more examples of that sort of thing, then I think we're going to see not only a greater utility being delivered to us, but also, we're gonna have to think about, well, how do I make better decisions about what I'm buying? How do I get beyond my limitations today when I'm trying to decide between this product and that product? Which bicycle to buy? At least for me, when I'm out thinking about something like that, I go through a few reviews. I may look at what other people are saying about it, but after 15 minutes or a half hour, or, if I'm really being diligent, maybe an hour of looking at these reviews, it just gets so confusing that I basically give up and say, well, this one feels good. Right?

Well, it doesn't have to be like that. There are ways in which AIs can facilitate our decision-making process there that really are beneficial. So now the question is, are we willing to adapt there? Are we willing to accept that and make use of that so we can find the bicycle that's right for us?
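
As a toy illustration of the distinction High draws between chatbots and conversational agents, the sketch below answers the literal question in one case and probes for the goal behind it in the other. The intents, the balance figure, and the follow-up prompts are all invented for illustration.

```python
# A toy contrast between a chatbot, which answers the literal question,
# and a conversational agent, which probes for the goal behind it.
# The balance figure and follow-up prompts are invented.

def chatbot(question: str) -> str:
    # Answers the surface question and stops there.
    if "balance" in question.lower():
        return "Your balance is $1,234.56."
    return "Sorry, I didn't understand that."

FOLLOW_UPS = {
    "check_balance": (
        "Your balance is $1,234.56. Are you getting ready to buy something, "
        "or saving toward a goal I can help you plan for?"
    ),
}

def conversational_agent(question: str) -> str:
    # Recognizes the intent, then engages to get behind the first question.
    if "balance" in question.lower():
        return FOLLOW_UPS["check_balance"]
    return "Tell me more about what you're trying to do."

print(chatbot("What's my account balance?"))
print(conversational_agent("What's my account balance?"))
```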

Tonya Hall: AI is being programmed by humans, and as humans, we are fallible. We make mistakes and we aren't always right. So what standards need to be created and enacted to ensure AI doesn't become our worst nightmare?

Rob High: Yeah, and this is part of what we talk about when we talk about the ethics of AI. And to be clear, when we develop an AI-based system, we're not literally programming it in the sense that we used to mean, where we're writing a bunch of "if-then-else" statements, these conditions: when these occur, then do that. Which, I think, from an ethical standpoint, in the context of what we're talking about, really matters. Right? That was a choice that a single programmer was making. A single programmer decided what conditions should be asked and answered in order to come to a conclusion.

With AIs, we're training. We're really not setting "if-then-else" statements; we're basically creating models that are taught by a collection of data, where that data kind of represents prior examples. So if we're trying to teach Watson how to recognize the intent of what somebody is asking, we're gonna give Watson five or 10 examples of what other people have asked that meant the same thing, that really expressed the same intent. From that, Watson will learn how to recognize that intent, even when somebody says something slightly different. Because of the way it's been taught to recognize that intent, it will continue to understand it even if you say it differently.
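
To make that concrete, here is a minimal sketch of the training-by-example approach High describes: a handful of sample utterances per intent, a model fit to them, and a new phrasing still mapping to the right intent. It uses scikit-learn as a stand-in rather than Watson's actual service, and the intents and example phrases are invented for illustration.

```python
# A minimal sketch of training-by-example intent recognition, using
# scikit-learn as a stand-in for Watson's service. Intents and example
# utterances are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# A handful of example utterances per intent, as High describes.
examples = [
    ("what's my account balance", "check_balance"),
    ("how much money do I have", "check_balance"),
    ("show me my current balance", "check_balance"),
    ("can you tell me what's in my checking account", "check_balance"),
    ("order more dog food", "place_order"),
    ("buy me another bag of kibble", "place_order"),
    ("I need to reorder pet food", "place_order"),
    ("get me the same dog food as last time", "place_order"),
]
texts, intents = zip(*examples)

# TF-IDF features plus a linear classifier stand in for the trained model.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, intents)

# A phrasing the model never saw should still map to the right intent.
print(model.predict(["how much cash is in my account"])[0])  # check_balance
```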

But that goes to, well, the data. So whose data is it? Does that data really represent the demographics of the population you're trying to serve and the way they might go about expressing a question like that? Does it represent their preferences? Does it represent a certain bias that some group may be carrying within that training data? These are things we have to be very diligent about. When we're setting up an AI, the first thing we have to do is look at the data that we're using to train the model and make sure that it's properly representative of the breadth of the population we're trying to serve: the way that they think, the way that they express things, the way that they would recognize something in what they say.
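
One way to act on that diligence, sketched below under assumed labels: compare how often each user group appears in the training data against its share of the population you intend to serve, and flag groups that fall well short. The group names, target shares, and flagging threshold are all hypothetical.

```python
# A sketch of one diligence step: comparing group representation in the
# training data against the population the system is meant to serve.
# Group labels, target shares, and the flagging threshold are hypothetical.
from collections import Counter

training_examples = [
    {"utterance": "what's my balance", "group": "US English"},
    {"utterance": "what's my balance please", "group": "US English"},
    {"utterance": "how much money do I have", "group": "US English"},
    {"utterance": "how much have I got in my account", "group": "UK English"},
]

# Assumed share of each group among the users you intend to serve.
target_shares = {"US English": 0.5, "UK English": 0.3, "ES Spanish": 0.2}

counts = Counter(ex["group"] for ex in training_examples)
total = sum(counts.values())
for group, target in target_shares.items():
    actual = counts.get(group, 0) / total
    flag = "  <-- underrepresented" if actual < 0.5 * target else ""
    print(f"{group}: {actual:.0%} of training data vs. {target:.0%} of users{flag}")
```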

Tonya Hall: You know, this is a really interesting topic, and as AI becomes more and more prevalent, we really do need to look at how we protect the data, and I know that you guys are doing that at IBM Watson. I know you speak a lot. In fact, if somebody wants to follow you or find out more about what you're doing, how can they do that?

Rob High: So, I'm on Twitter, at @rhigh. R-H-I-G-H. I got on to Twitter actually pretty early, so I got a pretty simple handle there. And I'm also at LinkedIn under Robert High. H-I-G-H.

Tonya Hall: Alright. Well, thanks again for joining us, and if you want to follow me and more of my interviews, you can do that right here on ZDNet or TechRepublic. Or maybe find me on Twitter; I love to tweet at @TonyaHallRadio. Or find me on Facebook by searching for "The Tonya Hall Show." Until next time.
