Making AI communication more human

"We bring science to human communication," says StarTek's Dr. James Keaten. StarTek measures the impact of conversation on customer experience.

Humanizing AI communication: What's needed to make IoT devices sound better. Dr. James Keaten, professor of Communication Studies at the University of Northern Colorado and chief science officer of StarTek, identifies key elements that make communication "human" and discusses what AI scientists have to do to make chatbots and IoT devices better at sounding completely human. Read more: https://zd.net/2ICaFLS

ZDNet's Tonya Hall takes a look at "key elements that make communication 'human' and what AI scientists have to do to make chatbots and IoT devices better at sounding completely human," with Dr. James Keaten, professor of Communication Studies at the University of Northern Colorado and chief science officer of StarTek.

Watch the video interview above or read the full transcript below.



Tonya Hall: Will we ever teach chatbots and IoT devices to be more, well, human? Another look at AI. Hi, I'm Tonya Hall for ZDNet, and joining me is Dr. James Keaten. He is a professor of communication studies at the University of Northern Colorado, and chief science officer at StarTek. Welcome, James.

James Keaten: My pleasure to be with you.

Tonya Hall: So Chief Science Officer, what's your affiliation with Starfleet and the USS Enterprise exactly?

James Keaten: Well you know, I thought since it's StarTek like Star Trek and Chief Science Officer, that technically makes me Spock, so I am very happy.

Tonya Hall: Wow. Okay, that's a great interview. I'm interviewing Spock, okay.

James Keaten: Yes, only logical.

Tonya Hall: Seriously, what does StarTek do, and even specifically, what does Ideal Dialogue do inside of StarTek?

James Keaten: What we're trying to do is bring science to human communication, because in the contact center industry, most of what we've been dealing with is a lot of conventional wisdom and not a lot of empirical testing of what's going on. So, you'll go to conferences and you'll hear people talking about various strategies, but no one's really showing you systematic data that shows what's the impact on the customer experience.

So part of it is bringing social science from the past 60 years, some of what we know about human conversation, and applying it to the business context. That's part of it. The other is using the rigor of scientific method to really try to generate data that is somewhat valid, because there's a lot of variables when you're dealing with human conversation, so you need pretty high-level modeling to say, "This was the impact of this conversation," rather than the product or the price or whatever else. So you need some pretty laser-focused science to get at the human variable.

Tonya Hall: How do you actually go about scientifically measuring the quality of human-to-human conversation today?

James Keaten: So what we usually do is take a team of analysts who have no idea about the customer experience. They have no idea what a customer said in satisfaction surveys or verbatims. Then we have them look at a rubric of various communication behaviors and functions, and then we match it and correlate it to the customer experience to ask the question: what's really driving a good or a bad experience? So we're using double-blind experimental designs to try and get at the actual customer experience.

Again, it's not easy, because sometimes a customer will come on and will be upset already and have a predisposed notion. The agent won't even have a chance to somehow alleviate the customer's concern or frustration. So that makes it very difficult to isolate, what is the contribution of that person, at that time, toward their experience? So it's very, very tricky. Humans are messy, and so it's tough for us to really come up with a fairly precise set of modeling.

Tonya Hall: Well, that leads me to the fact that we are messy and we are complicated. There's a pair of technology trends, if you will, that are emerging today: voice interaction and artificial intelligence. What are the most important points that AI scientists must understand as they work to help machines try to have conversations with humans?


James Keaten: Okay, a couple of things, and first of all, it's a great question.

When we're dealing with AI, what we're finding now is the low-hanging fruit is the best application for AI. For example, if you just received a credit card in the mail and want it to be activated, most individuals don't even want a human. They just want a very quick, transactional experience to get off the phone. But let's go with the opposite end. What about a non-winnable situation? Say it's a cable company and the network is completely down. Hearing a machine tell you, "I'm sorry, the network is down," is not gonna be enough to really resolve that. That's one issue.

Another issue is people have different conversational needs. Sometimes they want somebody to tell them, "It's okay, this is complicated. You're not stupid, we all have to deal with this. I have eight weeks of training just to figure out the remote." But for a machine to say that, I don't know if that need is gonna be met to say, "You're okay, it wasn't you. You didn't screw up." Because there's something about a human talking to another human that gives us a sense of identity and image that I don't think a machine can provide for us.

Tonya Hall: So if AI were able to detect meaning behind a statement like, for example, "I tried to check my account balance on the app, but I couldn't figure out how to do it exactly." How might that actually apply to one of those types of situations?

James Keaten: That's a great question, and it's a tough one, because if you think about it, our social identity is constructed by talking to other people. There's no research that indicates that our identity can be influenced by talking to bots. So the need for assurance, I don't think, can be satisfied by a machine at this point, unless the person doesn't actually know the bot is a machine. It's something about that human communication. There's a part of our brain known as the limbic system: when we feel as though another person understands us, we tend to alter our physiological response to them. Machines don't do that, and so there's the whole notion of empathy: if a machine attempts empathy, I think it's gonna be perceived as disingenuous, and it might actually be counterproductive.

Tonya Hall: We focus a lot on chatbots today, and businesses are being told that they should focus more on technology to handle their customer service needs. Are we too soon to the fight with this? Should we still be focused on human-to-human?

James Keaten: Yes and no, yes and no. I do think there are a number of issues that can be resolved through an interactive voice response or through AI. The question is what happens when we get to highly complex issues where I need somebody to essentially guide me and understand what my responses are and adjust to them?

So AI is still limited when it comes to adaptation of messages. Does the AI know how to adjust the rate? Does the AI know how to adjust the language choices? Does the AI know how to preview what's coming up so the customer can understand? Because say we have an eight-step process: does the AI know that that's gonna confuse most people? So you have to impose landmarks, you have to impose a lot of structure in there so people understand what's gonna happen and what their role is. So at this point, AI is being used primarily for information exchange, so a lot of the meta-communication, the communicating about communicating, is not provided by AI, which can cause a lot of confusion.

Tonya Hall: So will AI ever be so good that the Turing Test becomes obsolete?

James Keaten: Well, you're gonna have to ask by the time ... Will I be alive for that? Because I do think it's gonna happen. I do think once machines can teach themselves communication and monitor human communication, there'll be enough pattern recognition that we can actually do that. If we can use eigenvalues to map human faces, we can probably come up with the essence of communication to start to map functions like identity, relationships, goal necessity, conversational needs, et cetera. I think it can happen, I just think it's too early.
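Keaten's eigenvalue analogy refers to the classic "eigenface" technique, in which principal component analysis (PCA) is applied to face images so each face can be summarized by a handful of coefficients. As a rough illustration of the idea he's extending to conversation, here is a minimal sketch using random toy data in place of real images; the sizes and names are invented.

```python
# Minimal eigenface-style sketch: center a matrix of flattened "images",
# take the top principal components via SVD, and project each face onto
# them to get a compact coefficient representation.
import numpy as np

rng = np.random.default_rng(0)
faces = rng.normal(size=(20, 64))   # 20 toy "images", each flattened to 64 pixels

# Center the data, then get principal directions from the SVD
mean_face = faces.mean(axis=0)
centered = faces - mean_face
_, singular_values, components = np.linalg.svd(centered, full_matrices=False)

# Keep the top 5 "eigenfaces" and project each face onto them
eigenfaces = components[:5]                 # shape (5, 64)
coefficients = centered @ eigenfaces.T      # shape (20, 5)
print(coefficients.shape)
```

Mapping "functions like identity, relationships, goal necessity, conversational needs" would require features of conversation in place of pixels, which, as Keaten says, is the part that hasn't been worked out yet.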

Tonya Hall: Okay James. Then, what might be the biggest challenge confronting AI scientists today as they try to achieve human-quality conversational ability from their machines?

James Keaten: I think it's addressing the higher order communication needs. For example, when something happens, how is it framed? One of the issues that Martin Seligman has found in positive psychology is how we frame an issue is central to how we see the issue. So, does AI know how to frame an issue that sounds empowering, that sounds optimistic, that focuses in on the possible? I don't think we're there yet, so it's easy for somebody to interact with a bot and have no idea what's gonna happen and almost develop a pessimistic attitude. That's one thing.


Another thing is the connection of rapport and physiological convergence. The emotional content of speech, which is conveyed primarily through pitch, volume, rate, et cetera, is not gonna be present in a bot unless it's artificially created, which, again, runs the risk of a disingenuous conversation.

Other things can't really happen: face-saving is too complex right now. I have a tough time teaching it to my graduate students, let alone a bot. But face-saving is a very indirect and highly contextual form of communication, and you have to have a lot of context involved to know what really is needed. If you wanna know, the culture to go to for the best examples of face-saving is Japan. The Japanese are taught from the youngest age how to recognize a face threat and what to do about it, but it's highly, highly contextual. Considering that bots right now are leaving out most of what's in the message, which is the prosodic information, the melody and the rhythm of the voice, as well as facial expressions, posture, et cetera, with that left out, I don't know if they can get enough context to make an accurate assumption.

Tonya Hall: Conversational design is gonna continue to be a conversation we're gonna have as we get more and more of a deeper dive into replacing human interaction with technology. I really appreciate your insight on this, James.

James Keaten: My pleasure.

Tonya Hall: If somebody wants to connect with you, if they wanna follow you, find out more about your work, how can they do that?

James Keaten: Probably best, let's go to James.Keaten, K-E-A-T-E-N, @startek.com. On a show like this, I should also have my Twitter and all the rest, but I'm gonna go old-school here. Since I'm fighting the bots, I'm going old-school with email.

Tonya Hall: Hey, that's fair. I totally think it's fair. Although, if you wanna follow me and more of my interviews, you can do that right here on ZDNet, or you can find me on TechRepublic, or maybe, you know what, find me on Twitter.

James Keaten: There you go.

Tonya Hall: I'm @tonyahallradio on Twitter, or find me on Facebook by searching for The Tonya Hall Show. Until next time.

PREVIOUS AND RELATED COVERAGE

MIT unveils SoFi: This Nintendo-controlled underwater drone swims like a fish

MIT's robot could give marine biologists a less distracting way of capturing up-close footage of sea life.

MIT launches MIT IQ, aims to spur human, artificial intelligence breakthroughs, bolster collaboration

Perhaps the biggest takeaways from MIT IQ are that algorithms need new approaches and multiple disciplines and research areas need to collaborate to drive AI breakthroughs.

Phone rage: MIT startup has find-and-fix tech for call service frustration (Video)

The technology can also help identify depression symptoms, and it may soon empower machines to act more human.