Brent Leary is not only one of the foremost influencers and experts in CRM but has also become equally expert and influential in the world of conversational interfaces. One of the reasons he took an interest to begin with was his deep knowledge of Amazon's daily doings, to the point that he and his friend John Lawson, another influencer in his own right, began the very popular Watching Amazon show on Facebook Live. Even more notable than Brent's influence is his general outlook on life, which is to be as good a person as possible, and he succeeds wildly. He's one of the most likable human beings you will ever run across and a genuinely great person to be able to call a friend. He's one of the best.
So, pay attention to what he says, because on as many Tuesdays as he possibly can, you will be seeing him here, in addition to my content as always, doing a regular post on conversational interfaces. It will be called "Voices Carry" because, as the 21st century continues to wind its way along, voices do carry, and will for a long time to come. So welcome to the first in the series.
Take it away, Brent. And Alexa, listen up.
I've been fixated on smart speakers since I got my first Amazon Echo device in November of 2014. And three and a half years (and four Echo devices) later, I'm even more fixated on the potential they have to change the game from a CRM/customer engagement/customer experience perspective.
I can bury you in stats and figures to show you just how quickly AI-driven voice-first devices like the Echo and Google Home and others have captured the attention of consumers. In fact, I'll do that starting next week as I get further into this weekly series Paul has graciously allowed me to do here. But for now, here are some things I think are important to consider about where we are with this stuff, and more importantly where I think we're going with this...and how quickly we're moving.
Customer adoption is the most disruptive force in digital transformation
At the heart of most digital transformation projects being undertaken today is the need for companies to better position themselves to stay connected with tech-happy customers -- whether those customers are retail consumers or other businesses. And as fast as new technologies with the potential to disrupt the status quo are being introduced today, the disruption only really happens when customers adopt them at scale...and at speed. And the technology that customers adopt at scale and speed tends to be things that make everyday tasks easier -- things that should have been easy to do all along.
And nothing is easier for us than to use our voices to ask for things, to make requests, and to communicate what's truly on our minds. This is one of the reasons why smart speakers - and the digital assistants that live in them - have come out of nowhere in just over three years to be in millions of homes already.
They're Not Smart Speakers - They're Interaction Platforms
I got my first Echo before it was generally available, back in November of 2014, when it was offered to a few lucky Amazon Prime members. I didn't have a clue as to what it was, but it looked interesting and it was 50 percent off the eventual list price. And after setting it up and trying it out a bit, I got hooked quick, fast and in a hurry...as illustrated in the YouTube video I made the day I got it.
Now, people can do, and are doing, so many things with their smart speakers, but that's because of the smarts in the speakers. And those smarts - aka the intelligence coming from AI/machine learning platforms in the cloud connected to the speakers - are being put in more and more devices all the time. And those devices are getting smarter and smarter all the time, because interactions between us and these devices are growing exponentially. And those interactions are more important to the usefulness of the devices than the number of woofers and tweeters in them.
This explains in part why speakers from Amazon and Google are selling like hotcakes, while Apple's HomePod - which does have a higher level of quality when it comes to the traditional aspects of a speaker - is struggling to catch people's attention. People are buying smart speakers for their smarts much more so than for high fidelity sound. Ironically, adding high fidelity to a speaker is also more expensive than adding the "smart" part. Even more ironic: Siri was the first voice assistant to hit the masses, long before Alexa or Google Assistant, but she has fallen well behind the other two in terms of capabilities.
It's not that the Echo was great right from the start at answering a ton of questions, because it wasn't. But the promise was there: eventually you'd be able to get quick answers just by asking, and not by clicking, typing or swiping. And as time went on, you could see it getting smarter and able to do more things. And as time went on, I also found myself asking and doing more through it.
The bottom line here is that smart speakers are members of a growing smart ecosystem of connected devices that we'll be able to talk to wherever we are, designed to make it easier for us to get more done. And if that happens, it will lead to increased interactions between customers and vendors through these devices.
Conversational Interfaces Are the Peanut Butter to AI's Jelly
For the better part of the past two years, everybody has been talking about AI. (As a side note, I've been talking about AI since 1995, because Allen Iverson literally was the reason I bought season tickets for the Sixers back in the day, when he was talking about practice. But I digress...)
Every industry event has focused on what AI will do to improve a company's ability to create customer experiences and extend/improve customer relationships. And with billions of dollars being invested in AI/machine learning, getting the biggest bang from those bucks requires a way to communicate those insights back and forth between vendors and customers.
As easy as swiping, texting and clicking are, nothing is easier for humans than using their voices to ask for what they want. This is why advances in the accuracy of natural language processing/understanding technologies are helping to accelerate the translation of AI-driven insights into something humans can understand and consume.
Think of it this way: How many times have you wished you had a translator with you at a doctor's appointment when you're being told what ails you? Or when you're given a legal document to read? Basically, the more natural the interface between the requester and the AI delivering the insight-driven response, the more likely the insight will be appreciated and used to create a more meaningful/positive experience. So, AI and voice conversational interfaces go together like me and Cherry Coke, or like peanut butter and jelly to make it more relatable to the rest of the world.
The Most Disruptive Forces in Tech Today are Fueling This Disruption
When you think of the most disruptive forces in business and technology of the last couple of decades, two companies that immediately jump to mind are Amazon and Google. They have been primarily responsible for some of the most basic functions we carry out on the web - like searching for information, shopping, reading email...reading books, etc.
But these two are the undisputed driving forces in the fast rise to prominence of smart speakers/voice assistants. Amazon started the ball rolling with the Echo in late 2014 and has put the pedal to the metal ever since. Google got a late start to the party, but they have done a really good job playing catch-up. Between the two of them - with their incredible resources, talent, and ability to shape entire industries - they have led a movement from ground zero less than four years ago to what will soon be a category that could impact almost every type of interaction we can have.
And the sharp rise of the category has forced Apple (who ironically started this whole thing with Siri in 2011 but for some reason almost completely dropped the ball) and other major players to quickly react and add even more fuel to the fire. And niche players are already building out platforms that will allow third parties to build devices and applications that can tap into these voice-first architectures to extend and further accelerate voice-as-an-interface computing even more. If it felt like this voice-first stuff has been moving rapidly and gaining traction, in my opinion you ain't seen nothing yet.
CRM Vendors, Objects in Your Rear View Mirror are A LOT Closer Than They Appear
I think it's safe to say that voice-first computing is rising with a vengeance. But, according to some of the vendors I've spoken with recently at various industry events, it's not on the front burner for them. In fact, a good number of them didn't have anything official they could point to in terms of this even being on their roadmap. Many of them point to a lack of urgency coming from their customer base as the reason for not moving this up on their development lists.
But the potential is there for a voice-first led disruption scenario to impact digital transformation projects now under way that didn't take this area into consideration during their planning stages. And yes, voice is a channel like text and messaging apps, which are also under the conversational interface umbrella. But I think voice adds a completely different dimension that might be hard to account for (and take advantage of) if you aren't specifically planning for it. Vendors who can anticipate the direction voice will take customer engagement, and who create as friction-free a development environment as possible for customers to quickly build experiences with, will have an advantage over those waiting for serious cues from customers before acting.
At their recent Media Day, Oracle showcased an Alexa integration with their HR app that they are working on with a large telecom customer (not named during the demo). The demo showed how employees can ask Alexa how much leave time they have, how they can request time off, and how they can ask why they got paid a certain amount during a pay period. To see more, watch the following video I took at the event:
Zoho has introduced Zia, their conversational AI interface focused on helping sales teams, which debuted a voice interface earlier this year. Pegasystems introduced their intelligent assistant platform, which has voice capabilities. And I suspect you'll see CRM vendors pick up the pace as Amazon and Google plow even more investment into the voice-first foundations and ecosystems they have created. And niche players will begin making even more noise and help to infuse new dimensions into digital transformation efforts.
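Under the hood, a voice interaction like the HR demo above typically works the same way: the assistant's speech platform resolves the spoken request to a named intent, and the skill or bot maps that intent to a backend lookup and returns a spoken response. Here's a minimal sketch in plain Python; the intent names, employee IDs, and HR data are all hypothetical, standing in for whatever a real system of record would provide (this is not Oracle's or any vendor's actual implementation):

```python
# Sketch of voice-assistant intent routing: the platform's natural language
# understanding resolves an utterance to an intent; the skill maps that
# intent to a backend lookup. All names and data here are hypothetical.

# Stubbed HR backend standing in for a real system of record.
HR_RECORDS = {
    "emp-001": {"leave_balance_days": 12, "last_pay_amount": 2450.00},
}

def handle_intent(intent: str, employee_id: str) -> str:
    """Return the spoken response text for a resolved intent."""
    record = HR_RECORDS.get(employee_id)
    if record is None:
        return "Sorry, I couldn't find your employee record."
    if intent == "GetLeaveBalance":
        return f"You have {record['leave_balance_days']} days of leave remaining."
    if intent == "ExplainPaycheck":
        return f"Your last paycheck was {record['last_pay_amount']:.2f} dollars."
    return "Sorry, I didn't understand that request."

# Example: "Alexa, how much leave time do I have?" would resolve to
# something like the GetLeaveBalance intent.
print(handle_intent("GetLeaveBalance", "emp-001"))
```

The point of the sketch is that the voice layer is thin: the heavy lifting is the same CRM/HR data access vendors already build, which is why adding a voice channel is more a planning question than a technical moonshot.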
Early Days Mean Voices Can Carry Where They Shouldn't
With most technologies in the early adoption phase, some bad stuff is gonna happen, especially at the speed this voice-first stuff is moving. And just last week a report came out that a couple's Echo device mistakenly recorded their private conversation (according to Amazon, due to a perfect storm of events) and emailed it to a person on their contact list. Now that's about as wrong as this stuff can go. But, just as has happened before, I suspect the interest in and demand for this from consumers will still hit the mainstream - as long as these types of situations are handled openly and honestly, and are completely fixed.
Thanks to Mr. Greenberg for allowing me to share my observations and experiences here as I dive even further into what's happening in this area, and the potential it has for shaping customer engagement over the near and long term. As the old '80s song from the group 'Til Tuesday says, voices carry. And that's especially the case with the voice of the customer, which is quickly becoming the main interface of the voice-first computing era.