Amazon device event: Evolutionary, not revolutionary, and there's nothing wrong with that

While Amazon's various devices inspired a range of reactions -- from "awesome" to "nonsense" -- the event overshadowed a potentially even bigger announcement that the company made a day earlier.


Don't call it a comeback, it's just a return. I've been gone for a while, but I'm looking to get back in the game, and last week's Amazon device event got me thinking about how it compares to last year's. But, as I said last year, this isn't a blow-by-blow of every device, feature, and skill that was announced -- you can get that from the great wall-to-wall coverage here on this site. This is just a few thoughts from a CRM-ish perspective after taking some time to mull things over.

Interesting numbers – those mentioned and those not

While all the excitement was around new devices, features and skills, there were a few numbers sprinkled in that caught my attention.  According to Amazon:

  • Customers interact with Alexa billions of times a week.
  • Echo devices have received more than 400,000 5-star reviews globally.
  • Fire TV now has 37 million monthly active users.
  • Ring doorbells generate more than 4 billion interactions per month.
  • There are now more than 85,000 Alexa-compatible smart home devices.
  • There are more than 100,000 Alexa skills.
  • There are now "billions of dollars" flowing through the Alexa economy -- that's developer only.

Taken together, these stats point to more activities, more devices, more skills, more interactions, more active users, and more attention being poured into the Amazon voice ecosystem. And usually, more attention means more money, as that last bullet point attests. But one number I've continually looked for, to no avail, is any indication of just how much shopping is being done via Alexa. Now, according to Adobe, customer journeys are being shaped by voice-first devices even when the final transaction isn't carried out through them. But I guess until the numbers are substantial enough to put out there, I'll have to keep waiting. In the meantime, it's apparent that people aren't just buying these devices; they're using them regularly, and for a variety of things.

Devices excite but features and skills extend

Of course, when the event is called Amazon Devices Event, devices are going to get most of the hype -- and understandably so. I've already ordered the Fire TV Stick 4K and an Echo Show 8, pre-ordered a set of Echo Buds, and signed up for the invite-only Echo Loop ring. And I can see picking up a few of the numerous other things announced, but I'm drawing the line at the Echo Frames -- mainly because they don't look good to me at all.

But with that said, and even with my money already spent on more than a few things, last year's event felt more like a device deluge than this year's did. Maybe that's because my expectations were raised by last year's inaugural event, where it felt like Amazon would never stop introducing things. Or maybe because this time around it felt like customer feedback led to some interesting new experiences and designs aimed at making us feel more comfortable (and secure) using Alexa to do more -- even with our current devices. And to me, that shows just how important it is to Amazon to deepen our dependence on Alexa and extend our use of the assistant across devices, tasks, and experiences.

The evolution will be vocalized

And while there were way too many new devices and skills to even try to mention, these additions seem to be taking us further down the path we're already on. This year's event felt like innovation from a productivity, efficiency, and comfort standpoint -- not a moonshot or a 90-degree turn, but the necessary infrastructure updates to accelerate adoption and deepen usage. Some of the more interesting examples Amazon highlighted were privacy/security improvements like:

  • Adding utterances to give customers more transparency. You can now say, "Alexa, tell me what you heard," and Alexa will read back your most recent voice request.
  • Later this year, you'll be able to say, "Alexa, why did you do that?" and Alexa will provide a short explanation of her response to your last voice request.
  • Offering customers the ability to opt out of having their voice recordings annotated by humans.
  • Letting customers opt in to have voice recordings older than three months or 18 months automatically deleted on an ongoing basis.
  • The ability -- added a few months ago -- to say, "Alexa, delete everything I said today," and "Alexa, delete what I just said."
  • Letting customers filter their voice interaction history by device, and review and delete voice recordings made on a specific Alexa device.
  • In November, a new "Home" mode for Ring cameras will roll out, so your camera is never recording video or audio at all when you're home.

Alexa is also using AI to learn behaviors and to automate processes based on that learning, including:

  • Working on training local models to understand which sounds correlate with different activities, so when you put Alexa into Away mode, you'll be notified when she detects likely activity in your home.
  • When you say, "Alexa, I'm off to work," Alexa will switch into Away mode, lock your door, and turn on your exterior lights, all with a simple routine.
  • Alexa will have hunches about things that need replacing around your house, and routines you may want to set up based on your daily habits. The example used was setting your alarm for 6 AM every day and then normally following that up by asking for the weather and the score of last night's game. If you do that a few times, Alexa will notice and suggest that you create a routine. All you have to do is say "yes," and the routine will be created.
  • With Show and Tell, blind and low-vision customers can hold up an item to the Echo Show camera and ask, "Alexa, what am I holding?" and Alexa helps identify the item through advanced computer vision and machine learning technologies with Amazon Rekognition.
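The routine-suggestion behavior described above is easy to picture as a frequency heuristic: if the same sequence of requests keeps following the same trigger, offer to bundle them. A hypothetical sketch -- the event names and the repeat threshold are my own, not how Amazon actually does it:

```python
from collections import Counter

SUGGESTION_THRESHOLD = 3  # assumed: suggest after the pattern repeats 3 times

def suggest_routine(history, trigger="alarm_6am"):
    """Scan a request history and suggest a routine when the same sequence
    of requests repeatedly follows the trigger event.

    history: list of daily sessions, each a list of request names starting
             with the trigger, e.g. ["alarm_6am", "weather", "scores"].
    Returns the suggested follow-up sequence, or None if no pattern repeats
    often enough.
    """
    follow_ups = Counter(
        tuple(session[1:])
        for session in history
        if session and session[0] == trigger
    )
    if not follow_ups:
        return None
    sequence, count = follow_ups.most_common(1)[0]
    if sequence and count >= SUGGESTION_THRESHOLD:
        return list(sequence)
    return None
```

The real system presumably works on far messier signals, but the core idea is the same: observed repetition above some threshold becomes a one-tap (or one-"yes") automation.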

Additionally, Alexa has picked up some more human-like capabilities, such as the ability to detect frustration in a customer's voice and respond accordingly.

Is this AI-generated empathy? Now that could move things into the revolutionary category…beginning next year. But one new evolutionary thing that could be fun is having Samuel L. Jackson as the first celebrity voice for Alexa. He can tell you jokes, let you know if it's raining, set timers and alarms, play music, and more -- all with "a bit" of his own personality. Two versions of his voice will be available -- explicit and non-explicit. But if I'm going to pay $0.99 for a skill to hear Sammy's voice on my Echo, why in the world would I choose non-explicit Sam? If I got this, I'd want to ask Sam to deliver his Pulp Fiction speech on demand. Now can you imagine what a non-explicit version of that would sound like? But I digress…

Will Loops and Frames make up for the lack of a phone?

It seems to me that many of the devices announced at this year's event were newer editions of earlier devices. But some were new for Amazon and are meant to let you take Alexa with you outside the confines of your home. These include:

  • Echo Buds, which Amazon is positioning not only as a way to listen to the new Amazon Music HD service but also as a new shopping experience. For example, you'll be able to ask Alexa whether Whole Foods has canned tomatoes in stock, as well as where in the store to find them.
  • Echo Frames, which don't have a camera or display but give you all-day access to Alexa while on the go.
  • Echo Loop, the Alexa-enabled smart ring with two microphones that pairs with your phone and lets you access information throughout the day.

The Buds, of course, will compete with Apple's AirPods. The Frames have fewer direct competitors, and the Loop has even less competition, if any. And even though Loop and Frames are new and/or in undersaturated categories -- with Buds being in a more competitive one -- will they make up for Amazon missing out on the most important mobile device of all, the smartphone?

eMarketer estimates that roughly 112 million people in the US use a voice assistant at least once a month, making up roughly 40% of internet users. Contrast that with the 81% of Americans who are very active smartphone owners, and it's clear Amazon will have to sell a hell of a lot of Loops and Frames to make up for Alexa not automatically being on that platform the way Siri and Google Assistant are. But coming up with new types of devices may be the only way to make up some of that ground, and creating an assortment of less pricey gadgets could lead to some real winners. It's not like it can't happen: it has been less than five years since the first Echo hit the market, and look at the impact it has had already. And Amazon has the resources and the determination to experiment with innovative tech that can lead to mini revolutions -- even if a gadget may initially seem quirky. But that's still an uphill battle, to say the least.

Potentially bigger news the day before the event

I look forward to these device event days because it's fun to see what new stuff is coming down the pike and to begin thinking about what impact it may have -- or at least to see the reactions these announcements generate, as they tend to range from "this is awesome" to "this is nonsense." And the heat generated by last Wednesday's device-fest may have obscured an announcement Amazon made a day before the event: the Voice Interoperability Initiative.

According to the announcement, the VII is a consortium of more than 30 companies (including Microsoft and Salesforce) whose goals are based on the following priorities:

  • Developing voice services that can work seamlessly with others, while protecting the privacy and security of customers
  • Building voice-enabled devices that promote choice and flexibility through multiple, simultaneous wake words
  • Releasing technologies and solutions that make it easier to integrate multiple voice services on a single product
  • Accelerating machine learning and conversational AI research to improve the breadth, quality, and interoperability of voice services
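The "multiple, simultaneous wake words" goal above essentially boils down to routing: one device listens for several wake words and hands each utterance to the matching assistant. A toy dispatcher, purely to illustrate the idea -- the registry, names, and structure are my own assumptions, not anything from the VII spec:

```python
# Toy sketch of multi-wake-word routing on a single device.
# The assistant registry below is illustrative only.
WAKE_WORDS = {
    "alexa": "Amazon Alexa",
    "cortana": "Microsoft Cortana",
    "einstein": "Salesforce Einstein Voice",
}

def route_utterance(utterance):
    """Return (assistant, command) for the wake word that opens the
    utterance, or (None, utterance) if no registered wake word matches."""
    word, _, rest = utterance.strip().partition(" ")
    # Strip trailing punctuation like "Alexa," before the lookup
    assistant = WAKE_WORDS.get(word.lower().rstrip(",.!?"))
    if assistant is None:
        return None, utterance
    return assistant, rest.strip()
```

The interesting engineering problems the consortium is really after -- running several wake-word detectors at once on constrained hardware, and keeping audio meant for one assistant away from the others -- live underneath this routing step, but the customer-facing contract is this simple: say the wake word, reach that assistant.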

The announcement also included a quote from Salesforce co-founder and chairman Marc Benioff:

"We're in the midst of an incredible technological shift, in which voice and AI are completely transforming the customer experience. We look forward to working with Amazon and other industry leaders to make Einstein Voice, the world's leading CRM assistant, accessible on any device." 

Leading these kinds of initiatives and partnerships may end up being just as important in helping Amazon play catch-up with the mobile platforms that Apple and Google (two companies not part of the VII consortium) have at their disposal.

And with Microsoft and Salesforce on board, Amazon can accelerate its inroads into enterprise software and B2B organizations, as more traditional enterprise vendors look to integrate voice into their offerings. One of those enterprise vendors, Oracle (another company not involved in the consortium), recently announced the availability of its own digital assistant. During its big user conference, OpenWorld, I spoke with the company's VP of AI and Digital Assistant, Suhas Uliyar, who offered up a few reasons why the company invested in building a voice layer into its platform in the following clip:

Why Oracle invested in building a voice layer for its platform During OpenWorld, Brent Leary spoke with Oracle's VP of AI and Digital Assistant Suhas Uliyar, who offered up a few reasons why the company invested in building a voice layer into its platform.

And it's this layer that is allowing Hilton, an early adopter of Oracle's Digital Assistant, to offer its employees a more efficient way to get quick answers to questions, and to allow them to create better experiences for customers. In the clip below from our conversation at OpenWorld, Kellie Romack, Hilton's VP of Digital HR and Strategic Planning, shared with me why adding voice interfaces for the company's mobile-first employee base can be so important:

Why Hilton was an early adopter of Oracle's Digital Assistant Kellie Romack, Hilton's VP of Digital HR and Strategic Planning, shared why adding voice interfaces to the company's mobile-first employee base can be so important.

Now, Amazon does have Alexa Guest Connect, which is cool for when you're staying at a hotel with an Echo device in the room: you can say, "Alexa, connect my account," and the feature will authenticate the request and give you access to your stuff, like favorite playlists. But voice-first technology can also assist the employees working behind the scenes to create better customer experiences. That's also a great opportunity, and Amazon's leadership in the Voice Interoperability Initiative may allow it to integrate Alexa deeply into mission-critical enterprise apps (like CRM/CX, ERP, HCM, SCM, etc.) the way Oracle is building its Digital Assistant into its own suite of cloud offerings. As the Hilton example illustrates, organizations want to create better employee experiences just as they do better customer experiences, if given the tools to do so safely and securely.

Always something to see

While last week's event may have felt more evolutionary than revolutionary, that's not a slam in any way. For example, Amazon says the wake word engine has gotten 50% more accurate in the last year.

And while that's not mind-blowing, it's progress. Evolution is an important part of progress. And it's not even slow progress, as improvements are coming fast and furious, relatively speaking. Every announcement can't be Earth-shattering. Every device can't be game-changing. But consistent, regular, noteworthy progress does set the stage for game-changing episodes to occur in due time.