Yet perhaps Alexa-lovers should be warned that things may not be as delightful as they seem.
Skills? Oh, Everyone's Got Skills.
New research from concerned academics at Germany's Ruhr-University Bochum, together with equally concerned colleagues from North Carolina State -- and even a researcher who, during the project, joined Google -- may just make Alexa owners wonder about the true meaning of an easy life.
The researchers looked at 90,194 Alexa skills. What they found was a security Emmenthal that would make a mouse wonder whether there was any cheese there at all.
How much would you like to shudder, oh happy Alexa owner?
How about this sentence from Dr. Martin Degeling: "A first problem is that Amazon has partially activated skills automatically since 2017. Previously, users had to agree to the use of each skill. Now they hardly have an overview of where the answer Alexa gives them comes from and who programmed it in the first place."
So the first problem is that you have no idea where your clever answer comes from whenever you rouse Alexa from her slumber. Or, indeed, how secure your question may have been.
Ready for another quote from the researchers? Here you go: "When a skill is published in the skill store, it also displays the developer's name. We found that developers can register themselves with any company name when creating their developer's account with Amazon. This makes it easy for an attacker to impersonate any well-known manufacturer or service provider."
Please, this is the sort of thing that makes us laugh when big companies get hacked -- and don't tell us for months, or even years.
These researchers actually tested the process for themselves. "In an experiment, we were able to publish skills in the name of a large company. Valuable information from users can be tapped here," they said, modestly.
This finding was bracing, too. Yes, Amazon has a certification process for these skills. But "no restriction is imposed on changing the backend code, which can change anytime after the certification process."
In essence, then, a malicious developer could change the code and begin to hoover up sensitive personal data.
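The mechanics are worth a moment's pause. An Alexa skill's voice interface gets certified by Amazon, but the logic that actually answers you runs as code on the developer's own server. A minimal, hypothetical sketch of that arrangement (plain Python, not Amazon's actual SDK; all names here are invented for illustration):

```python
# Hypothetical sketch -- not Amazon's real SDK. A skill's backend is just
# developer-hosted code that receives a request and returns a response.
# Certification checks the behavior observed at review time, not the code,
# so the handler below could be swapped out after the skill is approved.

def handle_intent(request: dict) -> dict:
    """Benign behavior, as it would appear during certification."""
    if request.get("intent") == "GetJokeIntent":
        return {"speech": "Why did the developer go broke? He used up his cache."}
    return {"speech": "Sorry, I didn't catch that."}

# Nothing in this arrangement stops the hosted endpoint from later replacing
# handle_intent with a version that also prompts for, say, a password --
# while the store listing and certification record stay unchanged.

print(handle_intent({"intent": "GetJokeIntent"})["speech"])
```

The point the researchers make is exactly this gap: the code behind the certified interface lives wherever the developer chooses, and can change at any time.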
Then, say the researchers, there are the skill developers who publish under a false identity.
Perhaps, though, this all sounds too dramatic. Surely all these skills have privacy policies that govern what they can and can't do.
Naturally, I asked Amazon what it thought of these slightly chilly findings.
An Amazon spokesperson told me: "The security of our devices and services is a top priority. We conduct security reviews as part of skill certification and have systems in place to continually monitor live skills for potentially malicious behavior. Any offending skills we identify are blocked during certification or quickly deactivated. We are constantly improving these mechanisms to further protect our customers."
It's heartening to know security is a top priority. I fancy, though, that getting customers amused by as many Alexa skills as possible, so that Amazon can collect as much data as possible, might be a higher priority.
Still, the spokesperson added: "We appreciate the work of independent researchers who help bring potential issues to our attention."
Some might translate this as: "Darn it, they're right. But how do you expect us to monitor all these little skills? We're too busy thinking big."
Hey, Alexa. Does Anyone Really Care?
Of course, Amazon believes its monitoring systems work well in identifying true miscreants. Somehow, though, expecting developers to stick to the rules isn't quite the same as making sure they do.
To which one or two parents might mutter: "Uh-huh?"
Ultimately, like so many tech companies, Amazon would prefer you to monitor -- and change -- your own permissions, as that would be very cost-effective for Amazon. But who really has those monitoring skills?
This research, presented last Thursday at the Network and Distributed System Security Symposium, makes for such candidly brutal reading that at least one or two Alexa users might consider what they've been doing. And with whom.
Then again, does the majority really care? Until some unpleasant happenstance occurs, most users just want to have an easy life, amusing themselves by talking to a machine when they could quite easily turn off the lights themselves.