X
Tech

Alexa and Google Home devices leveraged to phish and eavesdrop on users, again

Exclusive: Amazon, Google fail to address security loopholes in Alexa and Home devices more than a year after first reports.
Written by Catalin Cimpanu, Contributor

Hackers can abuse Amazon Alexa and Google Home smart assistants to eavesdrop on user conversations without users' knowledge, or trick users into handing over sensitive information.

The attacks aren't technically new. Security researchers have previously found similar phishing and eavesdropping vectors impacting Amazon Alexa in April 2018; Alexa and Google Home devices in May 2018; and again Alexa devices in August 2018.

Both Amazon and Google have deployed countermeasures every time, yet newer ways to exploit smart assistants have continued to surface.

The latest ones were disclosed today, after being identified earlier this year by Luise Frerichs and Fabian Bräunlein, two security researchers at Security Research Labs (SRLabs), who shared their findings with ZDNet last week.

Both the phishing and eavesdropping vectors are exploitable via the backend that Amazon and Google provide to developers of Alexa or Google Home custom apps.

These backends provide access to functions that developers can use to customize the commands to which a smart assistant responds, and the way the assistant replies.

The SRLabs team discovered that by adding the "�. " (U+D801, dot, space) character sequence to various locations inside the backend of a normal Alexa/Google Home app, they could induce long periods of silence during which the assistant remains active.

Phishing personal data

The two demos embedded below show how an attacker could carry out a phishing attack on both devices.

The idea is to tell the user that an app has failed, insert the "�. " to induce a long pause, and then prompt the user with the phishing message after a few minutes, tricking the target into believing the phishing message has nothing to do with the previous app with which they just interacted.

For example, in the videos below, a horoscope app triggers an error, but then remains active, and eventually asks the user for their Amazon/Google password while faking an update message from Amazon/Google itself.

Notice in the first video how Alexa's blue status light remains active and never shuts off, a clear indicator that the previous app is still active and busy interpreting a long seriesof "�. " character sequences.

Eavesdropping on unsuspecting users

The "�. " can also be used in a similar fashion for eavesdropping attacks. However, this time, the character sequence is used after the malicious app has responded to a user's command.

The character sequence is used to keep the device active and record a user's conversation, which is recorded in logs, and sent to an attacker's server for processing.

Both of these attacks exploit the fact that while Amazon and Google verify and vet Alexa and Google Home apps when they are submitted, they do not do the same for subsequent app updates.

In an email to ZDNet, the SRLabs team said they reported the issue to both vendors earlier this year, yet the companies have failed to address the issue.

"Finding and banning unexpected behavior such as long pauses should be relatively straight-forward," the SRLabs team told ZDNet. "We are surprised that this hasn't happened since reporting the vulnerabilities several months ago."

Both Amazon and Google told ZDNet via email that they've took corrective actions following SRLab's report earlier this year, although they did not provide this information before this article's publication.

"All Actions on Google are required to follow our developer policies, and we prohibit and remove any Action that violates these policies. We have review processes to detect the type of behavior described in this report, and we removed the Actions that we found from these researchers. We are putting additional mechanisms in place to prevent these issues from occurring in the future."

Google also wanted Home assistant owners to know that their device will never ask them for the account password, and that Google staff are currently reviewing actions from all third-party apps.

Amazon said the same thing -- that its devices would never ask for a user's password -- and that they "put mitigations in place to prevent and detect this type of skill behavior and reject or take them down when identified."

We'll see in the future if this works. Some researchers have already reached out to ZDNet after this article's publication with new methods to add malicious actions to both devices.

Article updated with comment from Google at 3:00pm ET, October 20, and Amazon at 11:00am ET, October 21.

Top tips to keep safe around your voice assistant

Editorial standards