Amazon's engineers are tweaking Alexa's algorithms to help the virtual assistant anticipate users' requests, and offer to resolve them, before they are even voiced.
Asked, for example, how long a cup of tea should brew, Alexa will be able to offer to set a timer for the recommended steeping time.
Alexa engineers Anjishnu Kumar and Anand Rathi explained in a blog post that the improvement is the continuation of efforts to make interactions with the virtual assistant as natural as possible.
Chatting with Alexa should feel as natural as talking to another human being, said the engineers, and enabling the technology to anticipate what's coming next in a conversation is key to a smooth flow of dialogue.
"Now, we're taking another step towards natural interaction with a capability that lets Alexa infer customers' latent goals – goals that are implicit in customer requests but not directly expressed," wrote Kumar and Rathi.
Achieving this degree of intelligence for a virtual assistant is difficult, and requires a number of sophisticated algorithms. To figure out what the latent goal might be, Alexa has to analyze multiple features in users' requests and compare them to previous patterns of interaction. The model has to learn from customers' behavior, remembering, for example, that users who ask how long tea should brew often go on to request a timer for that amount of time.
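Amazon hasn't published implementation details, but the pattern-learning idea described above can be illustrated with a deliberately simple frequency-based sketch (all class and method names here are hypothetical, and a real system would use learned models rather than raw counts):

```python
from collections import Counter, defaultdict


class LatentGoalModel:
    """Toy illustration: learn which follow-up request most often
    succeeds a given initial request, from logged interaction pairs."""

    def __init__(self):
        # Maps an initial request to a counter of observed follow-ups.
        self.followups = defaultdict(Counter)

    def observe(self, request, followup):
        # Record one logged session: an initial request and the
        # follow-up the user actually issued next.
        self.followups[request][followup] += 1

    def suggest(self, request, min_count=2):
        # Propose the most common follow-up, but only if it has been
        # seen often enough to be worth suggesting.
        counts = self.followups.get(request)
        if not counts:
            return None
        followup, n = counts.most_common(1)[0]
        return followup if n >= min_count else None


model = LatentGoalModel()
model.observe("how long should tea steep", "set a timer for 5 minutes")
model.observe("how long should tea steep", "set a timer for 5 minutes")
model.observe("how long should tea steep", "play relaxing music")
print(model.suggest("how long should tea steep"))  # set a timer for 5 minutes
```

The `min_count` threshold stands in for the real system's caution: a follow-up that has only rarely been observed is not worth offering.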
No less challenging is the process of creating a follow-up suggestion based on the information that Alexa has identified in the first request. The algorithm has to gather a contextual understanding of the words uttered by the user, in order to carry the information over in a structured way for the next skill to use. Amazon's engineers have developed a so-called "context carryover model" to enable the transition.
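Again, the actual carryover model is not public; as a rough sketch of the idea, the snippet below pulls a duration out of the assistant's answer and packages it as a structured slot that a hypothetical follow-up Timer skill could consume (the skill name and slot layout are invented for illustration):

```python
import re


def carry_over_context(answer_text):
    """Toy 'context carryover': extract a duration mentioned in the
    assistant's answer and hand it to the next skill in structured form."""
    match = re.search(r"(\d+)\s*minutes?", answer_text)
    if not match:
        return None  # nothing to carry over
    return {
        "skill": "Timer",  # hypothetical follow-up skill
        "slots": {"duration_minutes": int(match.group(1))},
    }


print(carry_over_context("Black tea should steep for 5 minutes."))
```

In practice this extraction is done by learned models over the whole dialogue context, not a regular expression, but the output is the same in spirit: structured information the next skill can act on without re-asking the user.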
One of the hardest tasks was figuring out whether the virtual assistant should try to anticipate users' intentions at all. "Our early experiments showed that not all dialogue contexts are well suited to latent-goal discovery," said Kumar and Rathi.
"When a customer asked for 'recipes for chicken', for instance, one of our initial prototypes would incorrectly follow up by asking, 'Do you want me to play chicken sounds?'"
The engineers used a deep-learning model that accounts for various elements in the dialogue with the customer before deciding whether a suggestion should be triggered or not. The algorithm makes an assessment based on factors ranging from the text of the dialogue to the users' previous behaviors towards the virtual assistant, including how often they engage with Alexa's multi-skill suggestions.
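The details of that deep-learning model aren't public, but the gating decision it makes can be sketched as a logistic score over a few features. The feature names and weights below are invented for illustration; the real model learns its parameters from data:

```python
import math


def should_trigger(context_similarity, engagement_rate,
                   bias=-2.0, w_sim=3.0, w_eng=2.0, threshold=0.5):
    """Toy trigger gate: combine two hypothetical features --
    how well the dialogue context matches a known latent-goal pattern,
    and how often this user accepts multi-skill suggestions --
    into a probability, and only suggest above a threshold."""
    score = bias + w_sim * context_similarity + w_eng * engagement_rate
    prob = 1.0 / (1.0 + math.exp(-score))  # logistic squashing
    return prob >= threshold


print(should_trigger(0.9, 0.8))  # True: strong context match, engaged user
print(should_trigger(0.1, 0.2))  # False: weak match, user rarely engages
```

The negative bias makes the gate conservative by default: with weak evidence it stays silent, which matches the engineers' point that not every dialogue context suits a follow-up suggestion.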
"We are thrilled about this invention as it aids discovery of Alexa's skills and provides increased utility to our customers," said the Amazon engineers.
But although Alexa's engineers insist that the algorithm only suggests a follow-up when it finds the context suitable, it's easy to imagine how intrusive Alexa could become if the technology misfires. If Alexa misreads the context of a question and starts volunteering irrelevant suggestions, the feature could quickly become a nuisance to users.
For now, the new capability is available to Alexa customers in English in the US, and requires no additional effort from skill developers to activate.