We've heard for at least a year that Microsoft was working to reposition Cortana from a standalone assistant competing head-on with Alexa and Google Assistant to more of a productivity aide. Details on what that meant were scarce until this week, when Microsoft started sharing more about its new Cortana strategy.
We had a few clues before Microsoft's Build 2019 developers conference this week. Microsoft execs had told me that they believed they could differentiate Cortana from other personal digital assistants by enabling it to handle more complex, multi-part queries. These kinds of multi-turn interactions are happening thanks to work Microsoft has done in its research labs, plus technology Microsoft acquired last year when it bought Semantic Machines.
But there's lots more happening under the covers that Microsoft officials think will make Cortana a smarter and more useful productivity assistant. Andrew Shuman, Microsoft's Vice President of Product for Cortana, and Dan Klein, one of the founders of Semantic Machines (who is now a Microsoft Technical Fellow), filled me in on some of the details.
Instead of providing a single answer to a single, simple query, Cortana is going to be powered by "conversational data." Microsoft has been building up its person-centric store of knowledge for Microsoft 365 and Office 365 to provide this data. The team has done this by working closely with the Microsoft Teams team, so that Cortana will be able to understand a query like "Call John" and have a very good idea of which "John" a user means because of the information stored in the knowledge base, or "substrate," Shuman told me.
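To make that idea concrete, here is a minimal sketch of how a person-centric store could disambiguate "Call John." Everything here -- the contact fields, the signals, and the weights -- is an illustrative assumption, not Microsoft's actual substrate API; the point is simply that recent interaction data lets the assistant rank candidates instead of asking the user every time.

```python
from dataclasses import dataclass

@dataclass
class Contact:
    name: str
    emails_last_30_days: int    # hypothetical signal: recent email volume
    meetings_last_30_days: int  # hypothetical signal: recent meetings
    same_team: bool             # hypothetical signal: org proximity

def relevance(c: Contact) -> float:
    # Weighted mix of interaction signals; the weights are made up for illustration.
    return c.emails_last_30_days + 2 * c.meetings_last_30_days + (5 if c.same_team else 0)

def resolve(query_name: str, contacts: list[Contact]) -> Contact:
    # Match on first name, then pick the candidate with the strongest signals.
    candidates = [c for c in contacts if c.name.split()[0].lower() == query_name.lower()]
    return max(candidates, key=relevance)

contacts = [
    Contact("John Smith", emails_last_30_days=1, meetings_last_30_days=0, same_team=False),
    Contact("John Doe", emails_last_30_days=12, meetings_last_30_days=4, same_team=True),
]
print(resolve("John", contacts).name)  # prints "John Doe"
```

In a real system the signals would come from the substrate's graph of mail, calendar, and org data rather than hand-entered counts, but the ranking idea is the same.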
Semantic Machines provides a platform for building conversational experiences. The secret sauce is the ability to process language along with context, so that everything a user says to Cortana is interpreted, not just answered as a single-skill-type query. Instead of a vendor trying to guess what a user might want to do and building a skill to handle it, Microsoft is focused on training the conversational engine powering Cortana by demonstrating things a user might want to do, then using machine learning to connect contexts that may be related but not an exact match. That's how Microsoft officials think they can turn Cortana into a multi-turn, multi-skill-enabled service.
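The difference between single-skill queries and multi-turn conversation comes down to carrying state between utterances. The toy dialogue loop below shows the idea: a later turn like "move it to Friday" only makes sense because the engine remembers what "it" refers to. This is a hand-rolled illustration of context carry-over, not Semantic Machines' actual engine, and the utterance parsing is deliberately simplistic.

```python
class DialogState:
    """Tracks conversational context across turns (illustrative only)."""

    def __init__(self):
        self.last_entity = None  # the most recently mentioned entity

    def handle(self, utterance: str) -> str:
        if utterance.startswith("schedule "):
            # A new entity enters the conversation and becomes the referent.
            self.last_entity = utterance[len("schedule "):]
            return f"Scheduled: {self.last_entity}"
        if utterance.startswith("move it to "):
            # "it" is resolved from context rather than re-asked.
            day = utterance[len("move it to "):]
            if self.last_entity is None:
                return "Move what?"
            return f"Moved {self.last_entity} to {day}"
        return "Sorry, I didn't understand."

state = DialogState()
print(state.handle("schedule design review"))  # prints "Scheduled: design review"
print(state.handle("move it to Friday"))       # prints "Moved design review to Friday"
```

A single-skill assistant would treat "move it to Friday" as an unanswerable standalone query; the stored context is what turns two skills (scheduling and rescheduling) into one conversation.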
Microsoft isn't providing a delivery time frame for when testers might get to try this out. Unsurprisingly, the first kinds of scenarios that Cortana users will see are those focused on work productivity. To make that happen, the substrate -- which is the same one that powers the new unified Microsoft Search experience -- needs to be populated with entities like contacts, emails, calendars and identities. This collection of data is then loosely coupled with public search data, such as location and news.
Ultimately, Microsoft is planning to work with partners to add third-party data to the substrate, said Shuman. And as announced at Build, Microsoft is planning to make available its new conversational engine technology to other virtual assistants at some point, as well, through its Bot Framework and Azure services.
If you're wondering (as I was) whether Microsoft's new Cortana focus means its integration work with Amazon's Alexa is on the back burner, Shuman says this isn't the case. Alexa will provide access to different kinds of pools of data, like shopping data, which Microsoft is hoping to access via Cortana-Alexa integration.