But there was another part of this year's Build 2019 demonstration that happened so quickly that many (myself included) may have missed it at first: Microsoft showed this service working not only on its custom microphone-array reference hardware -- as it did at last year's Build -- but also using a cloud-powered virtual microphone array.
"Algorithms for combining speech information at multiple levels yield transcription accuracy that approaches that from close-talking microphones," say the Project Denmark researchers. There's a new Project Denmark page on the Microsoft Research site (thanks to WalkingCat for the link), as well as a technical report on the project.
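Microsoft hasn't published the exact algorithms behind Project Denmark's virtual microphone array, but the core idea of fusing audio from several ad hoc devices can be illustrated with classic delay-and-sum beamforming: estimate each device's arrival delay, time-align the channels, and average them so the speech adds up while uncorrelated noise cancels. The sketch below is a simplified, hypothetical illustration of that general technique, not Microsoft's implementation; the signal, delays, and noise levels are made up for the demo.

```python
import numpy as np

def delay_and_sum(channels, delays_samples):
    """Time-align each channel by its (known) delay in samples, then average.

    channels: list of equal-length 1-D numpy arrays, one per device.
    delays_samples: integer propagation delay for each channel.
    """
    aligned = [np.roll(sig, -d) for sig, d in zip(channels, delays_samples)]
    return np.mean(aligned, axis=0)

# Synthetic demo: a decaying 440 Hz tone "heard" by three phones,
# each with a different propagation delay and its own sensor noise.
fs = 16000
t = np.arange(fs) / fs
clean = np.sin(2 * np.pi * 440 * t) * np.exp(-5 * t)

delays = [0, 40, 95]  # hypothetical per-device delays, in samples
rng = np.random.default_rng(0)
channels = [np.roll(clean, d) + rng.normal(0.0, 0.3, clean.size)
            for d in delays]

out = delay_and_sum(channels, delays)
# Averaging N aligned channels leaves the speech intact while reducing
# uncorrelated noise power by roughly a factor of N (here, 3).
```

In a real ad hoc array the delays are not known in advance and must be estimated (for example, via cross-correlation between channels), and the devices' clocks drift relative to one another -- which is a large part of what makes a cloud-coordinated virtual array hard.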
"Project Denmark can potentially help our customers more easily transcribe conversations anytime and anywhere using Azure speech services, with or without a dedicated microphone array DDK. Future application scenarios are broad. For example, we may pair up multiple Microsoft Translator applications to help multiple people communicate more effectively using mobile phones to minimize language barriers."
Microsoft announced this week that it will make the mysterious circular microphone array hardware we first saw at Build 2018 available to those outside the company in the form of device developer kits (codenamed "Princeton Tower"). Audio-only microphone array DDKs can be purchased from http://ddk.roobo.com for roughly $100. Advanced audio-visual microphone array DDKs are available from Microsoft systems integration partners.
The Speech Devices Developer Kit is made for those who want to build devices for custom virtual assistants, conversation transcription, and smart speakers. (The Azure Kinect developer kit also can handle conversation transcription, for what it's worth.)