On May 7 at its Build 2018 conference, Microsoft is making a number of new edge-computing announcements. The company is open sourcing the Azure IoT Edge runtime to allow customers to modify and debug it more easily.
Azure IoT Edge is Microsoft's cloud service designed to give users insights from the rapidly growing amount of data collected by simple sensors and computers situated at the edge of networks -- without having to send that data to the cloud for central processing. The IoT Edge runtime is the piece that runs on each IoT Edge device and manages the modules deployed to it.
Microsoft is enabling its Custom Vision cognitive service to run on Azure IoT Edge so that devices like drones and industrial equipment can perform vision-related tasks even when not connected to the cloud. Microsoft execs said this is the first of a handful of Cognitive Services that will work on edge devices, with others coming "over the next several months."
Drone vendor DJI is working with Microsoft to build a software development kit for Windows 10 PCs. The kit, according to Microsoft, will give users full flight control and real-time data transfer to Windows 10 devices. The two companies also will work together on additional services that use Azure IoT Edge and Microsoft AI services for customers in agriculture, construction, public safety and other vertical markets.
Microsoft also announced at Build a partnership with Qualcomm Technologies to create a vision AI developer kit that runs Azure IoT Edge. The kit will provide the hardware and software required to develop camera-based IoT products, Microsoft officials said, and will give developers a way to build products using Azure Machine Learning services while taking advantage of hardware acceleration via the Qualcomm Vision Intelligence Platform and Qualcomm AI Engine. Using these products, services can be downloaded from the cloud and run locally on edge devices.
Microsoft is targeting makers of in-car and/or in-home assistants, smart speakers and other voice-enabled devices with a new Speech Devices software development kit (SDK) that is aimed at providing audio processing from multi-channel sources for more accurate speech recognition, far-field voice, noise cancellation and more.
On the futures front, Microsoft also is taking the wraps off "Project Kinect for Azure," which officials described as "a package of sensors from Microsoft that contains our unmatched time of flight depth camera, with onboard compute, in a small power-efficient form factor." I asked if the "next-generation depth camera" in this package is the same camera that will be in the next version of the HoloLens, but officials declined to say.
A spokesperson did tell me that "the Microsoft hardware implementing these sensors will be available in 2019. To learn more and to sign up so you can be one of the first developers to use Project Kinect for Azure, go to http://www.azure.com/."