Brainwave consists of a high-performance distributed system architecture; a hardware deep-neural-network engine running on customizable chips known as field-programmable gate arrays (FPGAs); and a compiler and runtime for deploying trained models, according to today's Microsoft Research blog post.
If you want a slightly less buzzword- and acronym-laden picture of what this looks like, this might help:
As I noted late last month, Microsoft officials were planning to discuss Brainwave at the company's recent Faculty Research Summit in Redmond in July, but changed their minds.
At Hot Chips 2017, Microsoft officials said that using Intel's new Stratix 10 chip, Brainwave achieved sustained performance of 39.5 teraflops without batching. Microsoft's point: Brainwave will enable Azure users to run complex deep-learning models at this level of performance.
Microsoft sees Brainwave running on hardware microservices as pushing the boundary of the kinds of AI-influenced services that can be deployed in the cloud, including computer vision, natural-language processing and speech.