
Microsoft Build goes gaga for AI: Azure Machine Learning and beyond

Need proof that both AI and data are crucial to Microsoft's success, even eclipsing Windows in importance? This year's developer event delivers.
Written by Andrew Brust, Contributor

For the first couple of years that Microsoft held its Build conference, the event was all about Windows. In the years since, the scope has widened and Build has become the company's broad annual developer confab. At this year's show, being held today through Wednesday in Seattle, there is no shortage of data- and AI-related announcements and demonstrations. If you needed proof that both are crucial to Microsoft's success, even eclipsing Windows in importance, this year's show is it.

On the AI side, there's so much to discuss, it's hard to know where to begin. Luckily, in a private briefing, Microsoft's Matt Winkler helped me understand the AI announcements at a depth that allows me to explain them better to you. Without that briefing, I'd just be regurgitating text from press releases. And that's no fun.

I do so like AML and HAM
I'll start with the part that is perhaps the most complicated to explain, but potentially the most interesting: the announced preview of Azure Machine Learning Hardware Accelerated Models. (I am going to refer to this service as AMLHAM - this is not Microsoft's acronym, mind you, and despite its sounding like a brand name for an unhealthy luncheon meat, it's still better than typing the full name out each time.)

AMLHAM is the output of an internal project at Microsoft with the nerdy name of Project Brainwave, and it's all based on a hardware technology called Field Programmable Gate Arrays, or FPGAs. Let's take a look at these terms one at a time and see if we can't figure it all out.

Gimme an F, gimme a P
An FPGA is essentially a programmable chip - that is to say, a chip that allows the customer to specify how it should be wired. Since a chip is made up of a huge array of logic gates, and since the programming is done not at the factory, but by the customer (and, therefore, "in the field"), we end up with the name we have.

Because an FPGA is not hard-wired for its particular application at the factory, its manufacture is more generic (and cheaper) than an application-specific integrated circuit (ASIC). But because the algorithm implemented in its custom programming is nonetheless hardware-based, the FPGA can significantly accelerate performance for that algorithm, compared to software-only implementations. And, as it turns out, machine learning algorithms are among those that FPGAs can turbo-charge. And that's how an FPGA-based architecture for deployed ML models leads to a service called Azure Machine Learning Hardware Accelerated Models.

What's the code?
But how do we program the FPGAs in the first place? It turns out that isn't much easier than designing chips outright, even if the hardware itself can be manufactured in higher volumes. That's where Project Brainwave comes in: it can actually take a deep learning model and "compile" it into the instructions necessary to program the FPGA to implement that model.

Microsoft says that FPGA acceleration of models can actually be a good bit faster than GPU acceleration, so AMLHAM has the potential to create a super-fast AI infrastructure. And, even better than fast is cheap: Microsoft says that FPGA-accelerated models can deliver a 5x better price/performance ratio than would be possible without them.

First things first
AMLHAM won't initially deliver compilation of arbitrary deep learning models onto FPGAs. Instead, it will offer an FPGA-accelerated ResNet50 model, which can be used in a range of image processing applications. But that's just the beginning.
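
To make that concrete, here's a minimal sketch of what calling a hosted, FPGA-accelerated ResNet50 endpoint might look like from Python. The endpoint URL, key and response shape below are hypothetical placeholders for illustration, not Microsoft's actual API:

    import requests

    # Hypothetical endpoint and key -- placeholders, not Microsoft's actual API.
    SCORING_URL = "https://example-region.hosted-models.example.com/score"
    API_KEY = "<your-service-key>"

    def classify_image(image_path):
        """Send raw image bytes to the hosted model and return its JSON prediction."""
        with open(image_path, "rb") as f:
            image_bytes = f.read()
        response = requests.post(
            SCORING_URL,
            data=image_bytes,
            headers={
                "Authorization": "Bearer " + API_KEY,
                "Content-Type": "application/octet-stream",
            },
            timeout=30,
        )
        response.raise_for_status()
        return response.json()  # e.g., class labels with confidence scores

    print(classify_image("dog.jpg"))

The point, in any case, is that to the calling application an FPGA-backed model is just a fast endpoint; the hardware acceleration is invisible.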

By the way, Google Cloud Platform's Cloud TPUs also enable hardware-accelerated models, but TPUs are specific to Google's TensorFlow deep learning framework and do not work generically across other algorithm libraries, according to Microsoft. AMLHAM, on the other hand, is framework-agnostic.

Your packages have arrived
In addition to the AMLHAM-based ResNet50 model, Microsoft is also rolling out a preview of Azure Machine Learning "packages" for vision, text and forecasting. These packages are not full-blown models with simple APIs the way Azure Cognitive Services are, nor are they raw algorithms, such as those offered in CNTK or TensorFlow. Instead, they are models that can be customized for particular applications.

As an example, consider that a Cognitive Service for vision might operate at a scope where it recognizes people, animals and things, and therefore could scan your photo and tell you if there's a dog in it. But a vision package could be customized to the narrower use case of dogs only, and could then scan a photo and identify the breed of a dog in the photo. The packages are distributed as pip-installable extensions to Azure Machine Learning.
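
Microsoft hasn't published the packages' APIs in detail, but the underlying idea is transfer learning, and the general pattern is easy to show. Here's a sketch in PyTorch/torchvision terms (a stand-in, not the Azure ML package API): take a general-purpose, pre-trained ResNet50 and retrain only its final layer on breed-labeled dog photos.

    import torch
    import torch.nn as nn
    from torchvision import models

    NUM_BREEDS = 120  # assumed size of your breed-labeled dataset

    # Start from a general-purpose model pre-trained on ImageNet...
    model = models.resnet50(pretrained=True)

    # ...freeze its general feature-extraction layers...
    for param in model.parameters():
        param.requires_grad = False

    # ...and replace the final classifier with one sized for the narrow task.
    model.fc = nn.Linear(model.fc.in_features, NUM_BREEDS)

    # Only the new layer's weights get trained on the dog photos.
    optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()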

And more
Beyond the announced previews, the Day 1 Keynote is set to demonstrate a number of other AI breakthroughs:

  • Azure Machine Learning/IoT (Internet of Things) integration, showing how machine learning, scoring and inferencing can be done at the edge (on the IoT device), and not just in the cloud
  • A new Azure Machine Learning SDK and Hyperparameter Tuning (which allows machine learning algorithm parameter values to be optimized and set automatically; a small illustrative sketch follows this list)
  • Deployment of Azure Machine Learning models to Azure Container Instances, Azure Kubernetes Service and Azure Batch AI, for training and scoring
  • A Web-hosted user interface for experimentation management, which will remove the dependency on the standalone Azure ML Workbench application for said functionality
  • Integration of Azure Databricks and Azure Machine Learning, using the new SDK mentioned above - this will allow Spark MLlib-based machine learning models to be trained and deployed into the Azure Machine Learning environment
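
Hyperparameter tuning is easiest to see with a small example. The sketch below uses scikit-learn's GridSearchCV as a stand-in, since the new SDK's tuning API wasn't detailed at press time, but the idea is the same: declare candidate parameter values and let the tooling find the best combination automatically.

    from sklearn.datasets import load_iris
    from sklearn.model_selection import GridSearchCV
    from sklearn.svm import SVC

    X, y = load_iris(return_X_y=True)

    # Candidate hyperparameter values; the search cross-validates
    # every combination and keeps the best-scoring one.
    param_grid = {
        "C": [0.1, 1, 10],
        "kernel": ["linear", "rbf"],
    }

    search = GridSearchCV(SVC(), param_grid, cv=5)
    search.fit(X, y)

    print(search.best_params_)  # e.g., {'C': 1, 'kernel': 'rbf'}
    print(search.best_score_)   # mean cross-validated accuracy for that combination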

Cosmic, continued...
Had enough? We're not done yet, because Microsoft made a slew of announcements today around Azure Cosmos DB, the company's cloud-based globally distributed multi-model database.

To me, the two biggest of these announcements are previews of a "Multi-Master" capability, at global scale, and throughput provisioning for sets of containers.

The Multi-Master feature allows writes to be made in multiple regions and synchronized across them, with consistency guarantees intact. In case you didn't know, that's hard to do. Once this feature reaches general availability, Microsoft will likely use it as part of a campaign to displace a lot of Amazon DynamoDB and Google Cloud Spanner business.
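
As a rough sketch of what opting in might look like from the client side, here's a connection using the azure-cosmos Python SDK; the account URL and key are placeholders, and the exact keyword arguments can vary by SDK version:

    from azure.cosmos import CosmosClient

    client = CosmosClient(
        "https://your-account.documents.azure.com:443/",  # placeholder account URL
        credential="<your-account-key>",                  # placeholder key
        # Enable writes in any region of a geo-replicated account...
        multiple_write_locations=True,
        # ...and favor the regions nearest this client.
        preferred_locations=["West US 2", "East US"],
    )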

The provisioning feature allows throughput to be allocated for a database in aggregate, instead of for each individual container (table). That should make Cosmos DB more affordable for smaller databases, where the minimum required throughput per container, multiplied by the number of containers, often added up to more provisioned throughput (and a bigger bill) than was actually necessary. Removing that per-container overhead should help spur greater Cosmos DB adoption, since it will no longer make the service cost-prohibitive for smaller projects.
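
In sketch form (again using the azure-cosmos Python SDK as an assumed stand-in, with placeholder credentials), the new pattern provisions throughput once, on the database, and lets containers created without their own throughput draw from that shared pool:

    from azure.cosmos import CosmosClient, PartitionKey

    client = CosmosClient("https://your-account.documents.azure.com:443/",
                          credential="<your-account-key>")  # placeholders

    # Provision 10,000 RU/s once, at the database level...
    database = client.create_database_if_not_exists("app-db", offer_throughput=10000)

    # ...then create containers with no throughput of their own; each draws
    # from the shared pool rather than carrying a per-container minimum.
    for name in ["users", "orders", "audit-log"]:
        database.create_container_if_not_exists(
            id=name, partition_key=PartitionKey(path="/id"))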

Hey, hey, we need some GA
It's not all about previews, though. In addition, Microsoft announced general availability of three new Cosmos DB features:

  • A bulk executor library
  • An async Java SDK
  • A VNET service endpoint

During Microsoft's Q3 earnings call a week ago, Microsoft CEO Satya Nadella stated that Cosmos DB exceeded $100 million in annualized revenue, and did it in less than a year. He also stated he'd "never seen a product that's gotten to this kind of scale this quickly."

That's impressive, but there's still a long way to go. The announcements at Build today should help, quite a lot, as the new provisioning patterns will make the service more economically practical to more organizations and will encourage more tinkering by developers.

AI, AI, go!
AI has a long way to go, too, and the Azure Machine Learning platform is in many ways still immature and incomplete (which is not to say that competing offerings are much better). But today's announcements are rounding things out, and helping to achieve a unification of Azure's Cognitive Services, Machine Learning and analytics technologies. When that unification is complete, adoption rates will be in a position to snowball.

This year, the name "Build" is quite apropos.
