Innovation

AI startup Petuum aims to industrialize machine learning

Pittsburgh-based Petuum, backed by SoftBank, has developed novel tools for parallelizing machine learning operations across computers. The software could help break the bottlenecks IT encounters in scaling up AI across industries.
Written by Tiernan Ray, Senior Contributing Writer

The past thirty years of machine learning breakthroughs are intimately entwined with a big idea in computing: parallel distributed processing, where parts of a program run simultaneously on multiple processors to speed computation.

One AI researcher-turned-entrepreneur believes the field needs a lot more savvy about parallelism, to make parallelizing AI dead simple.

Eric Xing, a Carnegie Mellon professor of machine learning, three years ago founded Petuum, based in Pittsburgh, which has received $108 million in funding from Japanese conglomerate SoftBank, along with Advantech Capital, Chinese computing giant Tencent, Northern Light Venture Capital, and Oriza Ventures.

Also: Can IBM possibly tame AI for enterprises?

The company plans to ship the first version of its AI platform software next summer, an offering Xing hopes will "industrialize" machine learning, thereby making it more reliable and more broadly available.


Petuum founder and CEO Eric Xing came up with the idea for the AI software while on sabbatical from Carnegie Mellon at Facebook in 2010.

Much of the challenge of AI is a systems engineering challenge, and at the heart of that is a problem of parallelizing the running of algorithms across all kinds of configurations of machines.

"When you deploy algorithms, you need to maintain it, you need to update it, change it," Xing told ZDNet.

"That is the very bottleneck of getting AI accessible," he says, "for companies that aren't Google or Microsoft, that don't have armies of engineers, for traditional IT teams.

"There is a shortage of talent, and there is little to no history of building AI teams within most companies."


Petuum is trying to make it easier to achieve either data or model parallelism, or both, across many computers.

"Other companies want Lego pieces, they want building blocks of machine learning solutions. AI needs to be industrialized, and there need to be standards - we want to be the front-runners of such a culture."

Also: Google says 'exponential' growth of AI is changing nature of compute

The platform, laid out extensively in a 2015 paper by Xing and colleagues published in IEEE Transactions on Big Data, automatically breaks programs apart in two different ways.

One is "data parallelism." That is, of course, a very popular approach already in AI. Training, and in some cases inference, in machine learning, is sped up by sending different pieces of data to different processors, either CPUs or, more commonly, GPUs. Each processor trains the neural network using its portion of the total data set, and the parameters of the network, the weights, are updated across all those slices of data.

Another approach, less common and more difficult to engineer, is to split the network itself into pieces across processors, known as "model parallelism." These problems of parallelism have been a focus of computer science for decades. For machine learning programs written in Google's TensorFlow, or the popular Caffe framework, Petuum's software can automatically achieve either data or model parallelism, or a combination of both.
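Model parallelism, by contrast, partitions the model itself rather than the data. A minimal sketch, with hypothetical names, of a two-layer network whose layers live on different "devices":

```python
# Illustrative model parallelism: the model is split across workers, so
# each "device" owns one layer's weights and activations flow between them.

class LayerWorker:
    """Holds one layer's weights, as a separate device would."""
    def __init__(self, weight):
        self.weight = weight

    def forward(self, x):
        return max(0.0, self.weight * x)  # ReLU(w * x)

# Partition the model: layer 1 lives on device_a, layer 2 on device_b.
device_a = LayerWorker(weight=3.0)
device_b = LayerWorker(weight=0.5)

def forward(x):
    h = device_a.forward(x)     # computed on device A
    return device_b.forward(h)  # activation shipped to device B
```

The engineering difficulty the article alludes to lives in that hand-off: every activation (and, during training, every gradient) must cross a device boundary, so communication and scheduling dominate the design.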

Also: Intel-backed startup Paperspace reinvents dev tools for an AI, cloud era

The key insight of that work is that machine learning, unlike most other software, is probabilistic rather than strictly deterministic. As such, it has three advantages other kinds of software don't have when it comes to parallelism: it can tolerate a degree of error in individual parts of the program's computation; the dependencies between parts of the program are dynamic, changing as the program runs; and different parts "converge" on a solution to the given problem at different rates.
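One mechanism from the distributed machine learning literature that exploits this error tolerance is bounded-staleness synchronization: workers are allowed to drift a few clock ticks apart instead of synchronizing at every step, so parameter reads are slightly stale yet training still converges. A simplified sketch of the admission check, with hypothetical names:

```python
# Bounded staleness: a worker may start its next iteration only if doing
# so keeps it within `staleness` clock ticks of the slowest worker.

def may_advance(worker, clocks, staleness):
    # Advancing takes the worker to clocks[worker] + 1; allow it only if
    # that stays within `staleness` ticks of the minimum clock.
    return (clocks[worker] + 1) - min(clocks.values()) <= staleness

clocks = {"w0": 5, "w1": 3, "w2": 4}
# With staleness 2, w0 (already 2 ahead of w1) must wait, while w1 and w2
# may proceed; with unbounded staleness this degenerates to fully async.
```

The staleness bound is the dial between fully synchronous execution (staleness 0) and fully asynchronous execution, trading a little accuracy per step for much less waiting.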

Petuum has developed several tricks in its software to exploit those strengths. For example, a "parameter server" runs a scheduling protocol that chooses which parameters of the neural network to run in parallel, based on which parameters are only "weakly" correlated with one another and can therefore be updated independently.
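The scheduling idea can be illustrated with a greedy sketch: given a (hypothetical) correlation matrix over parameters, select a batch whose members are pairwise weakly correlated and can therefore be updated in parallel. This illustrates the concept, not Petuum's actual scheduler:

```python
# Greedy scheduling sketch: keep parameter i for the parallel batch only
# if its correlation with every already-selected parameter is below a
# threshold, so the selected updates interfere only weakly.

def schedule_parallel(corr, threshold):
    selected = []
    for i in range(len(corr)):
        if all(abs(corr[i][j]) < threshold for j in selected):
            selected.append(i)
    return selected

# Toy symmetric correlation matrix for four parameters.
corr = [
    [1.0, 0.9, 0.1, 0.2],
    [0.9, 1.0, 0.8, 0.1],
    [0.1, 0.8, 1.0, 0.1],
    [0.2, 0.1, 0.1, 1.0],
]
batch = schedule_parallel(corr, threshold=0.5)
```

Here parameter 1 is excluded because it is strongly correlated (0.9) with parameter 0, so the batch of parameters 0, 2, and 3 can be updated simultaneously with little interference.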

The results are a little reminiscent of the MapReduce big data framework, but Petuum argues its system has numerous advantages over MapReduce and other parallelizing infrastructure, such as Spark and GraphLab.


Petuum's parameter server makes decisions such as how to schedule work on different segments of a neural network based on dependencies between parameters.

(Much more documentation on the technology is available on the company's site.)

Xing had the epiphany that started the company while taking a sabbatical from Carnegie Mellon at Facebook in 2010.

"I was embarrassed at my own inability to deliver my models rapidly," he recalls. "I went back to CMU, and we started a research project on how to take a piece of existing machine learning code, and automatically make a parallel version for the data center."

Petuum is still working out how it will monetize the platform. Xing says it could include a licensing model that charges by the number of machines or users a client has working on a given AI system. In the meantime, Petuum is in the process of shipping some packaged software for vertical industries. The idea is to prove that "we are able to address non-trivial AI problems," he says. But it is also the beginning of what Xing hopes will be a marketplace of vertical solutions that can come from numerous parties - Lego bricks for industries.

Also: Fast.ai's software could radically democratize AI

One industry that is an early customer is healthcare. Hospitals are especially interesting to Xing because they may not have a dedicated AI team, and even if they do, their IT staff could be challenged by the need to deploy AI models on a range of hardware, from single laptops up to cloud infrastructure running numerous application containers.

"Where they have an IT team, they may sit in front of a UI and update the algorithms, but running on Petuum, they don't need to worry about how the data is distributed or run on different machines."

A first product of the healthcare effort is a system for automatically generating human-readable reports for doctors using data such as radiology scans, processed via reinforcement learning.

"This is not about classification," says Xing. "It is about summarizing knowledge into a one-pager, with a deeper understanding of medial information."


The Petuum software system architecture.

"You can increase diagnostic outcomes, you can speed up a doctor's work."

One outcome is the company's partnership, announced in September, with the Cleveland Clinic, to produce an Artificial Intelligence Diagnosis Engine (AIDE) that can "apply advanced machine learning algorithms to medical record data." The partnership is competing for IBM's "Watson AI Xprize."

Also: Facebook enlists AI to tweak web server performance

Of course, industrializing AI leaves open the question of whether such work will get closer to the Holy Grail of "artificial general intelligence."

Xing thinks the word "generalizability" has "been abused or overloaded."

"I don't think there is a single algorithm that can solve general AI," he says, "to process speech and also read pictures, that's impossible - that's not even a scientifically viable statement."

"But if you talk about from an engineering sense, about nuts and bolts that can be used in different places, then we can make these different building blocks that can be reused."

"The bigger problem," he says, "is a disconnect between scientists and engineers: the two are not giving insights to one another."
"A lot more needs to happen to bridge the gap. I don't believe that just inventing fancier and fancier models is the way to go. You still need engineers to translate the models into product."

Petuum has eight papers that have been accepted at the forthcoming NeurIPS conference on machine learning, which takes place next month in Montreal.

Previous and related coverage:

What is AI? Everything you need to know

An executive guide to artificial intelligence, from machine learning and general AI to neural networks.

What is deep learning? Everything you need to know

The lowdown on deep learning: from how it relates to the wider field of machine learning through to how to get started with it.

What is machine learning? Everything you need to know

This guide explains what machine learning is, how it is related to artificial intelligence, how it works and why it matters.

What is cloud computing? Everything you need to know

An introduction to cloud computing right from the basics up to IaaS and PaaS, hybrid, public, and private cloud.
