
OctoML scores $28M to go to market with open source Apache TVM, a de facto standard for MLOps

The open source Apache TVM project is becoming a de facto standard in MLOps, and OctoML is gearing up to commercialize and scale it
Written by George Anadiotis, Contributor

MLOps is the art and science of taking machine learning models from the data science lab to production. It's been a hot topic for the last couple of years, and with good reason: going from innovation to scalability and repeatability is the hallmark of generating business value, and MLOps represents precisely that for machine learning.

Apache TVM is a key open source project in MLOps, used by the likes of Amazon, AMD, ARM, Facebook, Intel, Microsoft, and Qualcomm. OctoML is the company set up by founding members of the TVM project to commercialize and scale it up.

OctoML today announced it has raised a $28 million Series B funding round, bringing the company's total amount raised to $47 million. Addition led the round with participation from existing investors Madrona Venture Group and Amplify Partners.

ZDNet connected with Luis Ceze, CEO and co-founder of OctoML, to discuss the past, present, and future of TVM, OctoML, and MLOps.

TVM - A virtual machine for machine learning

TVM stands for Tensor Virtual Machine and it started as a research project at the University of Washington about five years ago. The vision was to bridge the gap between a growing set of machine learning models, a growing set of machine learning frameworks, and a growing set of hardware targets.

Hardware vendors like Nvidia or Intel each have their own libraries, frameworks, and software stacks, and they also need to support frameworks such as TensorFlow or PyTorch. This creates a combinatorial explosion, and it's hard for everyone to keep up.

TVM set out to create a clean abstraction on top of different hardware targets. The idea was that data scientists should be able to express their models in the framework of their choice and get them running where they need them to run, without a whole lot of manual work and manual optimization.

Despite all the progress in creating models, it still takes quite a bit of work to get a model ready for production because you need to hit performance targets, Ceze noted. And that's the reason why TVM got quite a bit of traction over the last five years. TVM does not just offer portability -- it also offers good performance, Ceze went on to add.

TVM essentially is a compiler and a runtime system. It takes machine learning models as inputs and produces executables highly optimized for the target platform, without the need to involve a bunch of software engineers for months.


Apache TVM uses machine learning to optimize machine learning models

The way TVM does this is a bit of an inception scheme, as it uses machine learning to optimize machine learning models. TVM explores the combinatorial space of deployment options and tries to identify the most performant configuration for the environment at hand.
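The core idea can be illustrated with a toy sketch in plain Python (this is not TVM's actual API; the workload and names here are hypothetical): enumerate candidate "schedules" for a workload -- in this case, block sizes for a blocked matrix multiply -- measure each one, and keep the fastest. Real TVM searches a vastly larger space and uses a learned cost model to avoid having to measure every candidate.

```python
import random
import time

N = 64  # small matrices so the toy search finishes quickly

def make_matrix(n):
    return [[random.random() for _ in range(n)] for _ in range(n)]

def blocked_matmul(a, b, block):
    """Multiply a and b using a blocked (tiled) loop nest."""
    n = len(a)
    c = [[0.0] * n for _ in range(n)]
    for ii in range(0, n, block):
        for kk in range(0, n, block):
            for i in range(ii, min(ii + block, n)):
                row_c = c[i]
                for k in range(kk, min(kk + block, n)):
                    aik = a[i][k]
                    row_b = b[k]
                    for j in range(n):
                        row_c[j] += aik * row_b[j]
    return c

def measure(block, a, b):
    """Time one run of the workload under a candidate schedule."""
    start = time.perf_counter()
    blocked_matmul(a, b, block)
    return time.perf_counter() - start

a, b = make_matrix(N), make_matrix(N)
candidates = [4, 8, 16, 32, 64]  # the (tiny) search space of schedules
timings = {blk: measure(blk, a, b) for blk in candidates}
best = min(timings, key=timings.get)
print(f"fastest block size: {best}")
```

Which block size wins depends on the machine's cache hierarchy, which is exactly the point: the best schedule is hardware-dependent, so an automated search beats hand-tuning for every target.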

This sounds like a good idea, and TVM has been successful in capitalizing on it. At this point, Ceze said, platform enablement is a key part of what TVM does. Hardware vendors want to enable their hardware to work well with TVM. Cloud vendors, in turn, work with TVM since it supports the hardware they offer, and machine learning framework creators support it as well.

It's a virtuous circle of sorts, and Ceze noted that besides validating TVM's key ideas, this goes to show the power of open-source ecosystems:

"Models are evolving very fast. Frameworks are evolving fast, the hardware is evolving fast. No single entity can provide a solid vertical and be able to do that in a reasonable way. With TVM, given it's open source, people contribute from all sorts of angles. I'm very pleased with how the community is mobilized around this, and the community keeps growing because this makes it future proof."

OctoML - commercializing TVM

In December 2020, the TVM community brought about 1,000 people together, Ceze said. Community growth, as well as commercial adoption, was what led Ceze and his co-founders to start OctoML in 2019. The founding team includes former University of Washington researchers who got their PhDs working on TVM, plus Jason Knight, former Head of Software Product at Intel.

OctoML built a product called the Octomizer, which essentially runs TVM as a service. There are a number of reasons why people use the Octomizer rather than TVM itself, Ceze said. The first is ease of use: there's nothing to install or configure, and that's a big win.

That also applies to hardware: when targeting a variety of hardware for deployment, a test environment has to be created for each target, and that comes ready to go with the Octomizer. Last but not least, precisely because TVM uses machine learning for machine learning, it needs data to work well, and OctoML has that data.

Ceze said the Octomizer has really good traction -- about 1,000 signups, which OctoML is working on onboarding -- and that was actually the reason they decided to raise a Series B. OctoML still has funds from its Series A in the bank, he added, but the momentum was too good to pass up.


OctoML commercializes Apache TVM, and seems to have found a sweet spot and created an open source ecosystem for MLOps

Besides growing the company from its current headcount of about 45 people to a projected 70 by the end of the year, OctoML has more plans for TVM. In terms of hardware, the goal is to bring TVM's support for Raspberry Pi to the Octomizer, as this will enable many use cases that involve deploying AI models on the edge, typically in IoT scenarios.

Another direction for the development of TVM is support for training machine learning models, beyond inference which is already supported. Although training machine learning models is quite well served, it's still very computationally intensive. The goal is to use TVM's magic to optimize the computational cost of training workloads.

The machine learning TVM will handle for training won't be that different from inference in terms of data types and algebra, but everything above that layer will be, Ceze noted. The Octomizer is a full automation system for benchmarking, packaging, and optimization, to which continuous integration / continuous deployment (CI/CD) capabilities will be added.

When discussing the use of open source TVM versus the Octomizer, Ceze noted that while it's entirely possible to use the fully open source version of TVM, and many organizations do that, it's a complicated platform. Lots of knowledge, as well as data, on TVM's usage is distilled in the Octomizer, he went on to add.

TVM seems to have hit a sweet spot, addressing a previously unmet need in the MLOps space. OctoML on its part seems to be executing on a well-founded open source commercialization strategy, leveraging community plus a software-as-a-service offering, with a twist of hardware thrown in. TVM looks like a de facto standard that's here to stay.
