
AI workloads are coming to a virtual machine near you, powered by GPUs and Kubernetes

Run:AI offers a virtualization layer for AI workloads, aiming to simplify AI infrastructure. It's seeing lots of traction and just raised a $75M Series C funding round. Here's how the evolution of the AI landscape has shaped its growth.
Written by George Anadiotis, Contributor

Run:AI offers a virtualization layer to run AI workloads on

Photo by Holger Link on Unsplash

Run:AI takes your AI and runs it on the super-fast software stack of the future. That was the headline of our 2019 article on Run:AI, which had just exited stealth at the time. We like to think it remains accurate, and Run:AI's unconventional approach has seen rapid growth since.

Run:AI, which touts itself as an "AI orchestration platform", today announced that it has raised a $75M Series C round led by Tiger Global Management and Insight Partners, which led the previous Series B round. The round includes participation from existing investors TLV Partners and S Capital VC, bringing the total funding raised to date to $118M.

We caught up with Omri Geller, Run:AI CEO and co-founder, to discuss AI chips and infrastructure, Run:AI's progress, and the interplay between them.

Also: H2O.ai brings AI grandmaster-powered NLP to the enterprise

AI Chips are cool, but Nvidia GPUs rule

Run:AI offers a software layer called Atlas to speed up machine learning workload execution, on-premise and in the cloud. Essentially, Atlas functions as a virtual machine for AI workloads: it abstracts and streamlines access to the underlying hardware.

That sounds like an unorthodox solution, considering that conventional wisdom for AI workloads dictates staying as close to the metal as possible to squeeze maximum performance out of AI chips. However, there are benefits to having something like Atlas mediate access to the underlying hardware.

In a way, it's an age-old dilemma in IT, playing out once again. In the early days of software development, the dilemma was whether to program using low-level languages such as Assembly or C or higher-level languages such as Java. Low-level access offers better performance, but the flip side is complexity.

A virtualization layer for AI hardware offers the same benefits in terms of abstraction and ease of use, plus others that come from streamlining access to the hardware, such as analytics on resource utilization and the ability to place workloads on the most appropriate hardware.

However, we have to admit that although Run:AI has made lots of progress since 2019, it did not progress exactly as we thought it might. Or as Geller himself thought, for that matter. Back in 2019, we saw Run:AI as a way to abstract over many different AI chips.

Initially, Run:AI supported Nvidia GPUs, with the goal of adding support for Google's TPUs and other AI chips in subsequent releases. There has been ample time since, yet Run:AI Atlas still supports only Nvidia GPUs. Given that the platform has evolved significantly in other ways, this was clearly a strategic choice.

The reason, as per Geller, is simple: market traction. Nvidia GPUs are by and large what Run:AI clients are still using for their AI workloads. Run:AI itself is seeing lots of traction, with clients such as Wayve and the London Medical Imaging and AI Centre for Value Based Healthcare, across verticals such as finance, automotive, healthcare, and gaming.

Today, there is ample choice beyond Nvidia GPUs for AI workloads. The options range from cloud vendor solutions developed in-house, such as Google's TPUs or AWS' Graviton and Trainium, to independent vendors such as Blaize, Cerebras, Graphcore or SambaNova, Intel's Habana-based instances on AWS, or even using CPUs.

However, Geller's experience from the field is that organizations are not just looking for a cost-efficient way to train and deploy models. They are also looking for a simple way to interact with the hardware, and this is a key reason why Nvidia still dominates. In other words, it all comes down to the software stack. This is in line with what many analysts point out as well.

Still, we wondered whether the promise of superior performance might lure organizations toward alternative AI chips, or whether Nvidia's competitors have managed to close the gap in the evolution and adoption of their software stacks.

Geller's experience is that while custom AI chips may attract organizations with workloads that have specific performance profiles, their mainstream adoption remains low. What Run:AI does see, however, is more demand for non-Nvidia GPUs. Whether it's AMD's MI200 or Intel's Ponte Vecchio, Geller sees organizations looking to utilize more GPUs in the near future.

Kubernetes for AI

Nvidia's dominance is not the only reason why Run:AI's product development has turned out the way it has. Another trend that shaped Run:AI's offering was the rise of Kubernetes. Geller thinks that Kubernetes is one of the most important pieces in building an AI stack, as containers are heavily used in data science -- and beyond.

However, Geller went on to add, Kubernetes was not built to run high-performance workloads on AI chips -- it was built to run services on classic CPUs. Therefore, Kubernetes is missing many of the pieces needed to run AI applications in containers efficiently.

It took Run:AI a while to identify that. Once they did, however, the decision was to build their software as a plugin for Kubernetes, creating what Geller called "Kubernetes for AI". To avoid vendor-specific choices, Run:AI kept its Kubernetes architecture widely compatible. Geller said the company has partnered with all Kubernetes vendors, and users can use Run:AI regardless of which Kubernetes platform they are running.
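To make the "plugin for Kubernetes" idea concrete, here is a minimal sketch of how a GPU workload is typically expressed in Kubernetes: the pod requests the standard nvidia.com/gpu extended resource and points at a non-default scheduler, which is the hook an AI-aware scheduling layer can plug into. The scheduler name, container image, and job names below are illustrative assumptions, not Run:AI's actual API.

```python
# Minimal sketch of submitting a GPU-bound pod via the Kubernetes Python client.
# The scheduler name and image are placeholders, not Run:AI's actual API.
from kubernetes import client, config


def submit_training_pod(namespace: str = "default") -> None:
    config.load_kube_config()  # reads credentials from ~/.kube/config

    container = client.V1Container(
        name="trainer",
        image="nvcr.io/nvidia/pytorch:23.10-py3",  # example training image (assumption)
        command=["python", "train.py"],
        resources=client.V1ResourceRequirements(
            # Standard extended resource exposed by Nvidia's device plugin:
            # the pod asks for one whole GPU.
            limits={"nvidia.com/gpu": "1"}
        ),
    )
    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(name="train-job", labels={"app": "training"}),
        spec=client.V1PodSpec(
            # A custom scheduler is where an "AI-aware" scheduling plugin
            # would slot in; the name here is a placeholder.
            scheduler_name="custom-ai-scheduler",
            restart_policy="Never",
            containers=[container],
        ),
    )
    client.CoreV1Api().create_namespaced_pod(namespace=namespace, body=pod)


if __name__ == "__main__":
    submit_training_pod()
```

Because the whole interaction goes through standard Kubernetes objects, a scheduling layer added this way works on any conformant Kubernetes distribution, which is consistent with Geller's point about staying vendor-neutral.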

Over time, Run:AI has built a notable partner ecosystem, including the likes of Dell, HP Enterprise, Nvidia, NetApp and OpenShift. In addition, the Atlas platform has evolved in both breadth and depth. Most notably, Run:AI now supports both training and inference workloads. Since inference typically accounts for the bulk of the operational costs of AI in production, this is really important.

In addition, Run:AI Atlas now integrates with a number of machine learning frameworks, MLOps tools, and public cloud offerings. These include Weights & Biases, TensorFlow, PyTorch, PyCharm, Visual Studio and JupyterHub, as well as Nvidia Triton Inference Server and NGC, Seldon, Airflow, Kubeflow and MLflow.

Also: Rendered.ai unveils Platform as a Service for creating synthetic data to train AI models

Even frameworks that are not pre-integrated can be integrated relatively easily, as long as they run in containers on top of Kubernetes, Geller said. As far as cloud platforms go, Run:AI works with all three major cloud providers (AWS, Google Cloud and Microsoft Azure), as well as on-premises. Geller noted that hybrid cloud is what they see in customer deployments.


Run:AI sees AI infrastructure as a stack of layers

Run:AI

Even though the realities of the market Run:AI operates in upended some of the initial planning, pushing the company to pursue more operationalization options rather than support for more AI chips, that does not mean there have been no advances on the technical front.

Run:AI's main technical achievements go by the names of fractional GPU sharing, thin GPU provisioning, and job swapping. Fractional GPU sharing enables running many containers on a single GPU while keeping each container isolated and without code changes or performance penalties.

What VMware did for CPUs, Run:AI does for GPUs, in a container ecosystem under Kubernetes, without hypervisors, as Geller put it. As for thin provisioning and job swapping, these enable the platform to identify which applications are not using their allocated resources at any given time, and to dynamically reallocate those resources as needed.
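To illustrate the idea, here is a conceptual sketch of fractional allocation and thin provisioning under stated assumptions: jobs request a fraction of a GPU, an allocator packs them onto physical devices, and capacity held by idle jobs can be reclaimed for new work. All names and the packing strategy are assumptions for illustration; this is not Run:AI's implementation.

```python
# Conceptual sketch (not Run:AI's code) of fractional GPU packing with
# reclamation of allocations held by idle jobs ("thin provisioning").
from dataclasses import dataclass, field


@dataclass
class Gpu:
    device_id: int
    capacity: float = 1.0  # a whole GPU is 1.0
    allocations: dict = field(default_factory=dict)  # job name -> fraction

    def free(self) -> float:
        return self.capacity - sum(self.allocations.values())


def allocate(gpus: list[Gpu], job: str, fraction: float, idle_jobs: set[str]) -> Gpu | None:
    """Place a fractional request on the first GPU with room,
    reclaiming capacity held by jobs reported as idle."""
    for gpu in gpus:
        # Reclaim allocations belonging to idle jobs before checking capacity.
        for stale in [j for j in gpu.allocations if j in idle_jobs]:
            del gpu.allocations[stale]
        if gpu.free() >= fraction:
            gpu.allocations[job] = fraction
            return gpu
    return None  # no capacity available: the job waits in the queue


if __name__ == "__main__":
    cluster = [Gpu(0), Gpu(1)]
    allocate(cluster, "notebook-a", 0.25, idle_jobs=set())
    allocate(cluster, "notebook-b", 0.50, idle_jobs=set())
    # notebook-a has gone idle, so its quarter GPU can be reclaimed if needed.
    allocate(cluster, "training-job", 0.75, idle_jobs={"notebook-a"})
    print([(g.device_id, g.allocations) for g in cluster])
```

The sketch only captures the bookkeeping side; the hard part in practice is enforcing isolation and avoiding performance penalties between containers sharing a device, which is what Run:AI's fractional GPU feature claims to handle.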

Notably, Run:AI was included in the Forrester Wave AI Infrastructure report published in Q4 2021. The company holds a unique position in the AI infrastructure landscape, which also includes cloud vendors, Nvidia, and GPU OEMs.

All of them, Geller said, are Run:AI partners, as they represent infrastructure to run applications on. Geller sees this as a stack, with hardware at the bottom layer, an intermediate layer that acts as the interface for data scientists and machine learning engineers, and AI applications on the top layer.

Run:AI is seeing good traction, growing its Annual Recurring Revenue by 9x and staff by 3x in 2021. The company plans to use the investment to further grow its global teams and will also be considering strategic acquisitions as it develops and enhances its platform.
