
Nvidia, VMware partner to offer virtualized GPUs

The new product enables enterprises to virtualize AI, machine learning and deep learning workloads, either on premises or as part of VMware Cloud on AWS.
Written by Stephanie Condon, Senior Writer

Nvidia and VMware on Monday announced a new software product that lets customers virtualize GPUs, either on premises or as part of VMware Cloud on AWS. The companies say it's the first hybrid cloud offering that lets enterprises use GPUs to accelerate AI, machine learning or deep learning workloads.

"In a modern data center, organizations are going to be using GPUs to power AI, deep learning, analytics," John Fanelli, VP of product management for Nvidia, told reporters. "And due to the scale of those types of workloads, they're going to be doing some processing on premise in data centers, some processing in clouds and continually iterating between them."

The new offering starts with the enterprise data center product -- Nvidia's new Virtual Compute Server (vComputeServer) software. It's optimized for VMware's vSphere and will be available from all major OEMs, including Cisco, Dell, HPE, Lenovo and Supermicro. 

"We're now able to bring AI to the data center in a way that data center IT admins run and design their data centers -- namely, in a virtualized environment," Fanelli said. 

The vComputeServer was announced at the VMworld conference in San Francisco, attended by the sort of enterprises that are hiring more data scientists to gain insight from proprietary and public data. They may already be using Nvidia's Rapids, Fanelli said -- a suite of data processing and machine learning libraries that enables GPU acceleration for data science workflows.
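
For readers unfamiliar with Rapids, here is a minimal sketch of the kind of workflow it accelerates, using cuDF (one of the Rapids libraries) to run a pandas-style aggregation on the GPU. The file and column names are hypothetical, and the sketch assumes a CUDA-capable GPU with the Rapids packages installed.

# Minimal sketch of a GPU-accelerated data science step with Rapids cuDF.
# Assumes a CUDA-capable GPU and the Rapids packages; the CSV path and
# column names are hypothetical.
import cudf

# read_csv loads the data directly into GPU memory
df = cudf.read_csv("transactions.csv")

# Familiar pandas-style operations execute on the GPU
summary = (
    df.groupby("customer_id")["amount"]
      .agg(["sum", "mean", "count"])
      .sort_values("sum", ascending=False)
)

print(summary.head())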

vComputeServer supports Rapids and containers, as well as GPU sharing and GPU aggregation, which lets a single user combine multiple GPUs for larger jobs.

The technology will be licensed on a per-GPU basis. Virtualization typically results in less than a 5 percent performance impact, Fanelli said, though the exact impact depends on the workload. It's a trade-off, he said, that customers should be willing to make for the benefits of virtualization, such as manageability and flexibility.

Meanwhile, VMware plans to make vComputeServer available on VMware Cloud on AWS, which (as the name suggests) runs VMware's software-defined data center (SDDC) stack on the Amazon Web Services cloud. VMware Cloud on AWS allows customers to run applications across public, private or hybrid cloud environments. The service is optimized to run on dedicated, bare-metal AWS infrastructure.

"We can enable our customers to deliver rich analytics insight, and ml, AI and other data analytics capabilities on the mass data sets our customers are typically accustomed to having in cloud environments," Ivan Oprencak, director of product marketing for VMware, said to reporters. "Data scientists will be able to take advantage of the cloud environment to scale up their AI, ML and other data analytics applications."

AI is advancing rapidly within the enterprise -- by Gartner's count, more than half of organizations already have at least one AI deployment in operation, and they're planning to substantially accelerate their AI adoption within the next few years. 

While VMware worked specifically with Nvidia on the vComputeServer offering, it has taken other steps to accommodate the growing number of AI workloads. Last month, the company announced plans to acquire Bitfusion, an Austin, Texas-based company that makes elastic virtualization software for hardware accelerators such as GPUs.
