NVIDIA's VGX: Traction Control for Hosted Virtual Desktops

Hosted Virtual Desktops are a bit like driving in the snow. Every link in the chain between the data on a hard drive in the datacenter and the pixels on the user's screen introduces a delay that the user perceives as lag, and the laws of physics apply.
Written by David Johnson, Contributor

Driving in the snow is an experience normally reserved for those of us denizens of the northern climes who haven't yet figured out how to make a paycheck mixing Mai Tais in the Caymans. Behind the wheel in the snow, everything happens a little slower. Turn the wheel above 30 on the speedo and it could be a second or two before the car responds, and you'll overshoot the turn and take out the neighbor's shrubs.

Hosted Virtual Desktops are a bit like driving in the snow. Every link in the chain between the data on a hard drive in the datacenter and the pixels on the user's screen introduces a delay that the user perceives as lag, and the laws of physics apply. Too much lag or too much snow and it's hard to get anywhere, as citizens of Anchorage, Alaska, after this year's record snowfalls, or anyone trying to use a hosted virtual desktop half a world away from the server, will testify.

NVIDIA Brings Gaming Know-How to HVD

Last week I spent a day with NVIDIA's soft-spoken, enthusiastic CEO, Jensen Huang, who put the whole latency issue for VDI into practical perspective (thanks, Jensen). These days, he says, home game consoles run about 100-150 milliseconds from the time a player hits the fire button to the time they see their plasma cannon blast away an opponent on the screen. For comparison, the blink of an eye takes 200-400 milliseconds, and the best gamers can react to what they see on screen in as little as 50 milliseconds.

Latency in HVD is a Killer

What does video game lag have to do with Hosted Virtual Desktops? Latency in HVD is a killer. Firms I've spoken with in the past several months, each with thousands of users on Citrix or VMware View, report that the end user experience is the single most important factor in the success or failure of their deployments, and some spend tens of thousands of dollars to shave 50 milliseconds of lag from their HVD infrastructure and quiet the troops. This is where NVIDIA sees an opportunity for Graphics Processing Units (GPUs) to earn a place in the corporate datacenter.
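For a rough feel of where that lag comes from, here's a back-of-the-envelope budget in Python. The per-component numbers are placeholder assumptions of mine, not measurements from NVIDIA or anyone else; only the perception thresholds come from Jensen's figures above.

```python
# A back-of-the-envelope latency budget for a hosted virtual desktop session.
# The component figures are illustrative assumptions, not measurements; only
# the perception thresholds (50 ms, 100-150 ms, 200-400 ms) come from the
# numbers quoted above.

hvd_pipeline_ms = {
    "input capture + remoting protocol": 5,    # assumed
    "server-side render":                20,   # assumed; the piece a GPU attacks
    "frame encode":                      10,   # assumed
    "network round trip (WAN)":          60,   # assumed; dominates over distance
    "client decode + display":           15,   # assumed
}

total_ms = sum(hvd_pipeline_ms.values())
print(f"End-to-end lag: {total_ms} ms")

# Compare against the thresholds mentioned in the article.
print("Faster than a game console (100-150 ms)?", total_ms < 100)
print("Under the best gamers' ~50 ms reaction time?", total_ms < 50)
```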

The CPU is a Significant Bottleneck

Today, the server CPU is a bottleneck in rendering the screen, and as CPUs are pushed harder to pack more sessions onto each server, the lag starts to noticeably degrade the user experience. Why? The CPU is not optimized for the parallel processing that fast rendering requires. Of course, the datacenter hardware is not the main limiting factor for HVD performance (the network is), but there is still a lot of room for improvement. Enter the GPU.
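To see why per-pixel work punishes a general-purpose CPU, here's a toy sketch. It fills one 1080p frame pixel by pixel in a scalar loop, then again as a single vectorized operation; treat the vectorized version as a stand-in for the massive parallelism a GPU brings to bear, not as a benchmark of real desktop rendering.

```python
import time
import numpy as np

# Screen rendering is a data-parallel, per-pixel job. A CPU core walks the
# pixels one at a time; a GPU runs thousands of pixel computations at once.
# This toy gradient fill only contrasts a scalar Python loop with a
# vectorized fill of the same frame.

W, H = 1920, 1080  # one 1080p frame

def render_serial():
    frame = np.empty((H, W), dtype=np.uint8)
    for y in range(H):
        for x in range(W):
            frame[y, x] = (x + y) % 256   # one pixel at a time (slow)
    return frame

def render_parallel_style():
    ys, xs = np.mgrid[0:H, 0:W]
    return ((xs + ys) % 256).astype(np.uint8)  # whole frame in one parallel op

t0 = time.perf_counter()
render_serial()
t1 = time.perf_counter()
render_parallel_style()
t2 = time.perf_counter()
print(f"serial per-pixel loop: {t1 - t0:.2f} s, vectorized: {t2 - t1:.3f} s")
```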

GPUs Offload the Rendering

GPUs take the screen rendering workload off the CPU, freeing it to focus on other activities, like spreadsheet calculations. So why hasn't anyone applied GPUs to HVD workloads to date? Doing so is a massive technical challenge: it requires virtualizing the GPU itself and linking each HVD session to the physical GPU hardware.
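To make those two problems concrete, here's a deliberately simplified sketch of a virtual-GPU broker: it carves a physical GPU into a fixed number of slices and links each HVD session to a free one. The class names, the slice model, and the first-fit assignment are all my own illustrative assumptions; none of this is meant to describe how VGX actually works under the hood.

```python
# Conceptual only: carve physical GPUs into virtual slices and link each
# HVD session to one. Not a description of NVIDIA's implementation.

from dataclasses import dataclass, field
from typing import Optional

@dataclass
class PhysicalGPU:
    name: str
    slices: int                      # how many virtual GPUs it can back
    assigned: list = field(default_factory=list)

    def has_capacity(self):
        return len(self.assigned) < self.slices

@dataclass
class HVDSession:
    user: str
    vgpu: Optional[str] = None       # filled in once a slice is linked

def attach(session: HVDSession, gpus: list) -> bool:
    """Link an HVD session to the first physical GPU with a free slice."""
    for gpu in gpus:
        if gpu.has_capacity():
            gpu.assigned.append(session.user)
            session.vgpu = f"{gpu.name}/vgpu{len(gpu.assigned)}"
            return True
    return False                     # no GPU slice free; fall back to CPU rendering

gpus = [PhysicalGPU("kepler0", slices=8), PhysicalGPU("kepler1", slices=8)]
for user in ("alice", "bob", "carol"):
    s = HVDSession(user)
    attach(s, gpus)
    print(user, "->", s.vgpu)
```

The real engineering challenge, of course, is doing this on the silicon itself while keeping sessions isolated from one another, which is a far cry from a Python loop.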

NVIDIA Creates the First Virtualized GPU

NVIDIA's VGX was five years in development. It delivers the necessary pieces on top of the Kepler GPU hardware to cut the CPU out of the rendering process completely: a special video driver for the virtual desktop instance, plus components that provide virtual-GPU awareness for Citrix, VMware and Microsoft hypervisors. The result? I saw 100 unique, active, graphics-intensive HVD sessions running on a single 2U rack server. That was a first for me.

100 concurrent VDI sessions in the lab.

What it Means

So what? That's what I asked as well. Well, it turns out that CAD rendering and other GPU-intensive activities aren't practical on laptops because of the graphics horsepower required and the sheer size of the databases and files involved. By letting the heavy equipment stay in the datacenter, an engineer can show a customer their new project from a MacBook Air or an iPad instead of hauling around a boat anchor. It's a narrow use case, but it illustrates what's now possible.

Of course, nothing in the datacenter is going to resolve the issue of high-latency networks in hotels and on airplanes and make HVD sessions usable in every situation, just as traction control won't let you make a 90-degree turn at 65 mph in the snow. But the GPU hardware could pay for itself on the added HVD session density alone, which makes this a win/win. I like it.

A 2U server hosting the 100 VDI sessions. The enclosure on top is temporary space for the Kepler GPUs, which server OEMs will consolidate into future server chassis.
