Nvidia looking to surf data science wave into the data center

As enterprises shift to distributed forms of computing, Nvidia hopes to slide into the datacentre.
Written by Chris Duckett, Contributor

A wall of Nvidia blade servers

(Image: Nvidia)

One of the themes to surface from a week immersed in GTC is the belief that data science is the hottest ticket in town, and to many in attendance, Nvidia is driving the bus.

The company started out making graphics chips, and to uninformed observers its push into artificial intelligence might have looked like a weird deviation. Now it is bringing it all back together as graphics and AI merge.

The state of the art has reached the point where the company believes it can create photo-realistic images from sketches, and apart from a few boundary issues between elements, the results looked realistic enough.

AI inference is also being used to add ray tracing to Quake II.

"We are doing a lot of work in AI-inferred image generation. It is unquestionably the future," Nvidia CEO Jensen Huang told ZDNet on Tuesday.

But to focus merely on the pretty consumer side of the business is to ignore datacentre ambitions that should leave executives at companies like Intel sweating bullets. Nvidia believes that data science and neural networks need the massively parallel hardware it offers, and that the days of getting away with CPU-run neural networks are fading fast.

"My belief is that a lot of the inference, even still today, is still being run on CPUs -- I think there is some offline batch inference on Volta or legacy Pascal, but I think the vast majority of inference is running on CPUs," Nvidia general manager and vice president of accelerated computing Ian Buck told journalists on Wednesday.

"What we are seeing now is that the networks that people are deploying can no longer run on CPUs."

Using the example of a voice search to a search engine, Buck pointed out the different networks handling that request: A denoiser, an acoustic model, and a language model to process the input voice; another network to scan and process the web pages returned to the user; and a third to return a vocalisation of the result.

"That can't run on a CPU -- it simply isn't possible. If you tried to do that in real-time, it would take seconds, if not unusable. They have to execute all that in milliseconds," Buck said.

"So that's the shift that we are seeing, the networks are not able to run on the CPUs to get the accuracy or latency requirements."
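Buck's case is essentially a latency-budget argument: the stages of a voice search run back to back, so their latencies add up, and a slower processor multiplies the whole chain. A minimal sketch of that arithmetic, using the stage names from his example but entirely hypothetical millisecond figures:

```python
# Hypothetical per-stage inference latencies (milliseconds) for the voice
# search pipeline Buck describes. The numbers are illustrative only, not
# Nvidia benchmarks.
PIPELINE_MS = {
    "denoiser": 5,
    "acoustic_model": 20,
    "language_model": 30,
    "page_processing_network": 40,
    "vocalisation_network": 25,
}

def total_latency_ms(stages: dict, slowdown: float = 1.0) -> float:
    """Sum sequential stage latencies, scaled by a uniform slowdown factor."""
    return sum(stages.values()) * slowdown

gpu_ms = total_latency_ms(PIPELINE_MS)               # stages sum to 120 ms
cpu_ms = total_latency_ms(PIPELINE_MS, slowdown=20)  # assumed 20x slowdown

print(f"GPU-class pipeline: {gpu_ms:.0f} ms")
print(f"CPU-class pipeline: {cpu_ms / 1000:.1f} s")
```

With these made-up figures the accelerated chain stays near 100 milliseconds while the same chain at a 20x slowdown stretches past two seconds, which is the "seconds, if not unusable" regime Buck describes.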

During the week, the company announced that Tesla T4 GPUs are being offered in Nvidia-certified servers from Cisco, Dell EMC, Fujitsu, HPE, and Lenovo. But for Buck, the addition of a GPU into a standard enterprise server is merely the first step into the larger world of distributed computing, and to get its potential customers there, the data scientist is key.

"In the end, that's the person we need to help, the data scientist who is obviously under enormous pressure to drink from this firehose of data that they have been collecting, and actually making business improvements," Buck told ZDNet.

"I think distributed data analytics and data science is the big next chapter for the enterprise, and one technical barrier might be turning the corner on the networking, I'm starting to see that with 25G and 100G, so I think as people see what they can do, I think it'll definitely catch on and move quickly."

Nvidia's $6.9 billion purchase of Mellanox should signal how seriously the company is taking its push into the datacentre. It already had its own compute stack; should the purchase gain approval, it will add networking and interconnects to its portfolio.

When discussing the purchase, Huang pointed to the increase of east-west traffic in the datacentre due to technologies like containers and neural networks, as well as the size of data being analysed.

"Both of these conditions cause the network to be the bottleneck, both conditions, and during that time when Moore's law is slowing down, the software stack, the networking software stack, has to be moved onto the fabric as much as possible," the Nvidia CEO said.

"The CPU is now too rare a resource, so you have to offload any work that you can, and Mellanox is world class at CPU offloading, they take the entire stack of networking and they run it on the smart NIC.

"In the future, more and more and more of that will happen, so the network is going to become intelligent."

According to Huang, the computing fabric will extend beyond the node and into the network.

"The whole thing is going to be one large computer," he said.

At the same time as it tries to get enterprises at the bottom end of the market onto GPUs, Nvidia is offering RTX Server Pods packing 1,280 GPUs, the sort of machine being pitched at telcos so they can offer services like GeForce Now. SoftBank in Japan and Korea's LG Uplus have already signed up for Pods.

From above and below, Nvidia is determined to find a way into the datacentre.

Disclosure: Chris Duckett travelled to GTC in San Jose as a guest of Nvidia
