Nvidia reaches out to throw AI at every edge

From cars to TV monitoring, and even phones, Nvidia is determined to put AI into every nook and cranny it finds.
Written by Chris Duckett, Contributor

At last year's GTC conference, Nvidia CEO Jen-Hsun Huang repeatedly said the company was all in on deep learning, and it was also the year the company unveiled its datacentre solution, the DGX-1.

Fast forward to 2017, and the company has grown its datacentre revenue from $143 million in Q1 2016 to $409 million in Q1 2017, while the major public cloud vendors have moved from offering few GPU computing options to each having a GPU product in its portfolio.

In the space of 12 months, the datacentre business has doubled its share of company revenue: it now accounts for a touch over 20 percent of all revenue, and is approaching 40 percent of the revenue brought in by the traditional gaming GPU cash cow.
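As a quick sanity check on those figures, the arithmetic below works back from the two reported datacentre numbers; the total and gaming revenues are implied by the article's stated ratios, not reported figures.

```python
# Sanity check on the revenue figures cited above.
# Only the two datacentre revenue numbers are reported; total and gaming
# revenue are back-calculated from the stated percentages (assumptions).
dc_q1_2016 = 143e6   # datacentre revenue, Q1 2016 (reported)
dc_q1_2017 = 409e6   # datacentre revenue, Q1 2017 (reported)

growth = dc_q1_2017 / dc_q1_2016
print(f"Year-on-year growth: {growth:.2f}x")  # ~2.86x, i.e. nearly triple

# "A touch over 20 percent" of total revenue and "approaching 40 percent"
# of gaming revenue imply, roughly:
implied_total = dc_q1_2017 / 0.20    # ~$2.0 billion total
implied_gaming = dc_q1_2017 / 0.40   # ~$1.0 billion gaming
print(f"Implied total revenue:  ${implied_total / 1e9:.2f}B")
print(f"Implied gaming revenue: ${implied_gaming / 1e9:.2f}B")
```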

With its seeming conquest of datacentre machine learning rolling on (Intel would unsurprisingly disagree), Nvidia needs to look at edge devices if it wants to continue the spread of artificial intelligence.

Within the bevy of announcements trotted out at GTC 2017, the company said it is open-sourcing the deep-learning accelerator (DLA) found on its Xavier board, which is targeted at self-driving cars.

Xavier ticks the boxes for edge devices: it draws little power, yet packs enough computational grunt to handle AI inference tasks.

In a perfect world for the company, once the DLA is available for all and sundry, it will start to find its way across the computing world into devices and platforms that Nvidia does not concern itself with.

One executive remarked during a press briefing that they fully expect to see AI find its way onto phones in the near future. If such a situation comes to pass, then expect more of the expensive DGX devices to find their way out the door, and plenty more new Volta-based Teslas to boost the datacentre bottom line.

Elsewhere in Huang's keynote on Wednesday morning, the company took the wraps off Nvidia GPU Cloud (NGC), which allows developers to burst workloads out onto the GPUs within a DGX server, or the GPUs found in public cloud providers. NGC uses a software stack stored inside an NVDocker container, with Nvidia promising to fully test and maintain the images offered. The NGC image will also run on desktops using a Pascal-based GTX 1080 GPU.

In the automotive sector, it was announced that Toyota has teamed up with Nvidia to use its Drive PX modules for autonomous operations in the Japanese giant's cars. Details on the exact timing for when the results of the partnership will appear were scant, with company spokespeople only reiterating the "next few years" timeline.

The company also unveiled its Isaac robot simulator, which trains machines in a virtual world that mimics real-world conditions as closely as possible. The idea is that when robots need to perform dangerous tasks, or developers want to cut down on training time, neural networks can be trained faster and more safely than in the physical world.

Despite the rapid-fire nature of the announcements at GTC, the thrust is clear: Nvidia wants AI to permeate every piece of computing it can.

If Nvidia's datacentre business is able to replicate its performance over the past year and almost triple its revenue again, that would put it within range of overtaking gaming as the company's primary source of revenue.
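Putting rough numbers on that projection, and assuming gaming revenue can be inferred from the approximately 40 percent ratio cited earlier (it is not reported directly):

```python
# Rough projection behind the "overtaking gaming" claim.
# Gaming revenue is inferred from the ~40 percent ratio, not reported.
dc_now = 409e6                  # datacentre revenue, Q1 2017 (reported)
implied_gaming = dc_now / 0.40  # ~$1.02 billion, inferred
dc_tripled = dc_now * 3         # ~$1.23 billion if growth repeats

# A tripled datacentre business would edge past a flat gaming business.
print(dc_tripled > implied_gaming)
```

The comparison only holds if gaming revenue stays roughly flat, which is the implicit assumption in the claim above.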

It's extremely unlikely to happen in the coming year, but nevertheless, Nvidia is making inroads in the datacentre on the back of AI, and it will push in deeper if consumer edge devices embrace its AI vision.

Disclosure: Chris Duckett travelled to GTC as a guest of Nvidia
