Nvidia has taken the wraps off its next iteration of workstations for data scientists and users interested in machine learning, with a reference design featuring a pair of Quadro RTX GPUs.
Announced at Nvidia GTC on Monday, the reference design pairs two Quadro RTX 8000 or 6000 GPUs, and is slated to provide 260 teraflops of compute and 96GB of memory, the latter made available by pooling the GPUs over NVLink.
Signed up to provide the new, beefier workstations are Dell, HP, and Lenovo.
On the server side, the company unveiled its RTX blade server, which can pack 40 GPUs into an 8U space and is labelled an RTX Server Pod when combined with 31 other RTX blade servers. All up, a Pod houses 1,280 GPUs. The storage and networking backbone of the blade servers is provided by Mellanox, which Nvidia agreed to purchase for just shy of $7 billion last week.
Speaking during his keynote, CEO Jensen Huang said the Pods would be used to support the company's GeForce Now service, with SoftBank and LG Uplus announced as new members of the GeForce Now Alliance, as well as its upcoming Omniverse collaboration product, which Huang described as Google Docs for movie studios.
On the Tesla front, T4 GPUs are being offered by Cisco, Dell EMC, Fujitsu, HPE, and Lenovo in machines that have been certified as Nvidia GPU Cloud-ready. The validation program, which Nvidia launched in November, recognises a "demonstrated ability to excel in a full range of accelerated workloads", as well as the ability to run containers put together by Nvidia for certain workloads.
"The rapid adoption of T4 on the world's most popular business servers signals the start of a new modern era in enterprise computing -- one in which GPU acceleration has become standard," Nvidia vice president and general manager of Accelerated Computing Ian Buck said.
In the cloud, users of Amazon Web Services (AWS) will soon be able to make use of Nvidia Tesla T4 GPUs through EC2 G4 instances, with a preview open now and general availability slated for the coming weeks. AWS users will also be able to make use of T4s with Amazon Elastic Container Service for Kubernetes.
The cloud giant already supports Nvidia Tesla V100 GPUs on its P3 instances, which scale up to 8 GPUs and 64 Intel Xeon CPUs.
At the same time, Nvidia is repackaging its software stack and libraries under the CUDA-X moniker, which covers RAPIDS, cuDNN, cuML, and TensorRT.
Finally, Google Cloud ML and Microsoft Azure Machine Learning have integrated RAPIDS, which Nvidia has touted as being able to reduce neural network training times by a factor of 20.
Disclosure: Chris Duckett travelled to GTC in San Jose as a guest of Nvidia.
- Nvidia's purchase of Mellanox turns up heat on Intel rivalry, data center ambitions
- Dell EMC, Nvidia make AI reference architecture available
- China's AI scientists teach a neural net to train itself
- CES 2019: Nvidia's new GeForce RTX 2060 is just $349
- NVIDIA's new Turing architecture could make life much easier for video producers (TechRepublic)
- Cheat sheet: TensorFlow, an open source software library for machine learning (TechRepublic)
- Why it could soon be much easier to get your hands on NVIDIA GPUs (TechRepublic)