Announced at Nvidia GTC on Monday, the dual Quadro RTX 8000 or 6000 GPU design is slated to provide 260 teraflops and make 96GB of memory available thanks to the use of NVLink.
Signed up to provide the new, beefier workstations are Dell, HP, and Lenovo.
On the server side, the company unveiled its RTX blade server, which can pack 40 GPUs into an 8U space, and which is labelled an RTX Server Pod when combined with 31 other RTX blade servers. All up, an RTX Server Pod has 1,280 GPUs. The storage and networking backbone of the blade servers is provided by Mellanox -- which Nvidia purchased for just shy of $7 billion last week.
Speaking during his keynote, CEO Jensen Huang said Pods would be used to support the company's GeForce Now service -- with SoftBank and LG Uplus announced as members of the GeForce Now Alliance -- as well as its upcoming Omniverse collaboration product, which Huang described as Google Docs for movie studios.
For Tesla GPUs, Cisco, Dell EMC, Fujitsu, HPE, and Lenovo are offering T4 GPUs in machines that have been certified as Nvidia GPU Cloud-ready -- a validation Nvidia launched in November that shows a "demonstrated ability to excel in a full range of accelerated workloads" and the ability to run containers put together by Nvidia for certain workloads.
"The rapid adoption of T4 on the world's most popular business servers signals the start of a new modern era in enterprise computing -- one in which GPU acceleration has become standard," Nvidia vice president and general manager of Accelerated Computing Ian Buck said.
In the cloud, users of Amazon Web Services (AWS) will soon be able to make use of Nvidia Tesla T4 GPUs with EC2 G4 instances, with a preview open now and general availability slated for the coming weeks. AWS users will also be able to make use of T4s with Amazon Elastic Container Service for Kubernetes.