Nvidia's Tesla V100 GPU gets broad backing as server vendors eye AI workloads

Nvidia says a single Tesla V100 GPU delivers the deep learning performance of 100 CPUs. That effectively lifts the speed limit on AI workloads.
Written by Larry Dignan, Contributor

The major server vendors are lining up behind Nvidia's Tesla V100 GPU accelerator in a move that is expected to make artificial intelligence and machine learning workloads more mainstream.

Dell EMC, HPE, IBM and Supermicro outlined servers built on Nvidia's latest GPU accelerators, which are based on the graphics chip maker's Volta architecture. Each V100 delivers more than 120 teraflops of deep learning performance, throughput that effectively takes the speed limit off AI workloads.
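
To put the headline numbers in context, here is a rough back-of-the-envelope sketch of the arithmetic behind the "one V100 equates to 100 CPUs" claim. The 120-teraflop figure is Nvidia's stated per-GPU deep learning throughput; the CPU baseline is an illustrative assumption, not a number from the article.

```python
# Back-of-the-envelope math behind the "one V100 ~ 100 CPUs" claim.
# 120 TFLOPS is Nvidia's stated deep learning (Tensor Core) throughput
# per V100; the ~1.2 TFLOPS CPU baseline is an assumed figure for a
# contemporary server CPU doing dense math, not a spec from the article.

V100_DEEP_LEARNING_TFLOPS = 120.0  # per-GPU throughput (Nvidia's figure)
CPU_DENSE_MATH_TFLOPS = 1.2        # assumed server-CPU throughput (illustrative)

cpu_equivalents = V100_DEEP_LEARNING_TFLOPS / CPU_DENSE_MATH_TFLOPS
print(f"One V100 is roughly {cpu_equivalents:.0f} CPUs' worth of deep learning math")
# -> One V100 is roughly 100 CPUs' worth of deep learning math
```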

In a blog post, Brad McCredie, IBM's vice president of cognitive systems development, noted that the V100, combined with Nvidia's NVLink, PCI-Express 4 and memory coherence technology, brings "unprecedented internal bandwidth" to AI-optimized systems.
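
For a sense of what "unprecedented internal bandwidth" means in practice, here is a minimal sketch comparing NVLink 2.0 to a standard PCIe 3.0 x16 slot. The link counts and per-link rates are Nvidia's published Volta specs, not figures quoted in the article.

```python
# Rough interconnect-bandwidth comparison illustrating why NVLink
# matters for multi-GPU AI systems. Figures are public NVLink 2.0 and
# PCIe 3.0 specs (assumptions for illustration, not from the article).

NVLINK2_GBPS_PER_LINK = 25  # GB/s per direction per NVLink 2.0 link
V100_NVLINK_LINKS = 6       # NVLink links on a V100 (SXM2 form factor)
PCIE3_X16_GBPS = 16         # GB/s per direction for PCIe 3.0 x16

nvlink_total = NVLINK2_GBPS_PER_LINK * V100_NVLINK_LINKS
print(f"NVLink 2.0 aggregate: {nvlink_total} GB/s per direction vs "
      f"PCIe 3.0 x16: {PCIE3_X16_GBPS} GB/s "
      f"({nvlink_total / PCIE3_X16_GBPS:.1f}x)")
```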

The V100-based systems include:

  • Dell EMC's PowerEdge R740, which supports up to three V100 GPUs for PCIe, plus two higher-end systems, the PowerEdge R740XD and the C4130.
  • HPE's Apollo 6500, which will support up to eight V100 GPUs for PCIe, and the ProLiant DL380 system, which supports up to three V100 GPUs.
  • IBM's Power Systems servers built on the Power9 processor will support multiple V100 GPUs. IBM will roll out its Power9-based systems later this year.
  • Supermicro also has a series of workstations and servers built with the V100.
  • Inspur, Lenovo and Huawei also launched systems based on the V100.

More: NVIDIA morphs from graphics and gaming to AI and deep learning
