AMD unveils Radeon Instinct MI60 and MI50 accelerators for AI and HPC

The new accelerators are the world's first 7nm data center GPUs, AMD says.
Written by Stephanie Condon, Senior Writer

AMD on Tuesday unveiled the Radeon Instinct MI60 and MI50, a pair of accelerators designed for next-generation deep learning, HPC, cloud computing and rendering applications. AMD says they are the world's first 7nm data center GPUs.

The MI60 delivers up to 7.4 TFLOPS of peak FP64 performance, which should allow scientists and researchers to more efficiently process HPC applications. The use cases, AMD notes, span a range of industries including life sciences, energy, finance, automotive, aerospace and defense. The MI50 delivers up to 6.7 TFLOPS of FP64 peak performance.
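For readers curious where a figure like 7.4 TFLOPS comes from, here is a rough back-of-the-envelope check. The 4096 stream processors and roughly 1.8 GHz peak clock used below are publicly reported Vega 20 specifications rather than numbers from this article, and FP64 is assumed to run at half the FP32 rate.

```python
# Rough sanity check of the quoted peak-FP64 figure. The stream-processor
# count and peak clock are assumed Vega 20 specs, not taken from the article;
# FP64 is assumed to run at half the FP32 rate.
STREAM_PROCESSORS = 4096
PEAK_CLOCK_HZ = 1.8e9
FLOPS_PER_CLOCK = 2  # one fused multiply-add per stream processor per clock

peak_fp32 = STREAM_PROCESSORS * FLOPS_PER_CLOCK * PEAK_CLOCK_HZ
peak_fp64 = peak_fp32 / 2  # half-rate double precision

print(f"Peak FP32: {peak_fp32 / 1e12:.1f} TFLOPS")  # ~14.7 TFLOPS
print(f"Peak FP64: {peak_fp64 / 1e12:.1f} TFLOPS")  # ~7.4 TFLOPS
```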

The accelerators provide flexible mixed-precision FP16, FP32 and INT4/INT8 capabilities for dynamic workloads, such as training complex neural networks or running inference against those trained networks.
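As a rough illustration of what the reduced-precision inference path looks like in practice, the sketch below casts a small PyTorch model to FP16 before running it. This is generic framework code with placeholder model and input shapes, not AMD sample code.

```python
# Minimal sketch of FP16 inference with PyTorch -- a generic illustration
# of the reduced-precision path, not AMD sample code.
import torch

model = torch.nn.Sequential(
    torch.nn.Linear(1024, 4096),
    torch.nn.ReLU(),
    torch.nn.Linear(4096, 1000),
).eval()

device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)
if device == "cuda":
    model = model.half()  # cast weights to FP16 on the GPU

dtype = torch.float16 if device == "cuda" else torch.float32
x = torch.randn(8, 1024, device=device, dtype=dtype)  # placeholder batch

with torch.no_grad():
    logits = model(x)  # forward pass runs in reduced precision on the GPU
print(logits.dtype, logits.shape)
```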

"Legacy GPU architectures limit IT managers from effectively addressing the constantly evolving demands of processing and analyzing huge datasets for modern cloud datacenter workloads," David Wang, SVP of engineering for the Radeon Technologies Group at AMD, said in a statement.

The accelerators feature two Infinity Fabric Links per GPU, delivering up to 200 GB/s of peer-to-peer bandwidth for communications up to 6X faster than PCIe Gen 3 interconnect speeds. The links also enable up to four GPUs to be connected in a hive ring configuration.
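A quick calculation shows where the "up to 6X" figure plausibly comes from: the 200 GB/s total for two links (per the announcement) compared against a bidirectional PCIe Gen 3 x16 connection. The PCIe math below uses the standard 8 GT/s per lane and 128b/130b encoding; treat the comparison as illustrative.

```python
# Back-of-the-envelope check of the "up to 6X" claim: two Infinity Fabric
# links totalling 200 GB/s versus a bidirectional PCIe Gen 3 x16 connection.
GT_PER_LANE_GEN3 = 8          # GT/s per lane, PCIe Gen 3
ENCODING = 128 / 130          # 128b/130b line-coding efficiency
LANES = 16

pcie3_one_way = GT_PER_LANE_GEN3 * ENCODING * LANES / 8  # ~15.8 GB/s
pcie3_bidir = 2 * pcie3_one_way                          # ~31.5 GB/s
infinity_fabric = 200                                    # GB/s, two links per GPU

print(f"PCIe Gen 3 x16 (bidirectional): ~{pcie3_bidir:.1f} GB/s")
print(f"Infinity Fabric (two links):    {infinity_fabric} GB/s")
print(f"Ratio: ~{infinity_fabric / pcie3_bidir:.1f}x")  # ~6.3x
```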

They are also the first GPUs capable of supporting next-generation PCIe 4.0 interconnect, AMD says, which is up to 2X faster than other x86 CPU-to-GPU interconnect technologies.
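The "up to 2X" figure tracks the doubled per-lane signaling rate of PCIe 4.0 over Gen 3 (16 GT/s versus 8 GT/s). The short comparison below is generic PCIe-spec arithmetic, assuming the standard 128b/130b encoding, not an AMD benchmark.

```python
# Per-direction x16 throughput for PCIe 3.0 vs 4.0, assuming standard
# 128b/130b encoding -- a generic spec comparison, not a measured result.
for gen, gt_per_lane in (("PCIe 3.0", 8), ("PCIe 4.0", 16)):
    gb_per_s = gt_per_lane * (128 / 130) * 16 / 8
    print(f"{gen} x16: ~{gb_per_s:.1f} GB/s per direction")
```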

The MI60 provides 32GB of second-generation High Bandwidth Memory (HBM2) with error-correcting code (ECC), while the MI50 provides 16GB of HBM2 ECC memory.

The MI60 is expected to ship to data center customers by the end of 2018, while the MI50 is expected to begin shipping by the end of Q1 2019.

AMD also announced a new version of ROCm, its open source, programming language-independent HPC/hyperscale-class platform for GPU computing. ROCm 2.0 supports the new Radeon Instinct accelerators and provides updated math libraries for new optimized deep learning operations (DLOPS). It also supports 64-bit Linux operating systems including CentOS, RHEL and Ubuntu, along with the latest versions of popular deep learning frameworks, including TensorFlow 1.11, PyTorch (Caffe2) and others. The ROCm 2.0 software platform is expected to be available by the end of 2018.
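For a sense of what framework support looks like from the developer's side, the snippet below is a minimal sketch of checking GPU visibility from a ROCm build of PyTorch. ROCm's PyTorch packages reuse the torch.cuda namespace (via HIP), so the same calls work on AMD and NVIDIA hardware; the reported device name depends on the hardware present.

```python
# Minimal sketch: confirm a ROCm (or CUDA) build of PyTorch can see the
# accelerator and run a simple matrix multiply on it.
import torch

if torch.cuda.is_available():  # ROCm builds also answer through this API
    print("GPU found:", torch.cuda.get_device_name(0))
    a = torch.randn(2048, 2048, device="cuda")
    b = torch.randn(2048, 2048, device="cuda")
    print("matmul result shape:", (a @ b).shape)
else:
    print("No GPU visible to this PyTorch build")
```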

Analyst Patrick Moorhead, founder of Moor Insights & Strategy, said that with the Radeon Instinct's 7nm design, "AMD moved the ball down the field from a hardware perspective."

"I am impressed with its 1TBs memory bandwidth, ganging with EPYC and Infinity Fabric, and density," Moorhead said in a statement. "I believe its degree of success will be directly related to it uptake of ROCM2 software into customer's workflow. AMD Radeon has always had good hardware and it takes hardware, software plus go-to-market to fully move the needle."
