Amazon's new GPU-cloud wants to chew through your AI and big data projects

The AWS service is aimed at applications that need vast amounts of parallel compute, like seismic analysis and genomics.
Written by Steve Ranger, Global News Director

Amazon's new service will be used for applications including genomic research.

Image: Getty Images/iStockphoto

Amazon Web Services (AWS) has unveiled a new GPU-powered cloud computing service for artificial intelligence, seismic analysis, molecular modeling, genomics, and other applications that need vast amounts of parallel processing power.

AWS said its P2 instances for Amazon Elastic Compute Cloud (Amazon EC2) are aimed at applications that require "massive parallel floating point performance".

"These instances were designed to chew through tough, large-scale machine learning, deep learning, computational fluid dynamics, seismic analysis, molecular modeling, genomics, and computational finance workloads," said Jeff Barr, chief evangelist at AWS.

While GPUs were first associated with gaming, they have found a second life handling huge computing workloads, because they can be scaled out so that banks of GPUs work on tasks in parallel. This contrasts with the traditional approach of scaling up, in which ever more complex problems were tackled by individual machines with ever faster CPUs, a path that is becoming increasingly hard to sustain.

That said, not every type of workload can be split effectively across multiple GPUs. For those that can, Amazon said, a GPU-powered cloud service lets customers build compute-intensive applications using the CUDA parallel computing platform or the OpenCL framework without up-front capital investment.
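To illustrate the programming model that CUDA and OpenCL expose, here is a minimal sketch in plain Python: each "work item" computes exactly one output element independently of the others, which is the kind of embarrassingly parallel structure that maps well onto banks of GPU cores. The function names are illustrative only, not part of any AWS, CUDA, or OpenCL API.

```python
# Data-parallel pattern behind CUDA/OpenCL kernels: every "work item"
# computes one output element, so the work can be spread across thousands
# of GPU cores. This pure-Python version simply iterates instead.
def saxpy_kernel(i, a, x, y):
    # In a real CUDA kernel, `i` would be the thread's global index.
    return a * x[i] + y[i]

def launch(n, a, x, y):
    # A GPU would run all n "threads" concurrently; here we loop.
    return [saxpy_kernel(i, a, x, y) for i in range(n)]

result = launch(4, 2.0, [1, 2, 3, 4], [10, 20, 30, 40])
# result == [12.0, 24.0, 36.0, 48.0]
```

Because each element's result depends on no other element, the same kernel scales from one GPU to sixteen with no change to the logic, only to how the index range is partitioned.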

The instances pack some serious computing firepower: the largest P2 instance offers 16 GPUs with a combined 192 GB of video memory, 40,000 parallel processing cores, 70 teraflops of single precision floating point performance, over 23 teraflops of double precision floating point performance, and GPUDirect technology for higher bandwidth and lower latency peer-to-peer communication between GPUs. P2 instances also feature up to 732 GB of host memory, and up to 64 vCPUs using custom Intel Xeon E5-2686 v4 Broadwell processors.
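Those aggregate figures also give a rough per-GPU picture. The back-of-the-envelope sketch below simply divides the p2.16xlarge totals quoted above by its 16 GPUs; note the article's 40,000-core figure is rounded, so the per-GPU values are approximate.

```python
# Split the p2.16xlarge aggregates (from the article) across its 16 GPUs.
# Per-GPU values are simple division; the core count is a rounded figure.
NUM_GPUS = 16
TOTAL_VIDEO_MEMORY_GB = 192
TOTAL_CORES = 40_000          # rounded in the article
TOTAL_SINGLE_TFLOPS = 70

per_gpu_memory_gb = TOTAL_VIDEO_MEMORY_GB / NUM_GPUS   # 12.0 GB per GPU
per_gpu_cores = TOTAL_CORES / NUM_GPUS                 # ~2,500 cores per GPU
per_gpu_single_tflops = TOTAL_SINGLE_TFLOPS / NUM_GPUS # ~4.4 TFLOPS per GPU
```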

Matt Garman, vice president of Amazon EC2, said customers needed more GPU performance for workloads like high-performance computing and big data processing. The new P2 instances offer seven times the single-precision floating point capacity and 60 times the double-precision floating point capacity of Amazon's largest G2 instance, launched two years ago.

AWS customer Altair Engineering said using the service had cut the time it took to run simulations. Stephen Cosgrove, director of computational fluid dynamics at Altair, said: "We're able to leverage the massive amount of aggregate GPU memory and double precision floating point performance in Amazon EC2 P2 instances to fit more simulations into a single node, significantly reduce customer simulation times, and reduce the cost of running large simulations."

P2 instances are available in three instance sizes: p2.16xlarge with 16 GPUs, p2.8xlarge with 8 GPUs, and p2.xlarge with 1 GPU, and are available in the US East (Northern Virginia), US West (Oregon), and EU (Ireland) regions.
