
AWS says its new monster GPU array is 'most powerful' in the cloud

AWS has released its most powerful and expensive instance aimed at AI developers.
Written by Liam Tung, Contributing Writer

Amazon Web Services (AWS) has launched new P3 instances on its EC2 cloud computing service, which are powered by Nvidia's Volta-architecture Tesla V100 GPUs and promise to dramatically speed up the training of machine learning models.

The P3 instances are designed to handle compute-intensive machine learning, deep learning, computational fluid dynamics, computational finance, seismic analysis, molecular modelling, and genomics workloads. Amazon said the new services could reduce the training time for sophisticated deep learning models from days to hours. These are the first instances to include Nvidia Tesla V100 GPUs, and AWS said its P3 instances are "the most powerful GPU instances available in the cloud".

The new P3 instances are available with one, four, or eight Tesla V100 GPUs and eight, 32, or 64 vCPUs based on custom Intel Xeon E5 (Broadwell) processors.

The new instances are available in the US East (N. Virginia), US West (Oregon), EU West (Ireland), and Asia Pacific (Tokyo) regions, and will be coming to other markets in the future, according to AWS.
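For readers who want to try the new instances, launching a P3 is no different from launching any other EC2 instance type. Below is a minimal sketch using the boto3 SDK; the AMI ID, key pair, and region are placeholders you would substitute with your own values (for example, the ID of one of the new deep learning AMIs in your region).

import boto3

# Minimal sketch: launch a single-GPU P3 instance in US East (N. Virginia).
# The AMI ID and key pair name below are hypothetical placeholders.
ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-xxxxxxxx",        # e.g. a deep learning AMI in this region
    InstanceType="p3.2xlarge",     # 1 x Tesla V100, 8 vCPUs
    KeyName="my-key-pair",         # hypothetical key pair
    MinCount=1,
    MaxCount=1,
)

print(response["Instances"][0]["InstanceId"])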

Each of the P3's GPUs has 5,120 CUDA cores and 640 Tensor cores, the latter being key to accelerating the training of deep neural networks. The p3.16xlarge instance, for example, can do 125 trillion single-precision floating-point multiplications per second. AWS chief evangelist Jeff Barr said this makes the instance around 781,000 times faster than the Cray-1 supercomputer of 1976.
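Those numbers are easy to sanity-check with some back-of-the-envelope arithmetic. The sketch below assumes a peak of roughly 160 megaflops for the original Cray-1, a commonly cited figure, and reproduces both the per-GPU throughput and the speed-up Barr quotes.

# Back-of-the-envelope check of the p3.16xlarge vs Cray-1 comparison.
# Assumes ~160 MFLOPS peak for the 1976 Cray-1 (a commonly cited figure).
p3_16xlarge_flops = 125e12   # 125 TFLOPS single precision across 8 GPUs
cray_1_flops = 160e6         # ~160 MFLOPS

per_gpu_tflops = p3_16xlarge_flops / 8 / 1e12
speedup = p3_16xlarge_flops / cray_1_flops

print(f"per V100: ~{per_gpu_tflops:.1f} TFLOPS")    # ~15.6 TFLOPS each
print(f"speed-up over Cray-1: ~{speedup:,.0f}x")    # ~781,250x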


AWS is also releasing a new set of deep learning Amazon Machine Images (AMI), which include frameworks such as Google's TensorFlow and other tools for building AI systems on AWS, including version 9 of Nvidia's CUDA toolkit, which adds support for the Volta architecture.
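As a quick smoke test after booting one of these AMIs on a P3 instance, the V100s should show up as GPU devices inside TensorFlow. A minimal sketch, assuming a TensorFlow 1.x build with GPU support of the kind the deep learning AMIs ship:

# List the devices TensorFlow can see on the instance; on a p3.16xlarge
# each of the eight V100s should appear as /device:GPU:0 ... /device:GPU:7.
from tensorflow.python.client import device_lib

devices = device_lib.list_local_devices()
gpus = [d for d in devices if d.device_type == "GPU"]

print(f"{len(gpus)} GPU(s) visible to TensorFlow")
for gpu in gpus:
    print(gpu.name, gpu.physical_device_desc)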

All this extra power does come at a cost. In Tokyo, for example, the on-demand rate for the p3.16xlarge is $41.94 per hour, compared to the corresponding P2 GPU instance's rate of $24.67. In the US East (N. Virginia) region the top-end P3 costs $24.48 per hour, while in Ireland it's $26.44 per hour. The P3 instances are, however, also available under spot pricing and as reserved instances.
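To put those hourly rates in context, here is a rough on-demand cost comparison for a hypothetical 10-hour training run, using the figures quoted above; spot pricing and reserved instances would bring these numbers down.

# Rough on-demand cost of a hypothetical 10-hour training job on a
# p3.16xlarge, using the hourly rates quoted in this article.
rates_per_hour = {
    "US East (N. Virginia)": 24.48,
    "EU West (Ireland)": 26.44,
    "Asia Pacific (Tokyo)": 41.94,
}

training_hours = 10  # hypothetical job length

for region, rate in rates_per_hour.items():
    print(f"{region}: ${rate * training_hours:,.2f}")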

Nvidia today also announced its GPU Cloud for AI developers, which is available to users of the AWS P3 instances. Similar to AWS's AMIs, the service offers developers AI frameworks such as Caffe, Caffe2, Microsoft's Cognitive Toolkit, and TensorFlow, as well as CUDA and other tools to take advantage of the faster P3 instances.

Barr notes that the newest AMIs include the latest versions of Apache MXNet, Caffe2, and TensorFlow, which currently support the Tesla V100 GPUs. AWS will add further updates once Microsoft Cognitive Toolkit and PyTorch add support for Tesla V100 GPUs.

Related coverage

AWS, Microsoft launch deep learning interface Gluon

The interface gives developers a place where they can prototype, build, train, and deploy machine learning models for cloud and mobile apps.

AWS announces per-second billing for EC2 instances

The new, more granular billing for compute resources will be introduced next month for Linux instances in all AWS regions.
