Amazon Web Services on Monday launched graphics processing unit instances aimed at high-performance computing workloads.
In a statement and blog post, Amazon said graphics processing unit (GPU) servers have become popular enough to bring to AWS. GPU servers, powered by Nvidia's Tesla chips, are about to hit the mainstream as Dell, HP and IBM bring formerly custom servers to market.
AWS said that these GPU servers have generally been out of reach for many companies due to cost and architectural constraints. Now AWS will offer them on its Elastic Compute Cloud (EC2) service.
Amazon's Cluster GPU Instances offer 22 GB of memory and 33.5 EC2 Compute Units, and they tap Amazon's cluster network, which is designed for data-intensive applications. Each GPU instance features two Nvidia Tesla M2050 GPUs.
The key specs:
- A pair of NVIDIA Tesla M2050 "Fermi" GPUs.
- A pair of quad-core Intel "Nehalem" X5570 processors offering 33.5 ECUs (EC2 Compute Units).
- 22 GB of RAM.
- 1690 GB of local instance storage.
- 10 Gbps Ethernet, with the ability to create low-latency, full-bisection-bandwidth HPC clusters.
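As a rough illustration of how a cluster like the one described above might be requested, the sketch below builds the parameters for an EC2 RunInstances-style call that places GPU instances in a cluster placement group, the mechanism behind the low-latency, full-bisection network. The instance type name `cg1.4xlarge`, the AMI id and the helper function are illustrative assumptions, not details from Amazon's announcement; no API call is actually made.

```python
# Illustrative sketch only: builds a RunInstances-style parameter dict for
# requesting Cluster GPU instances inside a cluster placement group.
# "cg1.4xlarge", the AMI id and the group name are assumed placeholders.

def gpu_cluster_request(count, group_name="hpc-group"):
    """Return request parameters for a hypothetical GPU HPC cluster."""
    return {
        "ImageId": "ami-xxxxxxxx",       # placeholder HVM AMI id
        "InstanceType": "cg1.4xlarge",   # 2x Tesla M2050, 22 GB RAM, 33.5 ECU
        "MinCount": count,
        "MaxCount": count,
        # Cluster placement groups co-locate instances on the 10 Gbps fabric.
        "Placement": {"GroupName": group_name},
    }

params = gpu_cluster_request(8)
print(params["InstanceType"], params["MinCount"])
```

Requesting all nodes in one placement group is what gives an MPI-style job the full-bisection bandwidth mentioned in the spec list.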
For Nvidia, the AWS launch is a nice win: GPUs may gain a larger footprint in the enterprise. For Amazon, the GPU clusters are a nice way to tap verticals such as oil and gas, graphics and engineering design.
Amazon noted that customers can mix and match the standard instance types with the GPU flavor to coax the most performance out of the cloud.
The company has been testing its GPU instances with a few customers, such as Calgary Scientific, a medical imaging software company; BrightScope, a financial data analytics outfit; and Elemental Technologies, which provides video processing applications.