Amazon Web Services tackles high performance computing instances

Summary: Amazon Web Services will launch compute clusters for high-performance computing applications like genomic research.

Amazon Web Services on Tuesday will launch compute clusters for high-performance computing applications like genomic research.

The offering, dubbed Cluster Compute Instances for Amazon EC2, is designed for compute-intensive workloads such as the parallel processing jobs run by Lawrence Berkeley National Laboratory for research.

High-performance computing (HPC) is a competitive space that's closely watched for jaw-dropping throughput and speeds. With its move into HPC, AWS is looking to capture some of the market typically dominated by the likes of HP, Cray and IBM with their supercomputers. AWS reps say that customers have been asking for instances to run complex computing workloads.

Amazon Web Services' (AWS) cluster computing move will either be an interesting showpiece for a limited market or end up democratizing supercomputing. In the meantime, Lawrence Berkeley National Laboratory was an early tester of AWS' Cluster Compute Instances. AWS' HPC service features pay-as-you-go pricing and the ability to scale up or down on demand. These Cluster Compute Instances operate the same way as standard instances on Amazon's Elastic Compute Cloud.

According to AWS, Cluster Compute Instances will run $1.60 per instance hour. A one-year reserved instance will be $4,290 upfront and carry a usage price of 56 cents an hour per instance. A three-year reserved instance will be $6,590 upfront and 56 cents an hour per instance. In a blog post, AWS said that each Cluster Compute Instance consists of a pair of quad-core Intel "Nehalem" X5570 processors with a total of 33.5 ECU (EC2 Compute Units), 23 GB of RAM, and 1690 GB of local instance storage.
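Using only the prices quoted above, a quick back-of-the-envelope sketch shows when the one-year reserved option becomes cheaper than pure on-demand usage (the function name and structure here are illustrative, not anything AWS provides):

```python
# Break-even sketch for the Cluster Compute Instance prices in the article:
# $1.60/hr on demand, or a 1-year reserved instance at $4,290 upfront
# plus $0.56/hr.

ON_DEMAND_RATE = 1.60       # $/instance-hour, on demand
RESERVED_UPFRONT = 4290.00  # $ upfront per instance, 1-year term
RESERVED_RATE = 0.56        # $/instance-hour, reserved

def break_even_hours(upfront: float, reserved_rate: float, on_demand_rate: float) -> float:
    """Hours of use at which the reserved instance becomes the cheaper option."""
    return upfront / (on_demand_rate - reserved_rate)

hours = break_even_hours(RESERVED_UPFRONT, RESERVED_RATE, ON_DEMAND_RATE)
print(f"Break-even after {hours:.0f} hours (~{hours / 24:.0f} days)")
# 4290 / (1.60 - 0.56) = 4125 hours, roughly 172 days of continuous use
```

In other words, a workload that keeps an instance busy for more than about six months of the year comes out ahead on the reserved plan.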

The big difference is speed. AWS says that applications on its HPC instances can garner 10 times the network throughput of its largest EC2 instance types. In a statement, Peter De Santis, general manager of Amazon EC2, said "in our last pre-production test run, we saw an 880 server sub-cluster achieve a network rate of 40.62 TFlop." AWS says it hasn't tried to establish a theoretical peak, a metric that's closely watched in the supercomputer rankings, but larger clusters would provide more performance.

AWS said in its blog that its HPC system would rank 146 on the Supercomputer Top 500 list:

"We ran the gold-standard High Performance Linpack benchmark on 880 Cluster Compute instances (7040 cores) and measured the overall performance at 41.82 TeraFLOPS using Intel's MPI (Message Passing Interface) and MKL (Math Kernel Library) libraries, along with their compiler suite. This result places us at position 146 on the Top500 list of supercomputers."
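AWS didn't publish a theoretical peak for the cluster, but a rough estimate is possible from the article's own figures. Assuming the Xeon X5570's standard 2.93 GHz base clock and 4 double-precision FLOPs per core per cycle (typical Nehalem numbers, not stated in the article, and ignoring Turbo Boost), the Linpack result lands at roughly half of peak:

```python
# Back-of-the-envelope Linpack efficiency for the 880-instance run.
# Assumptions NOT in the article: 2.93 GHz base clock for the X5570 and
# 4 double-precision FLOPs per core per cycle (typical Nehalem figures).

CORES = 7040                # 880 instances x 8 cores, from the article
CLOCK_HZ = 2.93e9           # assumed X5570 base clock
FLOPS_PER_CYCLE = 4         # assumed DP FLOPs per core per cycle
MEASURED_TFLOPS = 41.82     # Linpack result reported by AWS

peak_tflops = CORES * CLOCK_HZ * FLOPS_PER_CYCLE / 1e12
efficiency = MEASURED_TFLOPS / peak_tflops
print(f"Estimated peak: {peak_tflops:.1f} TFLOPS, efficiency: {efficiency:.0%}")
# ~82.5 TFLOPS peak under these assumptions, for ~51% Linpack efficiency
```

An efficiency of around 50 percent would be low by dedicated-supercomputer standards, which is consistent with a virtualized, Ethernet-connected cluster rather than one built on a specialized interconnect.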

Overall, Amazon's cluster computing efforts fit in with its other large-scale efforts such as public data sets and services to popularize Hadoop, a framework for analyzing large amounts of data.



About

Larry Dignan is Editor in Chief of ZDNet and SmartPlanet as well as Editorial Director of ZDNet's sister site TechRepublic.
