
Amazon Web Services wants to run your world

The rapid growth of cloud services like AWS will have a big impact on hardware, in particular on servers and other gear in data centers, but also on how we use PCs and mobile devices. Here are my takeaways from re:Invent.
Written by John Morris, Contributor

Strictly speaking, Amazon’s annual AWS re:Invent conference, which I attended last week in Las Vegas, isn’t a hardware show. Rather, it is about the growing collection of infrastructure and services provided by Amazon Web Services and its partners. But the growth of public cloud services like AWS will have a big impact on hardware: not only on servers, storage and networking gear in data centers, but also on PCs and mobile devices through Desktop-as-a-Service and the deployment of mobile apps.

Here are some of my takeaways from the conference:

AWS is even bigger than we thought

Amazon won’t say exactly how large AWS is, but it is clearly growing fast, and last week the company gave us a few clues. AWS has more than 1 million active customers, the use of its servers (measured in EC2 instance hours per week) is doubling every year, and the use of S3 storage is growing even faster, with the number of petabytes transferred per week up 132 percent year-on-year.

There are 11 AWS Regions worldwide with a total of 28 Availability Zones. Each Availability Zone has one or more data centers, and some have as many as six. Each data center has at least 50,000 servers and "often over 80,000." Even a conservative estimate (an average of two data centers per Availability Zone, and the middle of the server range, 65,000) works out to 28 × 2 × 65,000, or 3,640,000 servers.

Amazon now adds enough server capacity every day to support its entire IT operations of a decade ago, when Amazon.com was a $7 billion company. Today Amazon.com is a $74 billion business, most of which is still retail sales, but AWS chief Andy Jassy said AWS could eventually be Amazon’s biggest business. He also said Amazon would spend what it takes to continue to grow the AWS business.

The end of the cloud price wars?

The cost of delivering Infrastructure-as-a-Service declines over time thanks to Moore’s Law: servers deliver more performance per dollar and per watt, and drives pack more gigabytes per dollar. Amazon has been passing these savings along to customers; it has previously said it has cut AWS prices 46 times since the service launched in 2006. This has forced Microsoft, Google and IBM (SoftLayer) to respond with their own IaaS price cuts. (Others, like Rackspace, are trying to resist this race to the bottom by offering “managed cloud services.”)

Microsoft’s policy is to match Amazon on pricing, and last spring at its Build conference the company slashed prices on compute by up to 35 percent and storage by up to 65 percent. It followed up in October with cuts to a long list of Azure services. Google cut prices on Compute Engine servers by about 10 percent in October and announced its latest round of price cuts in November at its Google Cloud Platform Live event.

The big surprise at AWS re:Invent was that Amazon did not announce any new price cuts of its own. This may be a sign that the price cuts are slowing down. But it also seems that Amazon is increasingly moving beyond infrastructure to building new services, many of them free, that allow customers to get more value out of their existing compute and storage.

Infrastructure is still important...

During the conference, Amazon made two big infrastructure announcements. The first was a new compute-optimized server instance based on Intel’s 22nm Xeon E5 v3 processor (part of the Haswell family) that Amazon says will deliver the highest level of processor performance on EC2. The chip, the Xeon E5-2666 v3, is customized for Amazon and has a base frequency of 2.9GHz and turbo speeds as high as 3.5GHz.

Diane Bryant, the head of Intel’s Data Center Group, showed up at one of the keynotes but didn’t provide many details. Judging by the model number, the chip falls somewhere between the E5-2660 (a 2.6GHz, 10-core chip with turbo speeds up to 3.3GHz and a 105-watt TDP) and the E5-2667 (a 3.2GHz, 8-core chip with turbo speeds up to 3.6GHz and a 135-watt TDP).

Intel is making a big push to customize chips for large customers and specific workloads; in June Bryant said that Intel created 15 custom Xeon processors in 2013 for customers such as eBay and Facebook, and that it would produce twice as many custom chips this year. The new EC2 instance family, called C4, comes in five configurations ranging from two to 36 virtual CPU cores and from 3.75GB to 60GB of memory.
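
For illustration, here is a minimal sketch of launching one of the new C4 instances with the AWS SDK for JavaScript; the region is an assumption and the AMI ID is a placeholder.

```typescript
import AWS from 'aws-sdk';

const ec2 = new AWS.EC2({ region: 'us-east-1' });

ec2.runInstances(
  {
    ImageId: 'ami-xxxxxxxx',  // placeholder: any Linux AMI in the region
    InstanceType: 'c4.large', // smallest C4 size: 2 vCPUs, 3.75GB of memory
    MinCount: 1,
    MaxCount: 1,
  },
  (err, data) => {
    if (err) {
      console.error('Launch failed:', err);
    } else {
      console.log('Launched instance:', data.Instances![0].InstanceId);
    }
  }
);
```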

Last month Microsoft announced its G-Series Azure server instances, which are also based on the Xeon E5 v3 Haswell processors. Those instances range from two vCPUs, 28GB of memory and 406GB of solid-state storage to 32 vCPUs, 448GB of memory and 6.5 terabytes of SSD storage.

Amazon also announced larger and faster EBS (Elastic Block Store) volumes. General Purpose SSD volumes now scale up to 16TB, 10,000 IOPS and 160MBps of throughput, while the high-performance Provisioned IOPS SSD volumes support up to 16TB, 20,000 IOPS and 320MBps.
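
As a rough sketch of what the new limits look like in practice (assuming the AWS SDK for JavaScript; the Availability Zone is a placeholder), here are both volume types created at their maximums:

```typescript
import AWS from 'aws-sdk';

const ec2 = new AWS.EC2({ region: 'us-east-1' });

// General Purpose SSD (gp2) volume at the new 16TB ceiling.
ec2.createVolume(
  { AvailabilityZone: 'us-east-1a', VolumeType: 'gp2', Size: 16384 }, // Size is in GiB
  (err, data) => console.log(err || data.VolumeId)
);

// Provisioned IOPS SSD (io1) volume with the new 20,000 IOPS maximum.
ec2.createVolume(
  { AvailabilityZone: 'us-east-1a', VolumeType: 'io1', Size: 16384, Iops: 20000 },
  (err, data) => console.log(err || data.VolumeId)
);
```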

...but Amazon is moving up the stack

One of the themes of re:Invent 2014 was the “breadth and depth of Amazon services.” The company announced several new products aimed at developers and enterprises—many based on tools that Amazon uses internally to manage its own sites and services.

For developers, the new tools include CodeDeploy, which makes it easier to release code to any number of server instances; CodePipeline to automate and model software releases; and CodeCommit, a revision control service that hosts Git source code repositories.
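
To give a flavor of the CodeDeploy model (a minimal sketch; the application name, deployment group and S3 bundle below are all hypothetical), a deployment pushes a packaged revision out to a fleet of instances:

```typescript
import AWS from 'aws-sdk';

const codedeploy = new AWS.CodeDeploy({ region: 'us-east-1' });

// Deploy a zipped revision from S3 to every instance in a deployment group.
codedeploy.createDeployment(
  {
    applicationName: 'my-web-app',           // hypothetical application
    deploymentGroupName: 'production-fleet', // hypothetical group of instances
    revision: {
      revisionType: 'S3',
      s3Location: { bucket: 'my-releases', key: 'my-web-app-1.2.zip', bundleType: 'zip' },
    },
  },
  (err, data) => console.log(err || data.deploymentId)
);
```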

For enterprises, Amazon announced a Key Management Service, a centralized manager for encryption keys for applications and services running in the cloud and on-premises; AWS Config, a tool that discovers all of your AWS resources and records their configurations and the relationships between them; and the AWS Service Catalog, which makes it easier for IT operations and end users to provision AWS resources and applications.
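
As a sketch of how an application would use the Key Management Service (the key alias here is hypothetical), the app asks KMS to encrypt a small secret under a centrally managed master key rather than handling the key material itself:

```typescript
import AWS from 'aws-sdk';

const kms = new AWS.KMS({ region: 'us-east-1' });

// Encrypt a small secret under a KMS-managed master key.
kms.encrypt(
  {
    KeyId: 'alias/my-app-key',                 // hypothetical key alias
    Plaintext: Buffer.from('db-password-123'), // up to 4KB of data per call
  },
  (err, data) => {
    if (err) console.error(err);
    else console.log('Ciphertext bytes:', (data.CiphertextBlob as Buffer).length);
  }
);
```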

One of the more interesting new services is Lambda, which CTO Werner Vogels described as an “event-driven compute service.” What this means is that you write a function (in Node.js) and upload it to AWS Lambda, and when a certain event occurs—a new object is uploaded to an S3 storage bucket, new data streams into Kinesis or a table is updated in DynamoDB—AWS automatically executes that function.

It isn’t that you can’t do this today; the difference is that with Lambda you don’t need to write full applications and provision EC2 server instances to run them. Lambda automatically launches the necessary compute resources, executes the functions, and then shuts them down when no longer needed. Rather than paying for servers, you pay per request and for compute time billed in 100-millisecond increments (Vogels mentioned $0.20 per million requests). There is also a free tier, with up to 3.2 million seconds of compute time and up to 1 million requests per month, to entice customers to try out Lambda.
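
For a sense of the programming model, here is a minimal sketch of the kind of Node.js function Lambda would run when an object lands in an S3 bucket; the event fields follow the documented S3 notification format, and the processing step is hypothetical:

```typescript
// Handler invoked by Lambda for each S3 "object created" event.
export const handler = (event: any, context: any): void => {
  for (const record of event.Records) {
    const bucket = record.s3.bucket.name;
    const key = record.s3.object.key;
    console.log(`New object: s3://${bucket}/${key}`);
    // ...hypothetical work: resize an image, index a document, etc.
  }
  context.succeed(); // signal successful completion to Lambda
};
```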

The container store

This one surprised no one, but the announcement of the EC2 Container Service (ECS) for managing Docker application containers still generated the loudest applause. Containers wrap up applications so that, in theory, you can build and test an app on a laptop, and then shift it among servers both on-premises and in the cloud without provisioning new computing resources or worrying about dependencies.

Docker has only been around for 18 months, but it has already passed 50 million downloads, and there are 700 "non-Docker employees" contributing code to the Docker Hub. That ecosystem means it is now possible to take virtually any Linux app, wrap it up in a container within seconds, and run it on any server, according to Docker CEO Ben Golub. Support for Docker isn’t new; many customers are already running apps in containers on AWS. Back in March, Amazon added support for running Linux applications in Docker containers, and followed up a month later with Docker support in Elastic Beanstalk, which automates much of the provisioning, scaling and load-balancing.

What ECS adds is the ability to start, stop, and manage containers across multiple EC2 clusters. Google is generally considered to have an edge here because its own infrastructure is built on containers and it developed Kubernetes, a popular, open-source program for managing Docker containers that is also supported by CoreOS, IBM, Mesosphere, Microsoft, Red Hat and others. Microsoft announced support for Docker containers on Linux virtual machines in June. Last month, the two companies announced a deeper partnership including Windows Server containers, support for the Docker Open Orchestration APIs and integration of Docker Hub on Azure. Canonical, Cloud Foundry, IBM, OpenStack, Rackspace and Red Hat also support Docker containers.
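
As a sketch of the ECS workflow (the cluster, task family and container image names are hypothetical), you register a task definition describing the container, then ask the service to run it on a cluster:

```typescript
import AWS from 'aws-sdk';

const ecs = new AWS.ECS({ region: 'us-east-1' });

// Describe the container: image, resources and port mapping.
ecs.registerTaskDefinition(
  {
    family: 'web-app', // hypothetical task family
    containerDefinitions: [
      {
        name: 'web',
        image: 'my-org/web-app:latest', // hypothetical Docker Hub image
        cpu: 256,
        memory: 512,
        portMappings: [{ containerPort: 80, hostPort: 80 }],
      },
    ],
  },
  (err) => {
    if (err) return console.error(err);
    // Run one copy of the task on an existing cluster.
    ecs.runTask(
      { cluster: 'default', taskDefinition: 'web-app', count: 1 },
      (err2, data) => console.log(err2 || data.tasks)
    );
  }
);
```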

Developers love the concept of containers, and many technology companies are already using them, but there still seems to be a lot of debate about how well they will work for other companies in real production environments.

It’s the ecosystem, stupid

While Amazon itself is moving up the stack, blurring the lines between IaaS and PaaS (Platform-as-a-Service), it is also attracting a growing stable of software partners and system integrators that will be critical to the expansion of public cloud services. The conference drew 13,500 people and hundreds of sponsors, many of which were also exhibiting at the show. Amazon’s Jassy said the AWS Marketplace now has 1,900 third-party products, and that customers ran 70 million hours of software from the Marketplace over the past month. During the show Amazon also recognized 28 top consulting partners, which essentially provide the managed services on top of AWS infrastructure that make it feasible for regular businesses to use it.

One of them, Capgemini, announced an ERP solution specifically for oil & gas companies, called EnergyPath, built on SAP software and AWS infrastructure. Robert Stephens, who heads up Capgemini’s oil and gas business in North America, said that by using AWS, Capgemini was able to roll out the entire system for its first customer, Excelerate Energy, a liquefied natural gas transportation and regasification provider, in just four months.

Growing enterprise business

No one asks whether AWS is a real business anymore. The company passed that point long ago, with numerous technology companies, from start-ups to established players such as Netflix, relying entirely on AWS. The question is how quickly, and to what degree, enterprises will shift applications or entire data centers from on-premises to AWS.

Amazon and its partners say that the $600 million CIA contract has helped convince businesses that AWS is secure (even though Amazon is building a private version of AWS inside the agency’s data centers). Now enterprise adoption seems to be picking up.

Amazon says enterprise adoption follows a pattern. The first wave is typically development and testing of new apps, along with the deployment of entirely new applications and services. The second wave includes Web sites and “digital transformation,” Big Data and analytics, and mobile applications. The third and final wave is business-critical applications.

The phrase “all-in” kept coming up throughout the week. Software vendors (Acquia, Emdeon, IMS Health, Informatica, Pegasystems and Splunk) are putting their entire cloud solutions on AWS. And companies like Conde Nast, Hess, News Corp. and The Weather Company are shifting entire data centers to AWS, typically when leases are up or when they are facing a costly hardware refresh.

Amazon is being realistic about things — execs say that they know certain legacy applications will continue to run in on-premises data centers for years to come — but they still believe that over the long run few organizations will have their own data centers.
