Up until very recently, the Amazon AWS cloud was very much its own protected, gated community. While much of the work in modernizing the deployment of applications in the data center was being shouldered by the open source community, especially with Kubernetes, AWS seemed keen to observe the proceedings from a safe distance, from behind its fortified castle walls.
With Wednesday's joint announcement from VMware and AWS of two major commitments on data center infrastructure, those walls appear to have come a-tumblin' down.
- First, AWS' ambitious Outposts venture, allowing for Amazon-branded server hardware to host its cloud services on-premises, will also enable management of that hardware through VMware Cloud.
- Second, VMware Cloud Foundation on EC2 will allow existing VMware customer data centers deploying containerized apps with Kubernetes to utilize AWS-based facilities, including hosting and storage.
Why move VMware Cloud from on-premises to AWS, then on-premises again?
At one level, the principal feature of Wednesday's announcement seems somewhat redundant. After all, back in October 2016, the two companies' initial agreement was intended to extend data centers' existing boundaries into AWS public cloud territory, using VMware's NSX network virtualization system as the bridge builder. In other words, it became "VMware Cloud" when resources moved from on-premises to the public cloud.
The difference here isn't obvious at first, but it's very significant. Unlike the customers targeted by the 2016 announcement, other businesses built their data centers in Amazon's cloud to begin with. Outposts gives those customers an opportunity to build new infrastructure resources internally, without abandoning their investment in applications built using AWS' exclusive resources, such as S3 storage and RDS databases. "Internally" in this context also includes the prospect of co-location facilities, such as those managed by Digital Realty and Equinix. This is actually quite likely for an organization that's acquiring its own infrastructure for the first time.
That said, VMware believes the Outposts partnership will appeal more directly to its long-time vSphere customer base.
"If a customer has standardized on vSphere and VMware in their data centers, and they're running mission-critical workloads," explained Mark Lohmeyer, VMware's senior vice president and general manager for cloud platforms, in a press conference Wednesday morning, "and they've got teams that have built up experience around that... the VMware Cloud on AWS Outposts solution is going to be very attractive to that class of customers. It gives them complete compatibility and operational consistency with everything they're already doing elsewhere in their data center. But it gives them the additional benefit that it's delivered by VMware as a cloud service, so it gets them out of some of the pain and complexity of managing the lifecycle of that infrastructure."
Who's responsible for supporting VMware Cloud on AWS Outposts?
Although Wednesday's announcement was a joint one with Amazon AWS, VMware Cloud on AWS Outposts will be managed, delivered, and supported by VMware. The Outposts hardware, and the AWS resources hosted on that hardware, will be managed and supported by AWS.
This is a critically important point for enterprise customers, who typically demand a single point of contact for service. When Red Hat and Microsoft forged an alliance in 2015 making RHEL available on the Azure platform, and in subsequent extensions of that partnership, they made it a point to create a single point of contact for supporting customers hosting Linux and OpenShift applications on Azure. That support group is jointly staffed with personnel from both companies.
By contrast, there appears to be a clear demarcation of where VMware's territory ends and AWS' begins.
What makes VMware Cloud Foundation on EC2 significant on its own?
Elastic Compute Cloud (EC2) is AWS' means of making public cloud resources available to enterprises in instance sizes that scale to match what customers need to consume. It actually took AWS some time before it was ready to make a Kubernetes environment, based on EC2, generally available, opening up its own EKS service last June.
VMware Cloud Foundation, announced in August 2016, is that company's system for enterprises building their own data center infrastructure using a cloud model. By that, we mean building a platform that enables self-service provisioning, workload portability, and, to a reasonable extent, management automation. At its base is NSX, a networking layer that makes all resources addressable in a single pool, regardless of their physical location. In this system, it is NSX that provides the "cloudiness" of the hybrid cloud.
Prior to Cloud Foundation, VMware's plans for automating containerized applications involved changing the format of the container itself. But with Cloud Foundation, VMware became capable of deploying full-scale Kubernetes orchestration using unaltered containers. In such a system, each container has an exclusive address in the network. Through NSX, wherever that container exists, if it has such an address, it's part of the resource pool that Cloud Foundation can manage.
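The flat, address-based pool described above can be illustrated with a small sketch. This is a hypothetical toy model, not VMware's API: it simply shows how a manager that keys every workload by its overlay network address can treat containers uniformly, whether they run on-premises or in the public cloud.

```python
# Hypothetical illustration of a flat resource pool keyed by network
# address -- the concept behind NSX-style addressing, not actual NSX code.
class ResourcePool:
    def __init__(self):
        # address -> (physical location, workload name)
        self.members = {}

    def register(self, address, location, workload):
        # Any container with an address joins the pool, wherever it runs.
        self.members[address] = (location, workload)

    def locate(self, address):
        # The manager resolves a workload the same way regardless of
        # whether it lives on-premises or in the public cloud.
        return self.members.get(address)

pool = ResourcePool()
pool.register("10.244.1.7", "on-premises", "web-frontend")
pool.register("10.244.2.3", "aws-ec2", "web-frontend")

print(pool.locate("10.244.2.3"))  # ('aws-ec2', 'web-frontend')
```

The point of the sketch is that location becomes metadata rather than a management boundary: both instances of `web-frontend` are addressed, and therefore managed, identically.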
Extending Cloud Foundation into AWS territory (which now includes the public cloud as well as Outposts) makes it feasible for a Kubernetes-orchestrated application based on-premises to expand into the public cloud if and when necessary, and then exit the public cloud when demand subsides. The maintenance and lifecycle management of that application remain centered around vSphere, which many data centers have come to consider their functional home base.
As Lohmeyer told reporters, workloads already engineered for AWS' EC2 platform will become manageable within Cloud Foundation.
"We will also provide tools and APIs that allow third-party data protection, backup, and restore vendors," said Lohmeyer, "to integrate with the storage capabilities that are inherent in EBS [Elastic Block Store] and AWS EC2-based workloads."
As VMware engineers have confided in me over the past three years, this bridging of the gap between AWS' public infrastructure and VMware's hybrid infrastructure has been no less difficult than a moon launch. Indeed, the obstacles to getting this accomplished, from their telling of the story, seem far less political than technical -- if there were a way to do this earlier, it would have been done.
Is there a difference between VMware's container platforms for AWS and Google Cloud?
Much less of a difference than there was before. VMware's partnership with Google, announced in August 2017, involved sister company Pivotal, which is the commercial steward for the open source Cloud Foundry application development platform. Developers in that community had created a deployment mechanism called BOSH that automated, and thus dramatically simplified, the process of deploying applications from little more than source code. They built a Kubernetes-oriented version of that mechanism, called Kubo, which became the basis for Pivotal Container Service (PKS, with the "K" actually standing for "Kubernetes"). The Google Cloud partnership was based around PKS.
It's Kubo that has been, up to now, the distinguishing factor when comparing all other public cloud-based Kubernetes platforms. But in VMware's press conference Wednesday, Lohmeyer divulged that Cloud Foundation on EC2 will actually enable PKS on Amazon AWS for the first time.
"We will be supporting VMware PKS and VMware Cloud PKS on top of these solutions that we announced today, with AWS," said Lohmeyer. "So we will be able to give that consistent, managed Kubernetes environment for developers, but with that core VMware infrastructure powering underneath."
At no time did VMware ever give developers or reporters the idea that its PKS deal with Google Cloud would be an exclusive one. But Google had portrayed that deal as emerging from an engineering partnership -- one at least as tight as the partnership that led to VMware Cloud Foundation on EC2 this week. If Google had a case for competitive advantage in the vSphere-based hybrid cloud, it may have just evaporated.