Half of the story of hyperscale is about the construction of vast, new infrastructure where there was none before. The other half is about the elimination of walls and barriers that would make one think the world was smaller than it actually was.
"[If] a workload goes to Amazon, you lose," VMware CEO Pat Gelsinger was quoted as telling his company's Partner Exchange Conference in 2013. "We want to own corporate workload, now and forever." A veteran Intel executive, Gelsinger was summarizing the perceived state of cloud computing five years ago: Amazon was working to relocate the axis around which data centers revolved, to its own public cloud space. VMware perceived itself as the center of that universe, and sought to defend and fortify its own position.
The year 2017, with respect to the data centers and cloud platforms that form the infrastructure of modern IT, was about watching the workloads Gelsinger feared losing forever adjust their orbits around a new center -- and it was not Amazon.
Last November, we premiered Scale, a series of journeys deep inside the new territories of data centers and cloud platforms where today's applications and functionality are cultivated and housed. It's a world of information technology whose rules are just now being written, and whose leaders and followers are still being determined. Yet it's way too early in this series for us to embark upon a nostalgic retrospective of the times we shared.
If Scale had existed since last January, however, this recurring theme would already have come to light: The axis around which this new, hyperscale realm revolves is not a packaged computer or device, nor any branded software or application. It is the workload, as defined by its users in the enterprise as well as among consumers. If the company that lent its name to this publisher were to produce a slick, 250-page magazine that encapsulated the modern era of IT the way PC Magazine and MacUser did in the 1980s, it would be entitled "Workload."
The "VM" in "VMware" stands for virtual machine. This logical device became the first vehicle for organizations to deploy applications on cloud platforms. VMs gave rise to the demand for portability -- for deploying applications essentially anywhere. But VMware was smart to stake out controlling interests in the means by which VMs were deployed and managed, making vSphere the effective overseer of all inbound roads.
Docker was the first to render portability a commodity for data centers. In 2013, it devised a container mechanism for packaging workloads independently from the bulky operating system that sustained their VMs, so they could be managed by a Linux kernel rather than a hypervisor. It was a revolutionary idea for which a multitude of organizations take credit, but which Docker deserves recognition for actually doing.
Enterprises initially greeted Docker with more skepticism than we tend to acknowledge, even today. Almost immediately, the prevailing presumption was that a new market would consolidate around Docker containers, borrowing the template from the old PC world, where software markets formed around formats and compatibility. Naturally, the presumption continued, there would be competing formats, followed by a market shakeout and Docker, or some larger competitor, rising from everyone else's ashes.
But portability is only virtuous when it is ubiquitous. In 2015, Docker Inc. made a bold gamble, donating its entire container format to an open source initiative under the auspices of the Linux Foundation. The gamble behind what's now called the Open Container Initiative (OCI) was this: Docker could have gone ahead and attempted to establish a value-add around the container format, at the expense of portability -- which is the main reason anyone would want to adopt it. Instead, the company would give away the format, giving competitors no clear reason to compete with it, betting its future on the value of the mechanisms that deploy and maintain containerized workloads.
Wasting zero time (as indicated by the wonky dates on the formation announcement), Google drove the formation of a separate foundation: the Cloud Native Computing Foundation (CNCF). It had several of the same members as OCI, but would be devoted to those aspects of containerized workloads where members would seek a competitive edge: deployment and management. Immediately, the CNCF advanced the cause of Kubernetes, the orchestrator that married Google's concept of workload staging (originally code-named "Borg") with Docker containers. Red Hat would completely rebuild its OpenShift PaaS platform around Kubernetes, and CoreOS would shift the focus of its business efforts towards Tectonic, its commercial implementation of Kubernetes.
"What Kubernetes lets you do is declare the state of the data center that you want to exist," explained Brandon Philips, CTO of CoreOS, which produces the Tectonic commercial distribution of Kubernetes, speaking with ZDNet. "Really, everything starts from that user's declaration. I say I want five containers to exist, and then it's really up to the machines in that cluster to go about making that work happen."
The Kubernetes rout(e)
Like falling dominoes, most of the major stakeholders in containerization shifted their deployment and management strategies toward Kubernetes in 2017, including Docker Inc. itself:
In April, managed OpenStack provider Mirantis announced the integration of Kubernetes into its Cloud Platform 1.0, promising to move the focus of its staging and provisioning mechanism from an installer-centric model to a workload-centric model.
That same month, Microsoft continued its unprecedented shift toward an open (read: non-proprietary) staging platform for Azure, with the acquisition of Kubernetes-based container deployment platform maker Deis, followed soon afterward by the availability of Deis' technology on Azure. (Microsoft had already hired Kubernetes co-creator Brendan Burns away from Google in June 2016.)
In May at OpenStack Summit in Boston, most of the leaders of the OpenStack community came together to acknowledge Kubernetes as the de facto staging environment for containerized workloads on its private cloud model. Still to be settled was the issue of whether Kubernetes or OpenStack should reside at the lowest layer of the virtual infrastructure, or whether that matter could reasonably be settled on a customer-by-customer basis.
At about the same time, IBM launched Kubernetes support for its Cloud Container Service, promising customers a means to launch Docker containers seamlessly and immediately.
Oracle acknowledged Kubernetes in early June as the heart of its new container staging strategy, taking the unprecedented step (for Oracle) of revealing the new relationship at an open source conference. CoreOS, a competitor to Docker and the producer of a commercial Kubernetes environment called Tectonic since the platform's inception, would contribute its slimmed-down Container Linux operating system to the partnership, letting Oracle's own Linux be kicked to the curb.
In a rather astonishing move in mid-September, Mesosphere, the maker of a commercial implementation of the Apache Mesos platform called DC/OS (whose proponents had often taken philosophical stands against Kubernetes), announced a way to integrate the two platforms, at least in beta. Mesos' architecture allows for the provisioning of frameworks that enable other environments, including Hadoop and Apache Spark, to be spun up as mostly autonomous subsystems. Without presenting a clear use case for doing so, Mesosphere simply opened the gates for a similar Kubernetes framework, citing customer demand.
At last, the first sign that the ball game was being wrapped up: In mid-October, Docker Inc., now under a new CEO, announced it would begin offering Kubernetes as an equivalent staging platform alongside its own Docker Swarm, in its branded releases of the Docker platform. The company's marketing manager did maintain, in an interview with The New Stack, that Swarm could conceivably maintain a presence in a Kubernetes-driven environment, perhaps as a security layer.
In its next act in late October, Microsoft launched its preview of a dedicated Azure Container Service (AKS) where Kubernetes took not only center stage but also the middle letter. Soon afterwards, the company's marketing and Web site gave its Kubernetes platform precedence, shoving its DC/OS-based container staging platform to one side as an alternative.
From a market standpoint, all these developments point to Google having successfully seized control of all roads that lead to effective containerization in the data center. Containers are pointless without portability, and a staging and orchestration environment that is less than ubiquitous may be less than valuable to data center customers.
Microsoft was the first to demonstrate the effectiveness of capturing a market by devaluing its core asset, when it made Internet Explorer a free product, and forced Netscape to compete on the value of its platform's ancillary elements. Docker, displaying some cleverness, tried a similar tactic by opening up its container format and voiding the value of competition at that level.
But Docker had not yet built up its own value-add -- the bigger platform to which the devalued core would connect. Swarm, Docker's cluster-based orchestration platform, was immature. And even though Kubernetes may have been less mature at the time Docker made its move, it had its own compelling design element: the grouping of related containers into pods. Red Hat would help demonstrate that element first with OpenShift, which in turn made Kubernetes more attractive to open source contributors. When Google allowed the Linux Foundation to establish the CNCF almost immediately after Docker's move created the OCI, the fact that their memberships were composed mostly of the same people ensured that no design move on the OCI's part would catch the CNCF unaware. It guaranteed Kubernetes' place at the discussion table.
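The pod concept is easy to see in miniature: related containers -- an application plus a logging sidecar, say -- are grouped into one schedulable unit that shares a network identity and can share volumes. The sketch below renders the idea as a plain Python dict rather than a real manifest; the image names are invented for illustration.

```python
# Sketch of the pod design element: two related containers grouped into
# a single deployment unit. Image names are hypothetical examples.
pod = {
    "kind": "Pod",
    "metadata": {"name": "web-with-sidecar"},
    "spec": {
        "containers": [
            {"name": "web", "image": "example/web-app:1.0"},
            {"name": "log-shipper", "image": "example/log-shipper:1.0"},
        ],
        # Both containers can mount the same volume, so the sidecar can
        # read what the app writes -- the pattern pods make natural.
        "volumes": [{"name": "shared-logs", "emptyDir": {}}],
    },
}

# Everything in a pod is scheduled together, onto the same node,
# and lives and dies as one unit.
print([c["name"] for c in pod["spec"]["containers"]])
```

Grouping at this level is what lets an orchestrator treat "the application and its helpers" as one thing to place, scale, and replace -- a notion Swarm, at the time, did not offer.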
Although Kubernetes does not officially bear Google's trademark, the orchestrator shows signs not only of its Google heritage but of Google's continued parenting. The company's plan appears to be to steer the platform's further development in such a way that no containerization initiative can possibly steer around it.
Here are some ongoing developments that are certain to benefit Kubernetes in 2018:
The Container Storage Interface project, which recently received the backing of no less than Dell Technologies (parent company of Dell EMC), promises to provide microservices -- whose lifespans may be very brief -- with persistent connections to databases and storage volumes. Using the CSI interface, any API that opens such a persistent connection would be equally addressable through all three major orchestrators: Kubernetes, Mesos (DC/OS), and Swarm. It's currently stewarded by Moby, which was born from Docker's original open source initiative, but is now steering an independent course for itself. It's to Kubernetes' benefit that the topic of open source data plugins is no longer native to Docker's territory.
A project that interfaces Kubernetes directly to containers, bypassing the Docker Engine completely, is moving forward. Called CRI-O, it leverages Kubernetes' native Container Runtime Interface to enable the orchestrator to instantiate a container through its own native API (an addressable component called its "runtime"). Presently in many data centers, production environments utilize Docker Engine as an intermediary between the orchestrator and runtime; CRI-O gives Kubernetes (specifically) a way to kick Docker out of production environments altogether, relegating it to developers' workbenches only.
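The arrangement CRI-O enables can be sketched as a small runtime interface: the orchestrator issues a few lifecycle calls directly to any conforming runtime, with no Docker Engine in between. The method names below echo the real CRI's calls (RunPodSandbox, CreateContainer, StartContainer), but the class and its wiring are our own illustration, not CRI-O's actual code.

```python
# Illustrative sketch of the Container Runtime Interface (CRI) idea:
# the orchestrator talks to a minimal runtime API, so any conforming
# runtime (CRI-O, for instance) can stand in for Docker Engine.
# Method names echo real CRI calls; everything else is invented here.

class ToyRuntime:
    """A stand-in for a CRI-conforming container runtime."""
    def __init__(self):
        self.sandboxes = []
        self.containers = []

    def run_pod_sandbox(self, name):
        self.sandboxes.append(name)        # pod-level environment first
        return name

    def create_container(self, sandbox, image):
        cid = "%s/%s" % (sandbox, image)
        self.containers.append(cid)        # created, but not yet running
        return cid

    def start_container(self, cid):
        return "running:" + cid

# The orchestrator needs only this interface -- no engine between it
# and the runtime.
rt = ToyRuntime()
sandbox = rt.run_pod_sandbox("web-pod")
cid = rt.create_container(sandbox, "example/web:1.0")
print(rt.start_container(cid))  # running:web-pod/example/web:1.0
```

Because the orchestrator depends only on the interface, swapping one runtime for another becomes an operational decision rather than an architectural one -- which is precisely what relegates Docker Engine to an optional component.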
A hypervisor-based container environment called Kata will join Intel's Clear Containers project with another platform that competes with Docker, called Hyper. Kata would give data centers the means for both composing and deploying workloads, conceivably kicking Docker out altogether, while at the same time enabling co-existence between container-based workloads and first-generation virtual machines. The OpenStack Foundation is now backing this project, and it will utilize Kubernetes as its principal orchestrator.
There remains a huge challenge for Kubernetes going forward, however. Specifically, it could become so ubiquitous and so standardized that it becomes too difficult for any vendor or open source contributor to create competitive advantage around it.
F5 Networks' principal technical evangelist Lori MacVittie explained the situation this way: We are past the point in the evolution of networks and distributed systems, MacVittie believes, where proprietary methods or multi-protocol options are either viable or marketable. No one other than the organization that composed and packaged its own workload containers can, as VMware's Pat Gelsinger put it, "own the workload now and forever." On the other side of the scale, so to speak, is the danger of enforced conformance -- essentially, building a platform that is so readily commoditized that it becomes impossible for any participating vendor to gain a competitive edge.
"There's really only two options," MacVittie told ZDNet. "You could write a standard that is very well-defined, rigid, and fixed so that everyone has to conform to it in order to be interoperable. And in many cases, that is a very good thing. Or, you leave room for people to do things differently, and destroy interoperability. That's the problem with standards, when you start narrowing it down. There's only one way to build an IP packet today, because we basically narrowed it down to, 'This is how it works,' and if you fail to do that, you're the bad guy, not everybody else who's implemented the standard.
"Because things are changing so much, I think we have to leave it open," MacVittie continued. "We've got at least another five or six years before things settle down, but you've still got so much moving. Locking something down would stifle maturity and forward movement."
The jury may still be out as to whether this move will inevitably benefit Google most of all. Microsoft now has serious stakes in Kubernetes' success. With all of the big three cloud providers squarely in the Kubernetes space, each one will need to discover its own value-add -- that margin that makes its own cloud service more attractive than the others. Google may position its partnerships with Pivotal, VMware, and Cisco as giving its customers the virtue of choice.
But when you're a service provider, your revenue and your success come from steering your customers along the most profitable course, through the right turnstiles at the right time. When customers have alternate routes (as Apple knows better than anyone else), the market is trickier to control.
For the data center, Kubernetes' sudden prominence means this: In the past, a public cloud-based PaaS platform such as Heroku or the original Windows Azure was only as useful as the resources and languages it supported. With Kubernetes, everything one's platform should support is inside the container, not outside. This helps to equalize the service providers somewhat, since all of them now provide the same interface, if you will, for acquiring and hosting their customers' workloads. It also narrows the room that those providers have to compete with one another on service. Whenever a market becomes commoditized, its survivors are the ones that can compete on price.
Kubernetes may have plastered completely over Docker's message in 2017. But that's no guarantee for Google, or anyone else, in 2018.