Half of the story of hyperscale is about the construction of vast, new infrastructure where there was none before. The other half is about the elimination of walls and barriers that would make one think the world was smaller than it actually was.
"[If] a workload goes to Amazon, you lose," VMware CEO Pat Gelsinger was quoted as telling his company's Partner Exchange Conference in 2013. "We want to own corporate workload, now and forever." A veteran Intel executive, Gelsinger was summarizing the perceived state of cloud computing five years ago: Amazon was working to relocate the axis around which data centers revolved, to its own public cloud space. VMware perceived itself as the center of that universe, and sought to defend and fortify its own position.
The year 2017, with respect to the data centers and cloud platforms that form the infrastructure of modern IT, was about watching those workloads Gelsinger was afraid of losing forever adjust their orbits around something else -- but not Amazon.
Last November, we premiered Scale, a series of journeys deep inside the new territories of data centers and cloud platforms where today's applications and functionality are cultivated and housed. It's a world of information technology whose rules are just now being written, and whose leaders and followers are still being determined. Yet it's way too early in this series for us to embark upon a nostalgic retrospective of the times we shared.
If Scale had existed since last January, however, this recurring theme would already have come to light: The axis around which this new, hyperscale realm revolves is not a packaged computer or device, nor any branded software or application. It is the workload, as defined by its users in the enterprise as well as among consumers. If the company that lent its name to this publisher were to produce a slick, 250-page magazine that encapsulated the modern era of IT the way PC Magazine and MacUser did in the 1980s, it would be entitled "Workload."
The "VM" in "VMware" stands for virtual machine. This logical device became the first vehicle for organizations to deploy applications on cloud platforms. VMs gave rise to the demand for portability -- for deploying applications essentially anywhere. But VMware was smart to stake out controlling interests in the means by which VMs were deployed and managed, making vSphere the effective overseer of all inbound roads.
Docker was the first to render portability a commodity for data centers. In 2013, it devised a container mechanism for packaging workloads independently from the bulky operating system that sustained their VMs, so they could be managed by a Linux kernel rather than a hypervisor. It was a revolutionary idea for which a multitude of organizations take credit, but which Docker deserves recognition for actually doing.
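To make the distinction concrete, here is a minimal sketch -- using Docker's Python SDK, and assuming a local Docker Engine is running, with an illustrative image and command -- of a workload launched as a container. The host's Linux kernel isolates the process directly; there is no guest operating system to boot and no hypervisor in the path.

```python
# Minimal sketch using Docker's Python SDK (pip install docker).
# Assumes a local Docker Engine; the image and command are illustrative.
import docker

client = docker.from_env()  # connect to the local Docker daemon

# Run a small workload inside a container. It shares the host's Linux
# kernel, so there is no guest OS to boot and no hypervisor in the path.
output = client.containers.run(
    image="alpine",                          # a few megabytes, not a full VM image
    command=["echo", "hello from a container"],
    remove=True,                             # clean up the container when it exits
)
print(output.decode().strip())
```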
Docker's arrival was initially met by enterprises with more skepticism than we tend to acknowledge, even today. Almost immediately, the prevailing presumption was that a new market would consolidate around Docker containers, borrowing the old PC-world template in which software markets formed around formats and compatibility. Naturally, the presumption continued, there would be competing formats, followed by a market shakeout, with Docker, or some larger competitor, rising from everyone else's ashes.
But portability is only virtuous when it is ubiquitous. In 2015, Docker Inc. made a bold gamble, donating its entire container format to an open source initiative under the auspices of the Linux Foundation. The gamble behind what's now called the Open Container Initiative (OCI) was this: Docker could have gone ahead and attempted to establish a value-add around the container format, at the expense of portability -- which is the main reason anyone would want to adopt it. Instead, the company would give away the format, giving competitors no clear reason to compete with it, betting its future on the value of the mechanisms that deploy and maintain containerized workloads.
Wasting zero time (as indicated by the wonky dates on the formation announcement), Google drove the formation of a separate foundation: the Cloud Native Computing Foundation (CNCF). It had several of the same members as OCI, but would be devoted to those aspects of containerized workloads where members would seek a competitive edge: deployment and management. Immediately, the CNCF advanced the cause of Kubernetes, the orchestrator that married the workload scheduling concepts of Google's internal "Borg" system with Docker containers. Red Hat would completely rebuild its OpenShift PaaS platform around Kubernetes, and CoreOS would shift the focus of its business efforts towards Tectonic, its commercial implementation of Kubernetes.
"What Kubernetes lets you do is declare the state of the data center that you want to exist," explained Brandon Philips, CTO of CoreOS, which produces the Tectonic commercial distribution of Kubernetes, speaking with ZDNet. "Really, everything starts from that user's declaration. I say I want five containers to exist, and then it's really up to the machines in that cluster to go about making that work happen."
Like falling dominoes, most of the major stakeholders in containerization shifted their deployment and management strategies toward Kubernetes in 2017, including Docker Inc. itself:
From a market standpoint, all these developments point to Google having successfully seized control of all roads that lead to effective containerization in the data center. Containers are pointless without portability, and a staging and orchestration environment that is less than ubiquitous may be less than valuable to data center customers.
Microsoft was the first to demonstrate the effectiveness of capturing a market by devaluing the asset at its core, when it made Internet Explorer a free product and forced Netscape to compete on the value of its platform's ancillary elements. Docker, displaying some cleverness, tried a similar tactic by opening up its container format and voiding the value of competition at that level.
But Docker had not yet built up its own value-add -- the bigger platform to which the devalued core would connect. Swarm, Docker's cluster-based orchestration platform, was immature. And even though Kubernetes may have been less mature at the time Docker made its move, it had its own compelling design element: the grouping of related containers into pods. Red Hat would help demonstrate that element first with OpenShift, which in turn made Kubernetes more attractive to open source contributors. When Google allowed the Linux Foundation to establish the CNCF almost immediately after Docker's move created the OCI, the fact that their memberships comprised mostly the same people ensured that no design move on the OCI's part would catch the CNCF unaware. It guaranteed Kubernetes' place at the discussion table.
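That pod concept is straightforward to picture: a pod groups containers that are scheduled together, share a network identity, and live and die as a unit. A hypothetical sketch, again using the official Kubernetes Python client with placeholder names and images, pairs an application container with a log-shipping sidecar:

```python
# Hypothetical sketch of a pod grouping related containers
# (official Kubernetes Python client; names and images are placeholders).
from kubernetes import client, config

config.load_kube_config()

# One pod, two containers: the application plus a log-shipping sidecar.
# They are scheduled together, share the pod's network, and share a lifecycle.
pod = client.V1Pod(
    api_version="v1",
    kind="Pod",
    metadata=client.V1ObjectMeta(name="web-with-sidecar"),
    spec=client.V1PodSpec(
        containers=[
            client.V1Container(name="web", image="nginx"),
            client.V1Container(name="log-shipper", image="fluentd"),
        ]
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```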
Although Kubernetes does not officially bear Google's trademark, the orchestrator bears signs not only of its Google heritage but also of its parenting. The company's plan appears to be to steer the platform's further development in such a way that no containerization initiative can possibly steer around it.
Here are some ongoing developments that are certain to benefit Kubernetes in 2018:
There remains a huge challenge for Kubernetes going forward, however. Specifically, it could become so ubiquitous and so standardized that it becomes too difficult for any vendor or open source contributor to create competitive advantage around it.
F5 Networks' principal technical evangelist Lori MacVittie explained the situation this way: We are past the point in the evolution of networks and distributed systems, MacVittie believes, where proprietary methods or multi-protocol options are either viable or marketable. No one other than the organization that composed and packaged its own workload containers can, as VMware's Pat Gelsinger put it, "own the workload now and forever." On the other side of the scale, so to speak, is the danger of enforced conformance -- essentially, building a platform that is so readily commoditized that it becomes impossible for any participating vendor to gain a competitive edge.
"There's really only two options," MacVittie told ZDNet. "You could write a standard that is very well-defined, rigid, and fixed so that everyone has to conform to it in order to be interoperable. And in many cases, that is a very good thing. Or, you leave room for people to do things differently, and destroy interoperability. That's the problem with standards, when you start narrowing it down. There's only one way to build an IP packet today, because we basically narrowed it down to, 'This is how it works,' and if you fail to do that, you're the bad guy, not everybody else who's implemented the standard.
"Because things are changing so much, I think we have to leave it open," MacVittie continued. "We've got at least another five or six years before things settle down, but you've still got so much moving. Locking something down would stifle maturity and forward movement."
The jury may still be out as to whether this move will inevitably benefit Google most of all. Microsoft now has serious stakes in Kubernetes' success. With all of the big three cloud providers squarely in the Kubernetes space, each one will need to discover its own value-add -- the margin that makes its own cloud service more attractive than the others. Google may present its partnerships with Pivotal and VMware, and also with Cisco, as providing its customers with the virtue of choice.
But whenever you're a service provider, your revenue and your success come from steering your customers along the most profitable course, through the right turnstiles at the right time. When customers have alternate routes (as Apple knows better than anyone else), the market is trickier to control.
For the data center, Kubernetes' sudden prominence means this: In the past, a public cloud-based PaaS platform such as Heroku or the original Windows Azure was only as useful as the resources and languages it supported. With Kubernetes, everything one's platform should support is inside the container, not outside. This helps to equalize the service providers somewhat, since all of them now provide the same interface, if you will, for acquiring and hosting their customers' workloads. It also narrows the room that those providers have to compete with one another on service. Whenever a market becomes commoditized, its survivors are the ones that can compete on price.
Kubernetes may have plastered completely over Docker's message in 2017. But that's no guarantee for Google, or anyone else, in 2018.
Searching for the perimeter in cloud security: From microservices to chaos
Where we encounter the problem of applying the most modern security model we have to the most capable data centers we run and discover the two may come from different eras.
Micro-fortresses everywhere: The cloud security model and the software-defined perimeter
Microservices and the invasion of the identity entities
Where we apply the solution of unimpeachable identity to the new realm of microservices, and grapple with the strange philosophical dichotomies we create in the process.
Machine learning and the spectre of the wrong solution
Where a beautiful relic of history stares us in the face to remind us that if we take all the time in the world to render our most perfect ideas, reality will leave us in the dust.