VMware finally decides Kubernetes and vSphere should share a room

The road is being paved for the world's leading enterprise virtualization platform to transition to a system that not only runs Kubernetes, but runs on it. But it's not the first road VMware has paved for itself.
Written by Scott Fulton III, Contributor

Suppose Kubernetes were someone's proprietary, commercial software platform -- hypothetically speaking, something created internally at a startup software company and then delivered to the data center community in a shrink-wrapped box with a minimum 50-user license fee. Now suppose, again hypothetically, that startup were acquired by VMware. The first product to emerge from that acquisition probably wouldn't look much different from the first glimpse of "Project Pacific" given to attendees at VMworld 2019 in San Francisco.

VMware CEO Pat Gelsinger embracing Kubernetes co-creator Joe Beda at VMworld 2019. (Image: VMware)

Granted, the modern state of Kubernetes is very much the product of the type of innovation that can only be nurtured by the open-source development community. But take even the briefest tour of VMworld, and you'd probably run into Joe Beda and Craig McLuckie -- two of Kubernetes' three acclaimed creators from Google -- now engineers employed by VMware. And the star of this year's show was the evolved form of vSphere, the platform that hosts a solid majority of the world's data center workloads, now in mid-transformation into a Kubernetes distribution in its own right. One day soon, you would have heard several times, vSphere would sport custom resource extensions that would enable its own Kubernetes engine to continue managing the virtual machines vSphere has always managed.

Wait, Scott, say that again? Instead of extending vSphere to incorporate Kubernetes, VMware is re-architecting vSphere around its own Kubernetes, extending Kubernetes to behave and perform like vSphere.

Kubernetes may yet become history's most successful open-source initiative, surpassing even Linux. Yes, VMware does not own Kubernetes, nor can it. But if vSphere achieves this goal, will that fact even matter to enterprises using vSphere?

If Bill Gates and Steve Ballmer were still running Microsoft, they would be bowing at VMware CEO Pat Gelsinger's feet.

"We don't see ourselves being in what you might call the 'application platform space,'" VMware vice president and CTO Kit Colbert declared at this same show just three years earlier. If that were indeed the case, someone has certainly dragged it there.

"What if we could manage virtual machines using the Kubernetes interface?" asked Colbert rhetorically last September 2.

"There are going to be more apps provisioned in the next five years than there were in the past 40," he told his audience. "It's pretty astonishing, right? We're talking about hundreds of millions of workloads. And these workloads are going to look very different than they traditionally have. There's going to be many more of them, smaller, more distributed, more data-centric, using all sorts of new capabilities such as AI. The nature of the app is fundamentally changing."

Not four minutes into his presentation, Colbert presented a slide clearly depicting vSphere as an applications platform, not excluded from the application platform space but rather owning it. The pivot was complete.

Embrace or be embraced

A containerized platform can run in any data center and has always been able to. A great many enterprises have deployed such platforms under the protected enclosure of a first-generation virtual machine (VM). VMware has helped many a customer run Kubernetes (or Docker Swarm, or whatever orchestrator) inside a VM, managed by vSphere. If there's a security issue with containers, it's largely contained by the isolation layer between their collective VM and the hypervisor.

Why isn't that enough? Why isn't this enough co-existence to suit software developers? Here's what we've heard from people who've experienced the answers for themselves:

  • The performance advantage is lost. The speed gains organizations have seen from deploying a container platform as their base infrastructure are whittled down somewhat when the hosting of that platform is delegated to a hypervisor.
  • You can't build microservices with this setup. In a full-scale microservices environment, containers representing instances of functions can be scaled up or down as traffic and demand allow. This level of scalability requires a very functional and adaptable network, which is hard to achieve in a situation where an overlay rests atop an overlay atop the network layer.
  • There's no good way to automate software deployment and lifecycle management. One of the big advantages of containerized applications and services is how easily they can be deployed and managed, using platforms that are geared for building container images directly around working source code. This is the system of continuous integration / continuous delivery (or "deployment," in either case CI/CD) you may have read about.

In 2014, in response to a Gartner analyst conference session questioning whether VMs constituted a dying technology, VMware CTO Kit Colbert penned a company blog post arguing the benefits of running a container environment such as Docker inside a VM. Containers are not at all bad, Colbert asserted, but the environments that managed them at the time were unproven. "Running containers inside VMs," he wrote, "brings all of the well-known VM benefits: the proven isolation and security properties. . . plus mobility, dynamic virtual networking, software-defined storage, and the massive ecosystem of third-party tools built on top of VMs."

Colbert didn't deny that enterprises would be running containers within their digital infrastructure; the only issue was how.

The following year, as the various contributors to the OpenStack hybrid cloud platform expressed their trepidation that container platforms like Docker and Kubernetes could eat their lunch, VMware curiously dipped its toes in these new waters. It began by creating its own Linux distribution called Photon, which some saw as an opportunity for VMware to later inject some kind of vSphere-oriented agent into Linux containers.

The first incarnation of VMware's co-existence plan for containers, circa 2015. (Image: VMware)

Something like that happened right away, with the introduction of what the company called vSphere Integrated Containers (VIC). Strangely, any technical explanation of VIC depended upon which VMware engineer you asked. As the project's original lead engineer told me for The New Stack, VMware's first entry into containerization was a system for controlling both first-generation virtual machine images and Docker container images, through actually re-engineering Docker to recognize the former as though they were the latter. Containers looked like VMs to vSphere because the VIC system added "just enough VM" (jeVM) to the component to pull off the disguise (the discrepancy among engineers was whether jeVM was added to the container image or near it, which was actually a big deal). Developers using Docker, and theoretically operators using vSphere, would not notice much difference, if any. Kubernetes was not on the VIC team's radar at that point.

As VMware engineers and product managers acknowledged at the time, though, the resulting system wasn't quite geared for microservices -- the optimum environment for containerization, where monolithic applications are replaced with dynamic, scalable quantities of distributed functions.

VMware CTO Ray O'Farrell at VMworld 2016. (Image: Scott Fulton)

Elsewhere on the VMware campus, at about the same time, engineers were building the second effort at integration. This was Project Photon, which would build a separate management platform for containers -- not vSphere or a vSphere add-on -- whose host would be the traditional hypervisor, not the Linux operating system. Theoretically, this would resolve one of data center veterans' most credible objections to container environments of the time: the latent ability for one container hosted on a Linux OS to access the file systems of all the other containers hosted by that same OS. And as another theory would assert, a Project Photon environment could be orchestrated by Kubernetes, because container images would not differ in structure or format from those it would normally expect. 

Both VIC and Photon shared the same underlying thesis: No enterprise has a viable business reason for wanting to give up its VM platform. Certainly, VMware, which relies on such a platform for its livelihood, isn't anxious to invent a reason. But if a new and arguably better way to deploy applications were to capture new enterprise customers that VMware hadn't tapped yet, vSphere's growth could become stunted. So co-existence can't be the only virtue justifying a vSphere + Kubernetes hybrid platform, especially if the sum of these parts ends up being less than either whole on its own.

VMware CEO Pat Gelsinger at VMworld 2017. (Image: Scott Fulton)

In 2017, VMware adopted a different tack: It permitted its sister company at the time, Pivotal, to introduce its cloud-based Pivotal Container Service (PKS) on its VMworld stage. This was a containerized application deployment and management platform built around a different project that had been incubated for Cloud Foundry, originally called Kubo. The idea was to create an automation path for next-generation, "cloud-native" deployment -- a way to move from development to staging to production in a smoothly controlled fashion, all within Google Cloud. Already, VMware was speaking about Photon and VIC in the past tense, and about coexistence as a virtue that doesn't need to be confined to just one environment. Later, VMware would attach its own brand to PKS, and then earlier this year, acquire Pivotal outright.

Coexistence in this model would have PKS residing alongside vSphere, for what engineers were calling a "single pane of glass," but one stretched very wide to encompass two worlds.

The first way forward

"A few years ago, we had this insight that maybe Kubernetes could be more of the solution than we thought," explained Jared Rosoff, VMware's senior director of product management for Project Pacific, during a VMworld 2019 session last September 5.

VMware's latest Kubernetes play is actually a big leap back in the direction it originally started heading, like a sequel of a sequel that realized that successors should reflect their predecessors' visions at least a tad. Project Pacific is a reboot of the old favorite software vendor strategy, embrace and extend. Yes, it involved an acquisition -- mainly that of Kubernetes platform maker Heptio in November 2018, from which VMware obtained the services of Joe Beda. But that was a cooler, stealthier maneuver than any of the great platform plays of the commercial software era, such as Oracle's acquisition of Sun Microsystems a decade earlier.

What Rosoff appears to like best about Kubernetes is how it enables an emerging concept in the data center: one where the operator writes a script that declares the optimum state of the infrastructure, and the system does its best to accommodate that desired state. The popular phrase is infrastructure-as-code, though it's notable that Rosoff, like the greatest Greek navigators of history, steered clear of actually uttering that phrase.

"Kubernetes has this idea of desired state," explained Rosoff. "At its core, fundamentally, is a desired state control plane. And it's got a database, where I give it a document that says, 'This is the desired state of my system,' and there are controllers that hook into that database, and continually drive the infrastructure towards that desired state. That pattern is actually built into Kubernetes, as a generalized pattern."

It may not be obvious to everyone what Rosoff is talking about here, so let's go over this once again: A Kubernetes cluster is a loose assembly of servers, pooled together to make room for containers and the data they'll use. Back when software was installed at the base level of the server (the "bare metal"), the configuration of that server was obvious. If you needed it changed, you had someone take it apart with a screwdriver to add some DIMMs and a bigger hard drive. But a server cluster is a different beast. It may be part on-premises physical servers and part public cloud-based virtual servers. Its collective configuration is a bunch of variables. Conceivably, those variables could end up being set to whatever the sum total of every application's resource requirements happens to be.

In a more perfect world, however, that configuration is something an IT administrator may explicitly specify. Kubernetes looks for these patterns, and while one way of entering them is through the kind of browser-based portal (or, as AWS calls it, "wizard") where an administrator fills in forms, veteran Kubernetes operators (now that such people do exist) prefer typing declarations into a script and feeding them to a command line. Kubernetes' command-line tool is called kubectl (pronounced "koob · cuddle"), and it's the one component most noticeably missing from all the VMware engineers' discussions of coexistence up until this month.
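
In practice, the declarative workflow is short enough to memorize. Assuming a manifest file like the sketch above (the file name is made up for the example):

    # Hand the desired-state document to the control plane.
    kubectl apply -f web-frontend.yaml

    # Ask the control plane how reality currently compares.
    kubectl get deployments

    # Edit the manifest -- say, raise replicas from 3 to 5 -- then
    # re-apply it; Kubernetes computes the difference and reconciles.
    kubectl apply -f web-frontend.yaml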

Rosoff spelled out his team's vision of a single tool that looks and feels like the portal-based vSphere that IT operations specialists have come to know, but that also includes kubectl. It would work exactly like the tool that container orchestration engineers have come to rely upon, although by way of an extension mechanism that Kubernetes contributors, not VMware engineers, built into their own system, it would effectively orchestrate virtual machine-driven environments as well.
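
The extension mechanism Rosoff is referring to is Kubernetes' custom resource definitions, which let anyone teach the control plane new object types. As a hedged sketch of where that leads -- the API group and every field below are hypothetical stand-ins, not VMware's published schema -- a virtual machine might be declared this way:

    # Hypothetical custom resource treating a virtual machine as a
    # first-class Kubernetes object. A vendor-supplied controller would
    # reconcile it toward its declared state, just as the deployment
    # controller reconciles pods.
    apiVersion: vmware.example.com/v1alpha1   # hypothetical API group
    kind: VirtualMachine
    metadata:
      name: legacy-erp-vm
    spec:
      cpus: 4
      memoryGiB: 16
      template: centos-7-gold-image   # notionally, a vSphere VM template
      powerState: poweredOn           # desired state, not a one-time command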

A slide from VMware's Project Pacific presentation at VMworld 2019.

Today, ESXi is the underlying host for all VMware virtual machines. Several ESXi servers may be clustered together by VMware's vCenter, for what it calls a VC cluster. In the Project Pacific environment (still under development, thus the term "Project"), the control agent that Kubernetes would normally inject into each server node, called the kubelet, is injected into ESXi as a native, non-virtualized process. The result is what Rosoff called a spherelet -- a VC cluster counterpart to the kubelet in a Kubernetes cluster.

Kubernetes perceives each spherelet as a kubelet. So not only is vSphere given a broader insight into two worlds, but so is kubectl.
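
If each spherelet answers as a kubelet, then from kubectl's vantage point an ESXi host is just another node. An illustrative session -- the host names and output here are invented to convey the idea, not captured from Pacific -- might read:

    # List the nodes the supervisor cluster knows about. With spherelets
    # in place, the ESXi hosts themselves would report in as nodes.
    $ kubectl get nodes
    NAME           STATUS   ROLES    AGE   VERSION
    esxi-host-01   Ready    <none>   42d   v1.16.0
    esxi-host-02   Ready    <none>   42d   v1.16.0
    esxi-host-03   Ready    <none>   42d   v1.16.0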

"Never before has there been a direct interface into vSphere," Rosoff remarked, "that a developer could really meaningfully use to self-service access to resources. It was always some add-on that I had to deploy on top of vCenter, a layered system. What this means is, as a developer now, I can send a request directly to the system, and have it deploy resources."

What VIC had enabled as an add-on mechanism to a container environment, the product manager explained, Pacific moved directly into the kernel of its orchestrator mechanism, for what it calls a CRX. Like a VMX that runs a virtual machine on a hypervisor, the CRX runs a container on the same hypervisor.

An editor for a more engineering-minded journal would tell me I might have led with that.

This means that the same process isolation afforded to a VM, which is the key to its relative level of security, is attainable in the same way by a container.

Another slide from VMware's Project Pacific presentation at VMworld 2019.

The orchestrator that perceives the spherelets in ESXi, as well as elsewhere in the system, and that effectively stands up vSphere as a Kubernetes platform, is what Pacific calls the supervisor cluster. This distinguishes the orchestrator behind Pacific from any number of other orchestrators spun up by vSphere users for customer-facing applications, on a separate plane where infrastructure resources cannot be reached. At this higher level, Kubernetes-managed containers and traditional VMs are defined within their own namespaces. As Kubernetes was originally and intentionally engineered, a namespace is an abstract way to represent whatever it is the system orchestrates, containers being just one example. Now VMs represent another, and in a stunningly understated teaser of coming attractions, Rosoff's diagram leaves open a third namespace for components such as Microsoft SQL Server, the Apache Cassandra database, and the TensorFlow machine learning framework. Such components could be enabled through a Kubernetes mechanism called controllers.

"There's a huge catalog of operators out there," said Rosoff. "Databases and messaging and middleware. We've been testing operators with things like TensorFlow for AI and ML toolkits. All these things we can now run as control plane extensions of the supervisor. This is going to be a really powerful area to watch, because what it means is that VMware, the partner ecosystem, and even your own company can create these extensions that run natively in the control plane of vSphere, and get presented as first-class objects to your developers, and appear as first-class objects inside of vCenter."

Objects in mirror may be closer than they appear

Inside the confines of VMworld, with its vSphere and now spherelets, it may often seem as though the ESX infrastructure has spread in all directions into infinity. But something kicked VMware into high gear, returning it to the goal of full vSphere integration that it appeared to be steering clear of as recently as last year, when CEO Pat Gelsinger told his audience the best place to run Kubernetes was inside a VM.

But Jared Rosoff's invocation of the desired state mechanism, coupled with the conspicuous absence of the phrase "infrastructure-as-code," suggests a mysterious presence in that absence. This week in Seattle, a company you may have heard about once or twice here called HashiCorp held its own three-day conference. HashiCorp produces a kind of infrastructure orchestration system called Terraform. Its purpose is to let operators declare the optimum state of their data centers through scripts.

Typically, Terraform and vSphere have not been brought up together in analysts' conversations. They can, after all, work together, not just coexisting but collaborating, with vSphere serving as a resource provider for Terraform's provisioning system. But Terraform also interacts with Kubernetes, which means that HashiCorp is inhabiting a space in the data center that VMware would prefer Project Pacific occupy.

Last Monday, HashiCorp announced an extension to its partnership with Microsoft, enabling clusters featuring its Consul service mesh to be provisioned on Azure. It's the type and level of partnership win that makes one realize VMware is not entering an unsettled frontier with its latest, deeper push into Kubernetes.

Project Pacific may become almost everything VMware could have dreamed of for itself, had it envisioned acquiring Kubernetes outright four or five years earlier. But VMware will also have acquired something else along with it: relentless competition. It won't be able to contain that fact within an isolated layer of abstraction for very long.
