Containers: Fundamental to the cloud's evolution

Software developers are all excited about Docker and containerization technology. But what does it mean for your enterprise and your cloud adoption plans?

Unless you've been living under a rock, chances are you've heard your software developers talking about Docker and related containerization technologies. As a CXO, though, you may have some questions and might not fully understand just what Docker and containers actually are, or where they will add value to your existing virtual and cloud infrastructure.

To understand the significance of this technology, it helps to take a step back and look at how we got here in the first place.

At the dawn of modern IT, before client-server architecture and long before PCs and the Intel x86 platform, we simply had physical machines: mainframes and minicomputers with multi-user capabilities.

To use these resources effectively, and to charge back the customers sharing access to the system, the concept of "time-sharing" was invented. In many ways, this centralized computing paradigm was the genesis of what we call the modern cloud, 50 years later.

In the early 1970s, IBM introduced the VM/370 operating system. It allowed mainframe computers to be partitioned, or sliced up, in such a way that multiple instances of the OS, or virtual machines, could run side by side, each with its own separate environment and its own stack of applications and users.

VMs allowed mainframes to be more efficient and more easily managed.

Eventually, virtualization technology came to the Intel architecture and PCs. Originally it was used for compatibility, such as the DOS/Windows subsystem implemented in OS/2 2.0 in 1992.

In 1999 VMware introduced its first product, version 1.0 for Linux, primarily to provide a way for Windows and its applications to run on the Linux desktop, which at the time lacked many native apps. VMware was also marketed as a tool for software developers who wanted to code in isolation from their running environment: if the development VM crashed, it didn't take down the entire OS.


Along with the rapid growth of client-server computing in the 2000s came server sprawl: datacenters filled to the brim with servers.

With the introduction of the VMware ESX hypervisor, and then later Xen, Hyper-V and KVM, many physical x86 systems were consolidated into VMs so that the total cost of ownership of a datacenter could be drastically reduced.

Instead of thousands of physical servers, the entire datacenter footprint of a modern enterprise could be reduced to a few hundred or even dozens of virtualization hosts.

Hypervisors and virtual infrastructure are what drive datacenters, and public cloud IaaS (Infrastructure as a Service) offerings, today. But even with ongoing datacenter consolidation efforts and more powerful virtualization hosts, enterprise IT spend, whether in private datacenters or in provider-managed clouds, is still going up.

Today, IaaS infrastructure running within public clouds such as AWS and Microsoft Azure is billed on an hourly basis: if the VMs are alive and turned on, the clock is running. That's because you are paying for hourly use of virtual CPUs (vCPUs), each of which is a fraction of the virtualization host's physical CPU cores.

And VMs, despite all of their advantages, such as the ability to move systems and apps in place from physical to virtual without disrupting the fundamental architecture of an existing environment, can still be very resource-hungry, particularly for memory- and CPU-intensive workloads such as databases.

One must remember that a VM is an entire instance of an operating system, with its own kernel and device drivers, that has to contend with other VMs on a hypervisor for access to system resources.

At the hosted private cloud and hyperscale public cloud level, when you are talking about thousands or hundreds of thousands of virtual machines, many of which host workloads shifted away from on-premises, you start running into scalability issues.

So what's the long-term solution to VM sprawl? That solution is containerization.

Containerization, like VM technology, also originated on big-iron systems. Although it previously existed on FreeBSD as "jails", the first commercial implementation of containers was introduced as a feature of Sun's (now Oracle's) Solaris 10 UNIX operating system, as "Zones".

This technology eventually found its way to x86 Linux and Windows as Parallels (now Odin) Virtuozzo. The open-source branch for Linux, OpenVZ, is still being developed by the FOSS community, although much of the open-source containerization focus is now on LXC, which is used in tandem with Docker.

Containers are similar to VMs in that they provide an isolated, discrete space for applications to execute and storage to reside, and they give the appearance of an individual system, so that each container can have its own sysadmins and groups of users.

However, unlike a VM, in a container you are not running a complete instance or image of an operating system, with kernels, drivers, and shared libraries.

Instead, an entire stack of containers, whether dozens, hundreds, or even thousands, can run on top of a single instance of the host operating system, in a tiny fraction of the footprint of comparable VMs running the same applications.

Additional containers can be spawned in a fraction of a second, versus minutes or even longer for VMs.

Containers hold only the applications, settings and storage needed for those applications to run. This concept is also sometimes referred to as JeOS, or "Just Enough OS".
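To make the JeOS idea concrete, here is a minimal, hypothetical Dockerfile sketch for packaging a small Python web service; the file and image names are illustrative, not taken from any real project:

```dockerfile
# Start from a slim base image rather than a full OS install
FROM python:3-slim

# Copy in only the application and its settings -- nothing else
WORKDIR /app
COPY app.py requirements.txt ./

# Install just the libraries this one application needs
RUN pip install -r requirements.txt

# The container runs a single process: the application itself
CMD ["python", "app.py"]
```

The resulting image carries no kernel or device drivers of its own; those come from the container host at run time.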

This opens up a number of possibilities, particularly because containers inherit the libraries and patches of their containerization host. It is also desirable from a systems-management perspective: once a containerization host is patched, all of its containers inherit those patches, as they use memory-cloned copies of the same shared libraries.

It also goes without saying that all containers or virtual environments running on a containerization host share the same version of the OS.

Unlike a hypervisor, which runs virtual machines directly, containers need a host operating system running a containerization platform, such as LXC with Docker.

This is why containerization is also referred to as Operating System-level Virtualization. A Linux containerization host runs Linux containers, and a Windows containerization host runs Windows containers.

Because many containers can run within a single instance of an operating system, it is also possible for the container host itself to be a single Virtual Machine.

VMware's own recently-announced containerization strategy is based on lightweight JeOS Linux VMs.

So now, finally, we get to Docker, the containerization technology that is getting all of the attention right now. On Linux, Docker originally used LXC as its containerization engine, though it has since moved to its own libcontainer runtime.

Docker differs from the other containerization technologies described above in that it provides a way to "package" complex applications, upload them to public repositories, and then download them into public or private clouds running Docker hosts (operating systems running the Docker Engine), much in the same way apps are downloaded from an app store to your smartphone or tablet.
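As a sketch of that workflow, assuming a Docker host is already set up and using a hypothetical image name, the package-publish-deploy cycle looks roughly like this:

```shell
# Build an image from a Dockerfile in the current directory
docker build -t mycompany/webapp:1.0 .

# Push ("upload") the image to a repository such as Docker Hub
docker push mycompany/webapp:1.0

# On any other Docker host, pull the image and run it -- the app-store model
docker pull mycompany/webapp:1.0
docker run -d -p 80:8080 mycompany/webapp:1.0
```

Here "mycompany/webapp" is a placeholder; in practice you would authenticate to a registry first with docker login.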

And much like VMs, which can be migrated from one virtualization host to another, a container can be migrated from one containerization host to another easily, and far more quickly.

Docker, via Swarm, also provides native clustering capabilities, so that containerization hosts can be grouped together and managed as a single pool.

Today, Docker is primarily a Linux-based container packaging technology. But that is quickly changing. Microsoft has adopted Docker, and partnered with the company, as its containerization packaging standard for Azure, so that Linux Docker apps can run on its public cloud without any fuss.

Microsoft has also committed to a native version of the Docker Engine for Windows and a native version of the Docker client, and will also be developing a Docker-compatible native container format for Windows, formally known as Windows Server Containers.

Image: Microsoft

Windows Server Containers will be able to run "on the metal" or within Windows Server VMs running on Hyper-V.

While an exact timeline for their appearance in Azure and Windows has not been officially announced, more details are expected at the upcoming BUILD conference.

What does this mean for you, as a CXO? It means that as our reliance on VMs decreases, containers will provide much higher compute densities, resulting in an overall decrease in the cost of cloud computing, particularly at the hyperscale level, much as has happened with the race to the bottom in cloud storage pricing.

So as Docker and its related container technologies continue to mature, you should think about re-architecting or modernizing your LOB apps for containerization, for use in public and private clouds.

Is container technology on your CXO radar? Talk Back and Let Me Know.

