Red Hat, Ubuntu, and Docker: Container virtualization goes mainstream

Container technology, a lightweight kind of virtualization, is becoming a core component in major Linux distributions. But what role will it really play in datacenters and the cloud?
Written by Steven Vaughan-Nichols, Senior Contributing Editor

Red Hat and Ubuntu are Linux rivals, and they disagree on many technical details, but they agree on one thing: Docker, a container technology, is going to be a major virtualization technology in the years to come.

Ubuntu sees container technology and Docker as being as natural and efficient as a honeycomb. Will it be?

Linux, of course, has long had hypervisors such as its built-in KVM (Kernel-based Virtual Machine) and Xen, but containers take a different approach to virtualization. With a traditional hypervisor, the entire computing stack, from the processor to memory to storage, is virtualized. That means any hypervisor's virtual machine (VM) consumes a good deal of system resources.

A container, by contrast, is built on a shared operating-system kernel. Because of this, as James Bottomley, Parallels' CTO of server virtualization and a leading Linux kernel developer, explained at the Linux Collaboration Summit in March 2014, containers are much lighter and more efficient than hypervisors. Instead of virtualizing hardware, containers rest on top of a single Linux instance. This means you can "leave behind the useless 99.9 percent VM junk, leaving you with a small, neat capsule containing your application."

For practical purposes, that means you can put far more applications on a single server than with any hypervisor-based approach. And if you can run more program instances on a server, you can run more of them in your datacenter or on your cloud. The trick, of course, is getting your apps into containers in the first place. That's where Docker comes in.

On Linux, containers run on top of LXC. This is a userspace interface for the Linux kernel containment features. It includes an application programming interface (API) to enable Linux users to create and manage system or application containers. Docker can be thought of as a packaging system for LXC containerized applications. This makes it simple to deploy container applications on operating systems such as Red Hat Enterprise Linux (RHEL) 7.0 and Ubuntu 14.04 server.
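The packaging workflow described above can be sketched with a minimal Dockerfile. This is only an illustration, not anything from Red Hat or Canonical; the application name and file are hypothetical, but the syntax is Docker's standard build format:

```dockerfile
# A minimal Dockerfile: package one application into a container image.
# "ubuntu:14.04" matches the Ubuntu release discussed in the article.
FROM ubuntu:14.04

# Install the application's dependencies inside the image, not on the host.
RUN apt-get update && apt-get install -y python

# Copy the (hypothetical) application into the image.
COPY app.py /opt/app.py

# The container runs just this one process on top of the host's kernel.
CMD ["python", "/opt/app.py"]
```

Building this with `docker build -t myapp .` and launching it with `docker run myapp` starts the application in its own container. Because many such containers share a single kernel rather than each carrying a full virtualized stack, this is where the density advantage over hypervisor VMs comes from.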

Red Hat CTO Brian Stevens explained that Red Hat has jumped into this because, "the Docker technology, which helps eliminate the barriers facing enterprise adoption of containers – ease of use, application packaging and infrastructure integration – was very exciting to us. We believe that integrating Red Hat and Docker technologies offers both powerful developer capabilities and a lightweight application packaging approach for enterprise workloads across industries."

Canonical, Ubuntu's parent company, has jumped in for similar reasons. Mark Shuttleworth, Canonical and Ubuntu's founder, said on Google+ that LXC and Docker are "much faster and lighter than KVM!"

In a blog posting, Dustin Kirkland, Canonical's Cloud Solutions Product Manager, added that, for him, Docker is a "design pattern, [like a honeycomb], occasionally found in nature, when some of the most elegant and impressive solutions often seem so intuitive, in retrospect. For me, Docker is just that sort of game changing, hyper-innovative technology that, at its core, somehow seems straightforward, beautiful, and obvious."

Kirkland continued, "Linux containers, repositories of popular base images, snapshots using modern copy-on-write file-system features. Brilliant, yet so simple. It's Docker.io for the win."

Not everyone is as optimistic about containers and Docker. Rob Hirschfeld, Dell's senior cloud solution architect wrote on his blog, "There are clearly a lot more great use cases for Docker but I can’t help but feel like it’s being thrown into architectural layer 'cakes' and 'markitectures' as a substitute for the non-world's 'cloud,' 'amazing,' and 'revolutionary.'"

Hirschfeld believes that Docker can be potent, even disruptive, in:

  • Creating a portable and consistent environment for dev, test and delivery
  • Helping Linux distros keep updating the kernel without breaking user space (RHEL 7 anyone?)
  • Reducing the virtualization overhead of tenant isolation (containers are lighter)
  • Reducing the virtualization overhead for DevOps developers testing multi-node deployments

"But," he continued, "I’m concerned that we’re expecting too many silver bullets." Specifically:

  • Packaging is still tricky: Creating a locked box helps solve part of downstream problem (you know what you have) but not the upstream problem (you don’t know what you depend on).
  • Container sprawl: Breaking deployments into more functional discrete parts is smart, but that means we have more parts to manage. There’s an inflection point between separation of concerns and sprawl.
  • PaaS [Platform as a Service] adoption: Docker helps with PaaS, but it solves neither the "you have to model your apps for a PaaS" problem nor the "PaaS needs scalable data services" problem.

Will containers with Docker be the next great revolution in virtualization and the cloud? Or will Docker prove to be just another path for datacenter and cloud architects to consider as they strive to get ever more programs running on the same hardware? This is the year we're going to find out. If you work in the datacenter or the cloud, you'll need to start working with containers to see for yourself where they fit into your plans.
