Docker is hotter than hot because it makes it possible to get far more apps running on the same old servers and it also makes it very easy to package and ship programs. Here's what you need to know about it.
Five years ago, Solomon Hykes helped found a business, Docker, which sought to make containers easy to use. With the release of Docker 1.0 in June 2014, the buzz became a roar. And, over the years, it's only got louder.
All the noise is happening because companies are adopting Docker at a remarkable rate. In July 2014 at OSCon, I ran into numerous businesses that had already moved their server applications from virtual machines (VMs) to containers.
Indeed, James Turnbull, then Docker's VP of services and support, told me at the conference that three of its largest beta bank customers were moving it into production. That's a heck of a confident move for any 1.0 technology, but it's almost unheard of in the safety-first financial world.
Today, Docker, and its open-source parent project, now named Moby, is bigger than ever. According to Docker, over 3.5 million applications have been placed in containers using Docker technology and over 37 billion containerized applications have been downloaded.
It's not just Docker that thinks it's on to something big. 451 Research also sees Docker technology being wildly successful. It predicts "the application container market will explode over the next five years. Annual revenue is expected to increase by 4x, growing from $749 million in 2016 to more than $3.4 billion by 2021, representing a compound annual growth rate (CAGR) of 35 percent."
So why does everyone love containers and Docker? James Bottomley, formerly Parallels' CTO of server virtualization and a leading Linux kernel developer, explained that VM hypervisors, such as Hyper-V, KVM, and Xen, are all "based on emulating virtual hardware. That means they're fat in terms of system requirements."
Containers, however, use shared operating systems. This means they are much more efficient than hypervisors in system resource terms. Instead of virtualizing hardware, containers rest on top of a single Linux instance. This means you can "leave behind the useless 99.9 percent VM junk, leaving you with a small, neat capsule containing your application," said Bottomley.
Therefore, according to Bottomley, with a perfectly tuned container system, you can have as many as four to six times the number of server application instances as you can using Xen or KVM VMs on the same hardware.
Docker enables developers to easily pack, ship, and run any application as a lightweight, portable, self-sufficient container, which can run virtually anywhere. As Bottomley told me, "Containers gives you instant application portability."
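To make "pack, ship, and run" concrete, here is a minimal Dockerfile sketch for a hypothetical Python web app. The file names, base image, and start command are illustrative assumptions, not taken from the article:

```dockerfile
# Start from a small official base image (illustrative choice).
FROM python:3.11-slim

# Copy the application into the image and install its dependencies.
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .

# The container is self-sufficient: this one command runs the app
# the same way on a laptop, a server, or a cloud VM.
CMD ["python", "app.py"]
```

Building the image with `docker build -t myapp .` and running it with `docker run myapp` gives the same result anywhere Docker runs, which is the portability Bottomley is describing.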
Jay Lyman, senior analyst at 451 Research, added: "Enterprise organizations are seeking and sometimes struggling to make applications and workloads more portable and distributed in an effective, standardized, and repeatable way. Just as GitHub stimulated collaboration and innovation by making source code shareable, Docker Hub, Official Repos, and commercial support are helping enterprises answer this challenge by improving the way they package, deploy, and manage applications."
In addition, Docker containers are easy to deploy in a cloud. As Ben Lloyd Pearson wrote in Opensource.com: "Docker has been designed in a way that it can be incorporated into most DevOps applications, including Puppet, Chef, Vagrant, and Ansible, or it can be used on its own to manage development environments."
Specifically, for CI/CD, Docker makes it possible to set up local development environments that are exactly like a live server; run multiple development environments from the same host, each with unique software, operating systems, and configurations; test projects on new or different servers; and let anyone work on the same project with exactly the same settings, regardless of the local host environment. This enables developers to run the test suites, which are vital to CI/CD, and quickly see whether a newly made change works properly.
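One common way to pin down such a reproducible environment is a Compose file. This is a sketch with illustrative service names and versions, not a configuration from the article:

```yaml
# docker-compose.yml -- a sketch of a reproducible dev environment
# (service names, ports, and versions are illustrative).
version: "3.8"
services:
  web:
    build: .            # the app, built from the project's own Dockerfile
    ports:
      - "8000:8000"
  db:
    image: postgres:15  # every developer gets this exact database version
    environment:
      POSTGRES_PASSWORD: example
```

Anyone who clones the project and runs `docker compose up` gets the same stack regardless of their host machine, which is also exactly what a CI server can do before running the test suite.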
The payoff from CI/CD is measurable. According to a 2016 Puppet survey of 4,600 IT professionals, IT departments with a strong DevOps workflow deployed software 200 times more frequently than low-performing IT departments. Moreover, they recovered from failures 24 times faster and had three times lower rates of change failure. These businesses also spent 50 percent less time overall addressing security issues, and 22 percent less time on unplanned work.
Given all this, it comes as no surprise that containers have become the most popular way to deliver applications via CI/CD.
What's not to like? You get a lot more application bang for your server buck and you improve and deploy your software quicker than ever before. So, why hasn't anyone done it before? Well, actually they have. Containers are an old idea.
Indeed, few of you may know it, but most of you have been using containers for years. Google has its own open-source container technology, lmctfy (Let Me Contain That For You). Any time you use some Google functionality -- Search, Gmail, Google Docs, whatever -- you're issued a new container.
Docker, however, was originally built on top of LXC. As with any container technology, as far as the program is concerned, it has its own file system, storage, CPU, RAM, and so on. The key difference between containers and VMs is that while the hypervisor abstracts an entire device, containers just abstract the operating system kernel.
This, in turn, means one thing VM hypervisors can do that containers can't: use different operating systems or kernels. So, for example, you can use Microsoft Azure to run instances of both Windows Server 2012 and SUSE Linux Enterprise Server at the same time. With Docker, all containers must use the same operating system and kernel.
On the other hand, if all you want to do is get the most server application instances running on the least amount of hardware, you couldn't care less about running multiple operating system VMs. If multiple copies of the same application are what you want, then you'll love containers.
This move can save a data center or cloud provider tens of millions of dollars annually in power and hardware costs. It's no wonder they're rushing to adopt Docker as fast as possible.
Docker brings several new things to the table that the earlier technologies didn't. The first is that it made containers easier and safer to deploy and use than previous approaches. In addition, because Docker partnered with the other container powers, including Canonical, Google, Red Hat, and Parallels, on its key open-source component, libcontainer, it brought much-needed standardization to containers.
So, today, Docker doesn't have any rivals per se. True, there are other container implementations, such as rkt (from CoreOS, now Red Hat's) and Canonical's LXC-based LXD, but they aren't so much competitors as they are refinements of the same idea. That said, you can run Docker containers on essentially any operating system or cloud, which gives it an advantage over the others.
At the level above containers, container orchestration, Docker does have a serious competitor: Kubernetes.
Like any other element of your IT infrastructure, containers need to be monitored and controlled. Otherwise, you literally have no idea what's running on your servers.
You can use general DevOps programs to deploy and monitor Docker containers, but they're not optimized for containers. As Datadog, a cloud-monitoring company, points out in its report on real-world Docker adoption, "Containers' short lifetimes and increased density have significant implications for infrastructure monitoring. They represent an order-of-magnitude increase in the number of things that need to be individually monitored."
The answer is cloud orchestration tools. These monitor and manage container clustering and scheduling. In May 2017, there were three major cloud container orchestration programs: Docker Swarm, Kubernetes, and Mesosphere. Today, these are all still around, but Kubernetes is by far the most dominant cloud-orchestration program.
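To show what orchestration looks like in practice, here is a minimal Kubernetes Deployment sketch. The resource names, image, and port are illustrative assumptions:

```yaml
# deployment.yaml -- a minimal Kubernetes Deployment sketch
# (names, image tag, and port are illustrative).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3              # the orchestrator keeps three copies running
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: myapp:1.0   # an ordinary Docker-built container image
        ports:
        - containerPort: 8000
```

Applying this with `kubectl apply -f deployment.yaml` asks Kubernetes to schedule three replicas across the cluster and restart any that die, which is exactly the monitoring and control the preceding paragraphs describe.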
Docker knew this was coming. Hykes said at DockerCon EU in Copenhagen that the company added Kubernetes to its offerings because it gave "our users and customers the ability to make an orchestration choice with the added security, management, and end-to-end Docker experience that they've come to expect from Docker since the very beginning."
Still, while Kubernetes may be the container orchestration winner, the containers themselves remain largely Docker's design and run on containerd. Docker's technology will be with us for years to come.
In a nutshell, here's what Docker can do for you: It can get more applications running on the same hardware than other technologies; it makes it easy for developers to quickly create ready-to-run containerized applications; and it makes managing and deploying applications much easier. Put it all together, and it's easy to see why Docker rode the hype cycle faster than any enterprise technology I can recall.
Moreover, for once the reality is living up to the hype. Frankly, I can't think of a single company of any size that's not at least looking into moving its server applications to containers in general and Docker in particular.