What is Docker and why is it so darn popular?

Summary: Docker, a new container technology, is hotter than hot because it makes it possible to get far more apps running on the same old servers, and it also makes it very easy to package and ship programs. Here's what you need to know about it.

If you're in data center or cloud IT circles, you've been hearing about containers in general and Docker in particular non-stop for over a year now. With the release of Docker 1.0 in June, the buzz became a roar.

All the noise is happening because companies are adopting Docker at a remarkable rate. At OSCon in July, I ran into numerous businesses that were already moving their server applications from virtual machines (VMs) to containers. Indeed, James Turnbull, Docker's VP of services and support, told me at the conference that three of the largest banks that had been using Docker in beta were moving it into production. That's a heck of a confident move for any 1.0 technology, and it's almost unheard of in the safety-first financial world.

At the same time, Docker, an open-source technology, isn't just the darling of Linux powers such as Red Hat and Canonical. Proprietary software companies such as Microsoft have also embraced Docker.

So why does everyone love containers and Docker? James Bottomley, Parallels' CTO of server virtualization and a leading Linux kernel developer, explained to me that VM hypervisors, such as Hyper-V, KVM, and Xen, are all "based on emulating virtual hardware. That means they're fat in terms of system requirements."

Containers, however, use shared operating systems. That means they are much more efficient than hypervisors in system resource terms. Instead of virtualizing hardware, containers rest on top of a single Linux instance. This in turn means you can "leave behind the useless 99.9% VM junk, leaving you with a small, neat capsule containing your application," said Bottomley.
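
You can see this for yourself on any Linux machine with Docker installed. Here's a minimal sketch, assuming the stock "ubuntu" image has been pulled; the container name is purely illustrative:

    # Start a container that does nothing but sleep for five minutes
    $ docker run -d --name sharing-demo ubuntu sleep 300
    # On the host, that "contained" workload shows up as an ordinary process:
    $ ps aux | grep "sleep 300"
    # No guest kernel, no emulated hardware -- just an isolated process
    # running on the host's single Linux instance.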

Therefore, according to Bottomley, with a perfectly tuned container system, you can have as many as four to six times the number of server application instances as you can using Xen or KVM VMs on the same hardware.

Sounds great, right? You get a lot more application bang for your server buck. So, why hasn't anyone done it before? Well, actually, they have. Containers are an old idea.

Containers date back to at least the year 2000 and FreeBSD Jails. Oracle Solaris also has a similar concept called Zones, while companies such as Parallels, Google, and Docker have been working on such open-source projects as OpenVZ and LXC (Linux Containers) to make containers work well and securely.

Indeed, though few people know it, most of you have been using containers for years. Google has its own open-source container technology, lmctfy (Let Me Contain That For You). Anytime you use some of Google's functionality — Search, Gmail, Google Docs, whatever — you're issued a new container.

Docker, however, is built on top of LXC. As with any container technology, as far as the program is concerned, it has its own file system, storage, CPU, RAM, and so on. The key difference between containers and VMs is that while the hypervisor abstracts an entire device, containers just abstract the operating system kernel.
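
Those per-container views of CPU and RAM map onto kernel resource controls (cgroups) that Docker exposes as ordinary command-line flags. A minimal sketch; the image name and the limits here are illustrative, not a recommendation:

    # Cap this container at 512 MB of RAM and half the default CPU share
    $ docker run -d -m 512m --cpu-shares=512 my-web-app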

This, in turn, means that one thing hypervisors can do that containers can't is use different operating systems or kernels. So, for example, you can use Microsoft Azure to run instances of both Windows Server 2012 and SUSE Linux Enterprise Server at the same time. With Docker, all containers must use the same operating system and kernel.
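
The restriction is easy to demonstrate. Containers can ship different distributions' userlands, but every container reports the host's kernel; the sketch below assumes the official ubuntu and centos images:

    # Two different distribution userlands on one host...
    $ docker run ubuntu cat /etc/os-release    # reports Ubuntu
    $ docker run centos cat /etc/os-release    # reports CentOS
    # ...but one shared kernel: both commands print the host's kernel version
    $ docker run ubuntu uname -r
    $ docker run centos uname -r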

On the other hand, if all you want to do is get the most server application instances running on the least amount of hardware, you couldn't care less about running VMs with multiple operating systems. If multiple copies of the same application are what you want, then you'll love containers.

This move can save a data center or cloud provider tens of millions of dollars annually in power and hardware costs. It's no wonder that they're rushing to adopt Docker as fast as possible.

Docker brings several new things to the table that the earlier technologies didn't. The first is that it has made containers easier and safer to deploy and use than previous approaches. In addition, because Docker is partnering with the other container powers, including Canonical, Google, Red Hat, and Parallels, on its key open-source component, libcontainer, it has brought much-needed standardization to containers.

At the same time, developers can use Docker to pack, ship, and run any application as a lightweight, portable, self-sufficient LXC container that can run virtually anywhere. As Bottomley told me, "Containers gives you instant application portability."
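
In practice, "pack, ship, and run" comes down to a short Dockerfile and three commands. A minimal sketch in which the application, image name, and registry account are all hypothetical:

    # Dockerfile: package the app together with the userland it needs
    FROM ubuntu:14.04
    COPY myapp /usr/local/bin/myapp
    CMD ["/usr/local/bin/myapp"]

    $ docker build -t example/myapp .    # pack
    $ docker push example/myapp          # ship, to a registry such as Docker Hub
    $ docker run -d example/myapp        # run, on any host with a Docker daemon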

Jay Lyman, senior analyst at 451 Research, added, "Enterprise organizations are seeking and sometimes struggling to make applications and workloads more portable and distributed in an effective, standardized and repeatable way. Just as GitHub stimulated collaboration and innovation by making source code shareable, Docker Hub, Official Repos and commercial support are helping enterprises answer this challenge by improving the way they package, deploy and manage applications."
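
The GitHub analogy holds at the command line, too: pulling a curated Official Repo from Docker Hub looks a lot like cloning a public project. A quick sketch:

    # Find an image on Docker Hub, pull it, and drop into a shell inside it
    $ docker search ubuntu
    $ docker pull ubuntu:14.04
    $ docker run -it ubuntu:14.04 /bin/bash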

Last, but by no means least, Docker containers are easy to deploy in a cloud. As Ben Lloyd Pearson wrote on opensource.com, "Docker has been designed in a way that it can be incorporated into most DevOps applications, including Puppet, Chef, Vagrant, and Ansible, or it can be used on its own to manage development environments. The primary selling point is that it simplifies many of the tasks typically done by these other applications. Specifically, Docker makes it possible to set up local development environments that are exactly like a live server, run multiple development environments from the same host that each have unique software, operating systems, and configurations, test projects on new or different servers, and allow anyone to work on the same project with the exact same settings, regardless of the local host environment."
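
That "exactly like a live server" claim usually boils down to mounting your working directory into a container built from the same image production uses. A sketch, with a hypothetical project directory and a stand-in base image:

    # Work on the current project inside the production userland,
    # with the source tree mounted at /src:
    $ docker run -it -v "$(pwd)":/src -w /src ubuntu:14.04 /bin/bash

Every developer who runs this gets an identical environment, regardless of what their own desktop looks like.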

In a nutshell, here's what Docker can do for you: It can get more applications running on the same hardware than other technologies; it makes it easy for developers to quickly create ready-to-run containerized applications; and it makes managing and deploying applications much easier. Put it all together and I can see why Docker is riding the hype cycle faster than any enterprise technology I can recall seeing. I just hope that it can live up to its promise, or there will be some really upset CEOs and CIOs out there.

Talkback

  • Docker is cool but...

    Docker is a cool product and is more flexible than Solaris Zones, but it will only become really usable in a production environment when it can support live migration.
    pjc158
    • Yea, that's huge

      Yea, being able to migrate hardware out of the cluster with no downtime is pretty huge. Containers also have issues with patching. You can't independently update each one and see how everything works and maybe snapshot it back if it doesn't.
      Buster Friendly
      • container patching

        Using Google's Kubernetes or Mesos, you can manage containers in a cluster. You can do rolling updates to containers in a Kubernetes cluster.
        bibinwilson
        • You have to update whole systems

          It's not a management software issue. You have to update whole systems and all the containers on that system. Everything on a system runs on a common kernel, unlike a hypervisor, where each one is independent.
          Buster Friendly
  • Fake hype

    Google Trends says "nope." Container-based systems are what we used to use before systems like VMware came around, especially Solaris Zones. The problem is that the lack of isolation makes the decrease in server cost not worth it.
    Buster Friendly
    • Old news

      The LXC technology, which Docker is based on, relies on SELinux and cgroups to keep everyone in their own sandbox, thanks. You can throttle a container based on CPU, memory, or disk; basically there's as much control as you'd have with a full-out VM technology like KVM, VMware or even Azure. You think there'd be this interest from the enterprise market if those controls weren't there? I expect the problem you're having is that the technology is based around Linux, given the MS-friendly comments you regularly make elsewhere....
      rkhalloran
      • Fail

        An argument involving a personal attack is an automatic fail.
        Buster Friendly
        • Logic

          Wrong. An argument BASED on personal attacks is an automatic fail. Including one in an argument has no effect on that argument whatsoever.
          Beyond that, while ad hominem is a logical fallacy in binary logic, it is not necessarily so in a gradient logic system.
          .DeusExMachina.
  • MVS has been doing this for 40 years

    The principal IBM mainframe operating system, named MVS in the mid-70s and now called z/OS, has been doing something pretty close to this since it was born. Every process gets its own address space and thus cannot see the others running in parallel with it.

    Nice to see that everything old is new again!
    gravitron
    • A lot longer

      The OS came out in the mid-to-late 60s. Back then it was known as MFT/MVT! MFT and MVT became VS1 and VS2 respectively with the advent of the System/370, and they were merged into MVS - there was no significant difference between MFT/VS1 and MVT/VS2 other than the number of tasks supported.

      In 1965, IBM introduced the 360/67, a time-sharing machine with advanced H/W. Its operating system was called TSS/360 and was an offshoot from the Multics project (which also had a Unix connection). TSS/360 evolved into CP/CMS and then VM/CMS and z/VM. IMO, the "container" concept is much more closely related to CP/CMS than to MVS. CP was the OS or Control Program and was isolated from CMS. You could have multiple CMSes running on top of CP. Each CMS was like a user's own isolated computer - it had its own HDs, terminal(s) and other resources like shared R/O libraries, compilers, etc. In the early days, you could also run the other IBM OSes on top of CP - or any other OS that would run on the 360/370 architecture. Each guest (CMS - DOS/360 - OS/360 - etc.) could communicate with the other guests via an interprocess communication mechanism (analogous to TCP/IP) - which allowed for client/server applications (e.g., SQL/DS, DB/2, etc.).
      bobc4012@...
  • Docker is a step backward.

    True VMs not only isolate the processes more completely, but they allow you to run multiple completely different OSes. The complete isolation of the processes allows the host OS to shut down crashed VMs without affecting the rest of the VMs currently running. With all of the processes sharing one OS in Docker, you run the risk of one process corrupting the shared OS and bringing all of the processes down at once.

    Old school OSes used a similar approach to Docker because machine resources were very limited back then, so the primary concern was the efficient use of limited resources. Once computing resources expanded to the point where efficient use was less important than durability and security, we saw the rise of VM dominance. So, the case where Docker becomes useful is when you have really old hardware with very limited resources and a bunch of OS native applications to run.

    It's like comparing the original Windows which ran on top of DOS to the modern Windows hosting multiple OSes. They may have some small superficial resemblance to a user but under the hood, they're nothing alike. I don't see anyone with any sense choosing Docker over real VMs.
    BillDem
    • The advantage is speed...

      There is less overhead involved by avoiding a full VM.

      Whether you like it or not, a lot of the old mainframe technology and techniques have been merged into Linux.

      Docker is just another technique for performing the equivalent of a VM, but with the restriction that the OS must be the same.

      Personally, I like VMs - they are an excellent place to test really unusual things, like an alien/experimental OS. Any file damage is contained within the VM.

      Linux-based systems already have good throughput, with good isolation between processes. VMs provide good isolation between operating systems; Docker cannot.

      For production use, Docker is a big win.
      jessepollard
      • Not enough of an advantage

        It's also been in Solaris since 2005 and has got a lot of use. We've just found that a VMware environment is so much more flexible that it's worth the extra hardware resources to run independent kernels on top of a hypervisor.
        Buster Friendly
        • Depends on what you are doing.

          Running multiple versions of an OS or a different OS, you bet. VMs beat containers.

          But running the same OS just to run a single application/service... that is a waste, requiring multiple support services, multiple configurations, and a lot more network overhead ...

          Just because Windows can't handle multiple services at the same time is no reason to do the same crap.
          jessepollard
          • What application?

            What application would you do that for? If it's a pure number cruncher, isolation isn't necessary at all.
            Buster Friendly
          • Hosted application servers.

            Where you want to provide hosted LibreOffice, hosted Git servers, hosted database servers, or hosted desktop environment instances for commercial subscribers, obviously you cannot have them sharing the same environment.
            Mah
          • Sure it can

            "Just because Windows can't handle multiple services at the same time..."
            ye
          • Yep

            We run multiple mail servers and PostgreSQL servers on the same installation of Windows without any problems at all. If you encounter limitations with this, it's all down to the software itself, not the OS.
            pscs
  • Is there user mapping going on?

    Is "root" in one container the same root user as in all the other containers? I.e., is uid 0 mapped to some other user, or is it the same user seen from the kernel's perspective and merely constrained by the Docker engine?
    honeymonster
    • I believe it's all part of the host OS.

      That doesn't change and is shared by all the containers, thus root would be the same user.

      Docker is not a VM. The container only contains the applications.
      jessepollard