What Docker does right and what it doesn't do right... yet

Summary: Docker founder Solomon Hykes explained at LinuxCon what Docker is, what it does right, and what it still needs to work on.

CHICAGO — Docker founder Solomon Hykes opened his keynote at LinuxCon by saying he knows two things about Docker: "It uses Linux containers and the Internet won't shut up about it." He knows more than that, of course. He told the audience what Docker is, what it does right today, and what it still needs to do better.

Docker founder Solomon Hykes

Docker's container technology has exploded into the hottest trend in years, in both open-source development and business-deployment circles. That's not hype; that's reality. Docker, which is just over a year old, is being used and deployed by service providers, configuration-management programs, big-data analysts, system integrators, and operating systems both within Linux (Red Hat, Canonical, and openSUSE) and outside it (Microsoft).

Hykes doesn't know why it's so popular, but he does have some ideas. "My personal theory," he said, "is that it was in the right place at the right time for a trend that's much bigger than Docker, and that is very important for all of us, that has to do with how applications are built."

By this, Hykes explained, "Users expect online applications to behave like the Internet." That is to say, they are "always on and globally available. For developers, that's a big problem. They must now figure out how to decouple their applications from the underlying hardware and run it on multiple machines anywhere in the world."

If you think you've heard this idea before, you have. Docker is enabling what Nicholas Carr, the famous technology analyst and writer, predicted in his 2008 book, The Big Switch: Rewiring the World, from Edison to Google. Carr said that in the future all end-users will need is a screen and an Internet connection. Devices? End-user operating systems? Local applications? They don't matter. All that counts is a way to interact with the Internet of utility computing services.

That's great for end-users but as Hykes pointed out, "Everyone is looking for a standardized way to build distributed applications in a way that leverages the available system technologies but packages them in a way that's accessible to application developers." That's where Docker steps in.

Docker frees programmers from perpetually finding ways to deliver an always-on application that's globally available from any device. With Docker, developers can build their programs on their own terms, and Docker provides the simple, loosely coupled tools needed to package these omnipresent programs.

Hykes continued: "Docker is sort of a packaging system of its own. It specifies, from source, how to create a tarball with extra metadata, and versioning and a way of transferring a new version with minimal overhead."
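Hykes's description can be made concrete with plain shell tools. The following is only an illustrative sketch of the idea, not Docker's actual image format; the file names and metadata fields are invented. The point is that a package is a tarball, metadata rides alongside the content, and a content hash can serve as the version.

```shell
# Illustrative sketch only -- not Docker's real image format.
# Idea: package = tarball + metadata, version = hash of the content.
mkdir -p app
echo 'print("hello")' > app/main.py                        # the application
printf '{"cmd": ["python", "main.py"]}\n' > app/meta.json  # extra metadata
tar -czf app.tar.gz app                                    # the "image"
sha256sum app.tar.gz | cut -d' ' -f1                       # the "version"
```

Transferring "a new version with minimal overhead" is the part this sketch omits: Docker ships only the layers that changed rather than re-sending the whole archive.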

If that's all it was, no one would care that much. There are lots of vanilla packaging systems out there. The hot fudge on this sundae is that Docker offers a "sandboxed runtime, which is built upon key Linux kernel features, including cgroups and namespaces. This provides more certainty for application developers by providing a set of known abstractions that define how the application will run, no matter what hardware is underneath."
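Those two kernel features are visible on any Linux machine, no Docker required. A quick way to inspect the raw material Docker builds on, using standard /proc entries:

```shell
# Every process already lives inside a set of namespaces; Docker simply
# creates fresh ones per container. Each entry below is one namespace
# (net, pid, mnt, and so on).
ls /proc/self/ns
# cgroups meter and limit resources; this shows which control groups
# the current shell belongs to.
head -3 /proc/self/cgroup
```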

In short, Docker provides a one-stop, easy way to deliver not only programs to be installed, but programs that are ready to run on your servers in their own containers. It makes getting server programs delivered and up and running simple.

These programming features, along with the simple fact that Docker containers let datacenter and cloud managers run many more application instances on a single hardware server than virtualization can, have made Docker a winner. Hykes doesn't think that's enough, though. He said there are many more issues to address.

For starters, Hykes said, "We need better networking between containers. Many applications won't fit into one container so how containers communicate with each other is very important." Linux has many ways to address networking and while an IP address and a port is fine for the basics, "with hundreds of dynamic containers and components, Docker needs service discovery." Hykes added that Docker is looking at building on DNS services as a way to deal with this problem.
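The DNS idea Hykes mentions is simply name-based lookup: a container asks for a peer by a stable service name and receives whatever address currently answers to it, so nothing has to be hard-coded. A minimal illustration with the system resolver, where "localhost" stands in for a service name such as "db" (`getent` is a standard glibc utility):

```shell
# Sketch: resolve a name to an address at run time instead of baking an
# IP:port into the application. On a Docker network the name would be a
# service such as "db", and the answer could change as containers move.
getent hosts localhost
```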

In addition, Hykes said that Docker needs to provide clustering and composition (aka DevOps) because, "When using Docker you hit two problems. One is that you're not running containers on one machine at a time; you want to run on a cluster of machines." You want DevOps practices so that you can organize applications made up of many different components running across multiple containers and servers. This will be "a major focus of the next three or four releases of Docker."

Another problem is making it possible for system administrators to look at a container and decide whether they can trust it. "How can I trace back a container to a specific Git hash, to specific software components?" In short, he said, "We need to be able to sign containers. We're building this right now. This will be in Docker 1.3, the September release," Hykes concluded.
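Signing proper involves keys and trust chains, but the underlying idea, tying an artifact to its exact content, can be sketched with a plain digest. The file names here are invented for illustration:

```shell
# Sketch: a content digest lets anyone check that an artifact is
# bit-for-bit what was published, the foundation image signing builds on.
echo 'application layer contents' > layer.txt
tar -cf image.tar layer.txt
expected=$(sha256sum image.tar | cut -d' ' -f1)   # recorded at publish time
actual=$(sha256sum image.tar | cut -d' ' -f1)     # recomputed before running
[ "$expected" = "$actual" ] && echo "digest matches"
```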

So, while Docker is already wildly popular, Hykes was quick to point out that it still needs major improvements to be all that it can be. Docker's developers are not sitting back and enjoying their success. They're hard at work making it even better.


Talkback

5 comments
  • What Docker does right and what it doesn't do right... yet

    Work in progress.....
    daikon
  • Why the hype

    Why the hype for last decade's technology?
    Buster Friendly
  • Not really useful yet...

    Until containers can be live-migrated from one system to another, they are just a nice-to-have.

    With a VM (I know it is a fat solution for an application), when you have performance issues or need maintenance, you can live-migrate them to another system without downtime.
    pjc158
  • Not a new idea, just rehashed

    Docker is becoming dangerously close to requiring systemd, which is an abomination. All my sysadmin friends are looking to move away from Linux because of systemd. For Docker to be successful, it needs to avoid reliance on systemd.
    ncted
  • Focusing on what matters

    Docker provides an "application container" which means that only one process / service can be run. As a result, it does not provide a virtual-machine like full Linux distro. The best you can do is run multiple, single-service containers strung together with internal network connections... and then expose one or more container ports to be accessible from the host node to reach the outside world. Things like logging have to be done with bind-mounts to the host node and things like container root user are still considered to be a security risk. What Docker needs is to be more of a full container (like OpenVZ for example)... rather than improving the Docker-only mechanisms that string the pieces together. The disk layering and deployment speed are nice, but other than that, not so much.

    With regards to the sub-set of users who think that systemd is an "abomination" and who are "looking to move away from Linux"... please do... good riddance I say. :)
    dowdle