CHICAGO — Docker founder Solomon Hykes opened his keynote at LinuxCon by saying he knows two things about Docker: "It uses Linux containers and the Internet won't shut up about it." He knows more than that, of course. He told the audience what Docker is, what it does well today, and what it still needs to do better.
Docker's container technology has exploded into the hottest trend in years in both open source development and business deployment circles. That's not hype. That's reality. Docker, which is just over a year old, is being used and deployed by service providers, configuration management programs, big data analysts, system integrators, and operating system vendors both within Linux (Red Hat, Canonical, and openSUSE) and outside it (Microsoft).
Hykes doesn't know why it's so popular, but he does have some ideas. "My personal theory," he said, "is that it was in the right place at the right time for a trend that's much bigger than Docker, and that is very important for all of us, that has to do with how applications are built."
As Hykes explained, "Users expect online applications to behave like the Internet." That is to say, they are "always on and globally available. For developers, that's a big problem. They must now figure out how to decouple their applications from the underlying hardware and run it on multiple machines anywhere in the world."
If you think you've heard this idea before, you have. Docker is enabling what Nicholas Carr, the famous technology analyst and writer, predicted in his 2008 book, The Big Switch: Rewiring the World, from Edison to Google. Carr said that in the future all end-users will need is a screen and an Internet connection. Devices? End-user operating systems? Local applications? They don't matter. All that counts is a way to interact with the Internet of utility computing services.
That's great for end-users but as Hykes pointed out, "Everyone is looking for a standardized way to build distributed applications in a way that leverages the available system technologies but packages them in a way that's accessible to application developers." That's where Docker steps in.
Docker frees programmers from perpetually hunting for ways to deliver an always-on application that's globally available from any device. With Docker, developers can build their programs on their own terms, and Docker provides the simple, loosely coupled tools needed to package these omnipresent programs.
Hykes continued: "Docker is sort of a packaging system of its own. It specifies, from source, how to create a tarball with extra metadata, and versioning and a way of transferring a new version with minimal overhead."
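To make that packaging idea concrete, here is a minimal Dockerfile sketch (the image and file names are placeholders, not from Hykes's talk). Each instruction produces a layer of the image, which is essentially the "tarball with extra metadata" he describes, and image tags provide the versioning:

```dockerfile
# Hypothetical build recipe: each instruction becomes a layer in the
# image -- the "tarball with extra metadata" Hykes describes.
FROM ubuntu:14.04                     # base image to build on
RUN apt-get update && \
    apt-get install -y python         # install the app's dependencies
COPY app.py /srv/app.py               # add the application source
CMD ["python", "/srv/app.py"]         # what runs when the container starts
```

Building this with `docker build -t myapp:1.0 .` produces a tagged version, and shipping a new version transfers only the layers that changed, which is the "minimal overhead" Hykes mentions.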
If that's all it was, no one would care that much. There are lots of vanilla packaging systems out there. The hot fudge on this sundae is that Docker offers a "sandboxed runtime, which is built upon key Linux kernel features, including cgroups and namespaces. This provides more certainty for application developers by providing a set of known abstractions that define how the application will run, no matter what hardware is underneath."
In short, Docker provides a one-stop, easy way to deliver not only programs to be installed, but programs that are ready to run on your servers in their own containers. It makes getting server programs delivered and up and running simple.
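A sketch of that delivery workflow, again with placeholder names, shows how little stands between pulling a packaged program and having it serve traffic:

```shell
# Fetch a packaged, versioned application image (placeholder name).
docker pull myorg/webapp:1.0

# Run it in its own sandboxed container, mapping container port 80
# to host port 8080 -- no separate install/configure/start steps.
docker run -d -p 8080:80 myorg/webapp:1.0

# The service is already up; 'docker ps' lists the running container.
docker ps
```

This is an illustrative transcript, not output from the talk; it assumes a running Docker daemon and an image published under that name.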
These programming features, along with the simple fact that Docker containers let datacenter and cloud managers run many more application instances on a single hardware server than virtualization can, have made Docker a winner. Hykes doesn't think that's enough, though. He said there are many more issues to address.
For starters, Hykes said, "We need better networking between containers. Many applications won't fit into one container so how containers communicate with each other is very important." Linux has many ways to address networking, and while an IP address and a port are fine for the basics, "with hundreds of dynamic containers and components, Docker needs service discovery." Hykes added that Docker is looking at building on DNS services as a way to deal with this problem.
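At the time, Docker's built-in answer to this was container links, which let one container reach another by name rather than by a hard-coded IP address and port; the DNS-based service discovery Hykes alludes to came later. A sketch, with placeholder container and image names:

```shell
# Start a database container, then link an app container to it.
docker run -d --name db postgres            # backing service (placeholder)
docker run -d --name web --link db:db myapp # link makes 'db' resolvable

# Inside 'web', the application can now connect to the hostname 'db'
# instead of tracking the database container's IP address and port.
```

Links only work between containers on the same host, which is exactly the limitation that makes cluster-wide service discovery the harder, still-open problem Hykes describes.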
In addition, Hykes said that Docker needs to provide clustering and composition (aka DevOps), because, "When using Docker you hit two problems. One is that you're not running containers on one machine at a time; you want to run on a cluster of machines." You want to use DevOps practices so that you can organize applications made up of lots of different components running across multiple containers and servers. This will be "a major focus of the next three or four releases of Docker."
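Composition tools of the era, such as Fig (which Docker later adopted as Docker Compose), already expressed a multi-container application in one declarative file. A hypothetical fig.yml illustrates the idea:

```yaml
# Hypothetical fig.yml: one application, two linked containers.
web:
  build: .          # build the app image from the local Dockerfile
  ports:
    - "8080:80"     # expose the web service on the host
  links:
    - db            # connect to the database container by name
db:
  image: postgres   # off-the-shelf backing service
```

One command then brings the whole application up together, which is the composition half of the problem; the clustering half, spreading those containers across many machines, is what Hykes says future releases must tackle.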
Another problem is making it so that system administrators can look at a container and decide whether they can trust it. "How can I trace back a container to a specific Git hash, specific software components?" In short, he said, "We need to be able to sign containers. We're building this right now. This will be in Docker 1.3, September release," Hykes concluded.
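Docker's built-in signing mechanism was still being built at the time, but the trust model Hykes describes resembles detached-signing any other artifact. As a conceptual sketch only (not Docker's actual implementation):

```shell
# Export the image as a tarball and sign it with GPG.
docker save myapp:1.0 > myapp-1.0.tar      # image as a plain tarball
gpg --detach-sign myapp-1.0.tar            # produces myapp-1.0.tar.sig

# An administrator verifies the signature before loading and running it.
gpg --verify myapp-1.0.tar.sig myapp-1.0.tar
```

The point of the sketch is the workflow, not the tooling: a verifiable signature is what lets an administrator trace a container back to whoever built it and decide whether to trust it.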
So, while Docker is already wildly popular, Hykes was quick to point out that it still needs major improvements to be all that it can be. Docker's developers are not sitting back and enjoying their success. They're hard at work making it even better.