
Docker CEO: Why it just got easier to run multi-container apps

The launch of new orchestration tools will simplify things for anyone trying to create and manage distributed apps made of multiple containers, according to Docker CEO Ben Golub.
Written by Toby Wolpe, Contributor
Ben Golub: multiple Docker components on multiple servers. Image: Docker

Container company Docker has unveiled the services it says represent the most important developments for the open-source technology over the next 12 months.

At the DockerCon EU conference in Amsterdam, the firm demonstrated a new set of Docker orchestration tools and APIs, along with the on-premise version of its proprietary Docker Hub Enterprise management suite.

"They're hugely significant because ultimately the future of applications really is distributed applications, where there are multiple Dockerised components running across multiple servers," Docker CEO Ben Golub said.

"Docker orchestration is the most significant thing that's happening over the next year on the open-source project and Docker Hub Enterprise is the most significant that's happening for us as a company in terms of products that we'll be generating revenue from."

By automating the creation and deployment of apps in containers -- a lighter-weight form of virtualisation -- Docker is designed to free developers from software and infrastructure dependencies, cutting costs and creating efficiencies in the process.

Docker orchestration consists of three sets of open-source APIs: Docker Machine, Docker Compose and Docker Swarm. Each addresses a different aspect of creating and managing multi-container distributed applications.

"Docker Machine lets you take any host -- really any server whether it's a laptop, a server or VM in your datacentre, or a remote cloud instance on Amazon or Azure or any of a dozen other places -- and make them able to run any Docker application," Golub said.

"So this is setting them up to run Docker and then giving you the ability to manage all those containers on all those hosts with the same UI, whether they're local or remote."
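In practice, that workflow amounts to provisioning a host and then pointing the standard Docker client at it. The commands below are a hypothetical sketch of the alpha Docker Machine CLI described above; driver names and flags were still in flux and may differ between releases:

```shell
# Hypothetical sketch of the alpha Docker Machine CLI; exact flags may vary by release.

# Provision a local VirtualBox VM and install the Docker daemon on it:
docker-machine create -d virtualbox dev

# Provision a remote cloud instance the same way (driver name is illustrative):
docker-machine create -d azure staging

# Point the local Docker client at the new host, then manage its containers
# with the same UI as any other host, local or remote:
export DOCKER_HOST=$(docker-machine url dev)
docker run hello-world
```

The point of the design is that only the `create` step knows anything about the underlying infrastructure; everything after it is ordinary Docker.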

Docker Swarm is a clustering service that allows users to treat large numbers of servers as a single machine, creating a resource pool for distributed applications.

Because the Swarm API supports pluggable clustering implementations, users can opt to employ highly scalable products such as Mesosphere to orchestrate containers across large numbers of nodes.
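As a sketch of how that pooling works, the commands below follow the alpha Swarm workflow announced at the conference; the CLI was still changing, so the exact invocations are assumptions:

```shell
# Hypothetical sketch of the alpha Docker Swarm workflow; commands may differ by release.

# Generate a cluster token and start a Swarm manager:
TOKEN=$(docker run --rm swarm create)
docker run -d -p 2376:2375 swarm manage token://$TOKEN

# On each host to be added to the pool, join the cluster:
docker run -d swarm join --addr=<host-ip>:2375 token://$TOKEN

# The whole pool is now addressable as if it were a single Docker host:
docker -H tcp://<manager-ip>:2376 info
docker -H tcp://<manager-ip>:2376 run -d nginx
```

The manager decides which node in the pool actually runs each container, which is what lets large numbers of servers be treated as a single machine.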

The third element of the orchestration services, Docker Compose, is designed to make it easy to compose a complex distributed app from a number of containers -- such as front-end, database and load-balancing components. The resulting apps are entirely portable, the company said.
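A minimal Compose file for such an app might look like the sketch below. The format shown follows Fig, the project Compose grew out of, and the service names and images are illustrative assumptions rather than anything Docker shipped:

```yaml
# Illustrative docker-compose.yml sketch (alpha-era format; names and images are assumptions).
web:                 # front-end component
  build: .
  ports:
    - "80:5000"
  links:
    - db             # wired to the database container below
db:                  # database component
  image: postgres
lb:                  # load-balancing component
  image: haproxy
  links:
    - web
```

Because the file describes only the containers and the links between them, not the hosts they run on, the resulting app can be moved between environments unchanged.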

Docker Hub Enterprise is the on-premise version of the commercial cloud command centre launched in June. It provides tools for creating libraries of Dockerised components, for workflow, and for integration with other tools that an enterprise may already be using for, say, monitoring.

"Both sets of announcements really relate to the notion of multi-container applications. When Docker started, people were very excited about it because it gave you a way to put any Linux application or any Linux component inside a lightweight container that could then run anywhere," Golub said.

"Things have evolved so that now what people are looking to do is stitch together multiple components, each of which is in its own container, and run them across large numbers of different servers. So we've gone from one or a few containers on one or a few servers to multiple containers across large numbers of servers. Both these products are designed to address the challenges that you face when trying to make that work."

While all the Docker orchestration services are in alpha, APIs for Docker Machine are available now. All three are due to be generally available in the second quarter of 2015. Docker Hub Enterprise will be available for early access in February.

Golub said work is underway on resource scheduling for Docker, which will bring more automation to the management of containers.

"Right now you have the ability to set limits on containers but then if you want to have the capability to automatically scale or migrate if you're reaching some predetermined limit, that's functionality that's coming but not part of these announcements," he said.
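The per-container limits Golub refers to could already be set at run time. A hedged sketch, with illustrative values:

```shell
# Setting resource limits on a single container (values are illustrative):
docker run -d -m 512m -c 512 my-app
#   -m 512m  caps the container's memory at 512 MB
#   -c 512   sets its relative CPU share
# Acting automatically when a limit is approached -- scaling out or
# migrating -- is the functionality Golub says is still to come.
```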

"All of this stuff is a matter of months or quarters [away]. This is happening incredibly quickly. The nice thing is there are things that Docker Inc or the Docker project are doing directly and then there are 18,000 projects that are all related to Docker that are trying to fill in the gaps."

The scale of the community's participation is what has helped propel Docker forward so impressively in such a short period, with some 65,000 languages, frameworks and applications already Dockerised in the public repository.

"You do have to accept a certain degree of chaos, I guess. But this is where fully embracing being open and open source helps because the community helps us create APIs, the community helps understand which of these projects are good or bad and then ultimately there's some sort of Darwinian process that helps determine which of these projects are worthy and which aren't," Golub said.

The recent announcement that Microsoft is building support for Docker containers into the next release of Windows Server, due in 2015, will also hugely step up interest in the technology.

"At least half of all enterprise workloads are Windows-based and with the work that we're doing with Microsoft, we're bringing Docker to Windows. To some extent that doubles the universe of people who could take advantage of Docker and it also increases our appeal to the enterprise," Golub said.

"It has also paved the way for us to extend Docker to other architectures. We were a 64-bit Linux architecture in June of this year; now we're an architecture that can work with Windows; we'll be able to work with ARM chips and there were announcements around Solaris and SmartOS as well. So we're really trying to become a universal tool."

The bottom layer of the effort to add that Docker support is happening inside the Windows kernel, where Microsoft is carrying out low-level work on container primitives. The middle layer, the Docker daemon, is part of the Docker project, with contributions from Microsoft, Docker staff and the broader community.

"Then the top level is standard Docker. If we do this right, those bottom two layers will be a technical detail for us and Microsoft and those people in the community who care but everything else will be transparent to end users," Golub said.

"[Availability] will certainly be gated by the release of Windows Server 10 and what I hear from Microsoft is towards the end of 2015 but I can't speak for them. But certainly it's a large undertaking of which Docker is one piece."

Along with the Microsoft relationship, Docker has developed significant alliances, including those with IBM, Red Hat, VMware, Google and Amazon, and others are in the pipeline.

That combination of commercial and community forces has helped make Docker such a powerful phenomenon, according to Golub.

"What Docker did in essence was democratise containers -- make them standard, make them portable and create an ecosystem around them. I hesitate to say that containers existed [before Docker] because there was low-level container technology but it was only usable by sophisticated teams at the largest web companies and it wasn't portable," he said.

"A lot of the solutions that existed before Docker were built back when applications lived a long time and were monolithic and ran on a single server -- and actually all three of those things have changed."

But it is the community behind the technology that has enabled it to be so successful, and that community remains core to Docker's future. Golub cites the Docker meet-up groups in more than 40 cities across Europe as evidence of that involvement.

"It's very clear that by opening this up and trying to create an ecosystem rather than a single technology or a single company, we were able to get much faster growth," he said.

"Clearly, the project -- and the technology as a whole -- is moving from something that is used primarily at web companies to one that is being used by banks and pharma and manufacturing and governments. So there's a big premium on stability and security and enterprise-grade tools. That's a priority for us to deliver."
