Docker has shot to prominence as a developer tool but version 1.6, available today, shows further evidence of efforts to make life easier for the ops teams that put containers into production.
The latest iteration of the open-source platform that first appeared in early 2013 also offers a rewritten, backwards-compatible registry, an improved engine, and new features for the orchestration technology launched in December at the DockerCon EU conference in Amsterdam.
"Really the reason for Docker's growth over the past two years was that it was super easy for developers to be effective with it. But now that developers have created all these containers, sysadmins are asking, 'OK, how can I manage all these things that are coming at me?'," Docker product SVP Scott Johnston said.
"Over the past year we've been building up features for sysadmins, and this release is no exception."
Docker automates the creation and deployment of apps in containers - a lighter-weight form of virtualisation. The idea is to free developers from software and infrastructure dependencies, cutting costs and creating efficiencies in the process.
As part of the new Docker Engine 1.6 that comes with the latest release, there is a logging-driver API with JSON-file and syslog support, one of the features designed to help operations teams.
"This might sound simple but for a sysadmin who's trying to manage hundreds of nodes and maybe 10 to 100 containers on each node, really understanding the health of those nodes, the health of the containers on the nodes, and what kind of applications are running where is important," Johnston said.
"Logging is just a familiar tool for sysadmins in any environment. Docker had a - I'll call it a 1.0 - version of logs early on but I think that's probably being generous."
Docker 1.6 now has a more sophisticated interface that allows sysadmins to plug in tools that they are likely to have available in their datacentres.
"For example, it ships with a syslog driver but an open interface. So we anticipate the community jumping in and seeing drivers from Loggly and Logstash and Splunk and a whole bunch of others. Those haven't shipped yet but the intention is to open that interface up and allow a thousand flowers to bloom and all these other logging vendors to provide their plugins for that," he said.
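In practice, the new interface surfaces as a `--log-driver` flag on `docker run`. A minimal sketch, assuming a host with a running syslog daemon and Docker 1.6 installed (the image and command are illustrative):

```shell
# Send this container's stdout/stderr to the host's syslog daemon
# instead of Docker's default JSON-file log (--log-driver is new in 1.6).
docker run --log-driver=syslog busybox echo "hello from a container"

# Keep the default JSON-file driver and read the log back with `docker logs`:
docker run --name hello --log-driver=json-file busybox echo "hi"
docker logs hello
```

Note that `docker logs` only works against the JSON-file driver; containers routed to syslog are read through whatever tooling already consumes the host's logs.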
In the newly-rewritten Registry 2.0, operations teams will also find the Image ID labelling feature, which is designed to enable sysadmins to understand and query what is running where in their environment through the Docker interface.
According to Johnston, a specific use case might be an OpenSSL vulnerability. The developer could have a specific ID associated with OpenSSL, which would allow the sysadmin then to query the entire cluster, identify all the matches and update only those images with the potential vulnerability.
"It allows for very specific, very tactical change management, which obviously, if you don't have to rebuild the entire cluster, is just much less risk for your operations team," he said.
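The labelling workflow described above can be sketched with the `LABEL` Dockerfile instruction and label filtering, both new in 1.6. The label key and version value here are illustrative, not a Docker convention:

```shell
# Build an image that records which OpenSSL version it carries.
cat > Dockerfile <<'EOF'
FROM debian:wheezy
LABEL com.example.openssl-version="1.0.1e"
EOF
docker build -t labelled-app .

# Later, when a vulnerability lands, find every image built
# with that OpenSSL version rather than rebuilding the whole cluster:
docker images --filter "label=com.example.openssl-version=1.0.1e"
```

The same `--filter "label=…"` syntax works with `docker ps` to locate running containers, which is the cluster-wide query Johnston describes.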
Registry 2.0 represents a major overhaul of the system used to build and distribute images that run in containers but still maintains backwards-compatibility with the first version, according to Docker.
It also offers better performance through improvements in the way uploads and downloads of images - Docker pushes and pulls - are handled.
"The 1.0 version pulled all the layers of that image sequentially. That was good as a starting point and certainly fine for the early days. But what we're finding is that as developers and sysadmins build up stacks, with layers upon layers inside their images, these images have grown large. So a sequentially-downloaded pull has some performance challenges," Johnston said.
"So we've rearchitected the registry to allow parallel pulls of all the different layers that constitute an image. Now in 2.0, when you do Docker pull, under the covers the new protocol will simultaneously start pulling in parallel all the various layers that constitute an image and then reassemble them back on the developer engine host.
"We're going to see significant performance improvements in terms of just being able to grab these images and stand them up and start running them."
Engine 1.6, which also features a Windows client, is now generally available. The Machine, Swarm and Compose orchestration tools remain in beta but all work with one another.
The Compose tool is aimed at simplifying the process of building a complex distributed app from a number of containers. Its 1.2.0 iteration allows a multi-container application to be described across several sub-files, rather than in a single flat .yml file.
"As we scale up in terms of team members as well as the number of containers that constitute the distributed app, having all that in one file can be complex," Johnston said.
"Sub-files allow you to decompose the application into multiple services. What's really neat from a change-management standpoint is any change in the upstream files is automatically detected by the files downstream, so your application is always up to date."
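The sub-file mechanism Johnston describes appears in Compose 1.2 as the `extends` keyword. A minimal sketch, with file names, service names and ports that are illustrative:

```yaml
# common.yml - shared, upstream definition of a service
webapp:
  image: example/webapp
  environment:
    - DEBUG=false
```

```yaml
# docker-compose.yml - builds on the upstream file; edits to
# common.yml are picked up automatically the next time you run
# `docker-compose up`, keeping the application up to date.
web:
  extends:
    file: common.yml
    service: webapp
  ports:
    - "8000:8000"
```

Splitting a large application this way lets each team own its own service file while the top-level file composes them.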