
Docker CTO: Why Microsoft's Docker plans for Windows will matter to you

With Microsoft throwing its weight behind Docker containers, the man who started the project, Docker CTO Solomon Hykes, spells out the message behind the Windows move.
Written by Toby Wolpe, Contributor
Solomon Hykes: A very strong message to the IT community. Image: Docker

Earlier this month Microsoft revealed plans to build support for Docker containers into the next release of Windows Server, due mid-2015. It's a move of real significance, according to Docker CTO Solomon Hykes — even by the standards of the open-source project's eventful first 20 months.

Hykes began Docker as an internal project at PaaS firm dotCloud, before the toolkit for building distributed applications went open source in March 2013.

By automating the creation and deployment of apps in containers — a lighter-weight form of virtualisation — Docker is designed to free developers from software and infrastructure dependencies, cutting costs and creating efficiencies in the process.
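
To make that concrete, here is a minimal, hypothetical sketch of the workflow Hykes describes: dependencies are declared once, baked into an image, and the same image then runs wherever a Docker engine is available. The application, image name and port are illustrative, not drawn from the article.

```sh
# Hypothetical example: app.py, the image name and the port are illustrative.
# Declare the app's environment once, in a Dockerfile.
cat > Dockerfile <<'EOF'
# Base image supplies the OS userland; dependencies are installed once here.
FROM ubuntu:14.04
RUN apt-get update && apt-get install -y python
# Add the application and declare how to start it.
COPY app.py /srv/app.py
CMD ["python", "/srv/app.py"]
EOF

# Build a portable image, then run it; the same image works on a laptop,
# an on-premises server or a cloud VM, with no dependency surprises.
docker build -t myorg/myapp .
docker run -d -p 8080:8080 myorg/myapp   # assumes app.py listens on 8080
```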

Microsoft's statement that Windows will support containers, and its selection of Docker as the primary interface for managing them, marked a distinct shift in the project's already successful trajectory.

"That of course is a very strong message to the IT community — that in the future Docker is not intended to be a sort of a hippy developer tool but a mainstream part of the IT toolbox," Hykes said.

"It's very significant. Of course it was not a product release. It was a statement of commitment by Microsoft. But it was a very strong statement. It's most important for Docker because it confirms what we've been saying, which is that the principle of Docker is to help developers build these applications across any underlying system."

Hykes said Microsoft's statement emphasised that the value of Docker lies in its openness and its ability to create interoperability.

"Its value is in telling a developer, 'Your application may run on a machine at Amazon or a machine in a closet at your employer's, or on your laptop. It may run on Linux, or on Windows. It may run lots of different storage networking infrastructure but Docker will provide you with an abstraction on top of that, so that you can focus on the parts of the application that remain the same," he said.

"We've done that but still under the umbrella of Linux. In reality there are lots of different flavours of Linux and lots of different ways to deploy containers on Linux. We already deal with a lot of interoperability problems and we shield developers from lots of them.

"But if you zoom out, it's still Linux. So now that it's clear that on the horizon there're going to be Windows components and Linux components managed under the joint umbrella of Docker. That makes it very clear that Docker is about more than Linux containers."

Microsoft will have its own container mechanism that will be accessible only to Windows applications, but that limitation only underlines Docker's capability in running distributed applications.

"When a developer builds an application that is made of lots of different services running on lots of different machines, one of those services might be Window-based and another service might be Linux-based. The resulting application as a whole, across all the machines, across all the different services, can be called cross-platform. It has a Windows piece and a Linux piece," Hykes said.

In an enterprise setting, a typical example of that situation might be an existing legacy line-of-business application, perhaps a .NET service-oriented deployment, connected to a SQL Server database containing core business data.

"That application's been around for a long time and used primarily internally — let's say maybe it's the data back end for retail stores. Now there's a new initiative to ship a mobile application with a cool web interface where customers can log in. Now this enterprise is pushing customers to install the app and do cool new things from their home. But they still need to access the same data," Hykes said.

"The architects for that project have this problem that the application from the point of view of the consumer includes this legacy Windows .NET installation but all the tooling under the web mobile site is Linux, Ruby on Rails — it may be completely different. There's no bridge between the two."

The solution for the architects is not to port the legacy part to Linux or develop the web part in Windows.

"The idea is to say let's use this tool for each job but let's then have a management layer and an abstraction layer on top of that to help us manage it as a whole. That's what Docker has the ambition to help with," he said.

"That's why the result of the Microsoft announcement will not allow a container that runs on Linux to run seamlessly on Windows or vice versa. But that's OK because in the context of distributed applications that's not what developers are asking for."

Hykes said Microsoft's plans are important for Docker's future and represent one of the major themes for the project, as it emerges from its early stages and the need for more tools becomes apparent.

"There is definitely a turning point, a very clear one. This is a really young project and the first year has been characterised by incredible adoption — which took us by surprise — and just catching up with the community that simply appeared around the project and started running with it," he said.

"Although [Docker] still needs a lot of work, it's clear that it's already being deployed around the world to run real applications. That has got IT organisations around the world scrambling to get a better hold of it, to understand it better and make sure they can manage it properly.

"At the same time there's been a focus in the Docker community itself — which includes us as a company but also includes engineers from other companies — to start steering Docker towards better manageability from an IT and operations point of view, dealing with things like scale and security and reliability and monitoring."

The approach that Docker has taken to address the tools shortage is not necessarily to build the software but rather to create networking, storage and security interfaces so third parties can produce compatible utilities.

"It's a fundamental design principle in Docker itself. If you organise everything from the start to be simple and extendable, then in the long run you have a solution that's much more scalable because a much larger group of people overall can solve a wider set of problems in parallel, without forcing you to choose one versus the other," Hykes said.

"The biggest challenge is the more sophisticated the use case, the more fragmented the solutions become. When you start assembling tools for your particular problem, deploying lots of containers and machines to fit your particular business requirements, you need to allocate resources in a certain way, or you have a certain kind of networking equipment that you want to accommodate. It becomes very custom. It's hard to find a single one-size-fits-all to fit everybody."

As a result Docker has focused on allowing composition so fundamental tools can be mixed and matched, and when a customer need inevitably arises in, say, custom resource allocation or monitoring, there are interfaces for extensions.

"You can build your own things or find a third party in the ecosystem that has built a component that you like. You can expand your Docker deployment with that third party without scrapping the whole thing and starting over," Hykes said.

Two areas of Docker need urgent attention based on feedback from the community. One is the orchestration of networks and clusters — anything that has to do with multiple machines and multiple containers. The other big topic is security, which includes authentication, access control and identity.

Docker 1.3, released earlier this month, represents a first step towards improved container security through a tech preview feature allowing the signing of container images.

"The problem when you release the first feature, there are always quirks and little things that you have to iron out. But the stakes are really high on this one because if something goes wrong, you can't run your container anymore," Hykes said.

"The idea is to start testing everything else. Does the interface feel intuitive to the users? Are the explanations clear? Is the general flow of using it correct? We wanted to start testing that without flipping the switch quite yet on denying everything completely if something goes wrong.

"So for now, in this first version, some of the images are verified and the verification is just for information. In other words, even if an image is corrupted or it fails verification, you'll get a message telling you. It will not block you from using it. As a result, you cannot rely on it for security since you're still capable of running a container that's not trusted.

"But at least the mechanism for verification, the infrastructure to deal with all this is out there and people are testing it which means at least we're beginning to get a handle on what's working well and what's not and we can fine-tune. So it's really to start turning the wheel of software improvement."

A good deal of work is underway between Docker and the creators of tools in security, networking and monitoring.

"The focus in general is to present Docker as an end-to-end platform — sort of a substrate that can be always present and lets you glue together these different parts of your application deployments," Hykes said.

"From the point of view of an organisation, when you're producing software or producing these applications, it's all like a conveyor belt in a factory. It's really an assembly line, with developers on the left and ops all the way to the right and a lot of steps in between.

"What we trying to do is giving a view of the entire conveyor belt, end to end, from the first to the last step."
