How Docker brought containers mainstream
UPDATE: After publication, the folks from Docker got back to me with some technical corrections and clarifications. You can spot those items below because they're introduced with "Docker tells me..." or "Docker told me..."
Here's the TL;DR version: protect your Docker instances or baaaad things could happen. Very, very bad things.
Let's back up a bit. For those of you not familiar with Docker, here's what you need to know. Docker is like virtual machines in that you can spin up a whole bunch of environments on top of physical hardware. Docker, however, can be more efficient than virtual machines, because each new running environment doesn't require the simulation of an entire machine.
Hang onto that thought and stick with me. For a good overview of Docker, check out this ZDNet article by my buddy Steven J. Vaughan-Nichols.
Docker runs its engine on top of a host operating system. In this regard, it's something like a VM's hypervisor, though the engine shares the host's kernel rather than simulating hardware. With a virtual machine architecture, each new virtual machine is a full machine image, just in software. All the code, for everything you're running, is in each VM. So, for example, if you're running four Linux server VMs, you're going to have four complete copies of Linux, each consuming server resources.
Docker uses the engine (think hypervisor, even though it's not, really) and containers. The difference with containers is that a container is light. It contains only what's unique to that project; everything else, including the kernel, is shared with the host. So, for example, if you're running four Apache instances as containers in Docker, you're using a lot less RAM, because you're not duplicating the full Linux stack four times.
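To make that concrete, here's a minimal sketch of what "four Apache instances as containers" looks like in practice. The container names and host ports are my own illustrative choices, not anything from Docker or the Cisco report:

```shell
# Hypothetical sketch: four Apache (httpd) containers on one host.
# Each container adds only the httpd image's layers on top of a shared
# kernel -- there is no second, third, or fourth copy of Linux running.
for i in 1 2 3 4; do
  docker run -d --name "web$i" -p "808$i:80" httpd:2.4
done

# Show per-container memory use; each idle httpd container typically
# needs far less RAM than a full Linux VM would.
docker stats --no-stream
```

The image layers for `httpd:2.4` are also downloaded once and shared across all four containers, which is part of why container density is so much higher than VM density.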
Here's the bottom line you need to remember for today's scary story: you can run a lot more containers in Docker per server than you can run VMs. How many? Well, I set out to do a Web search to find that out, and I found another article by Steven with the answer. You can run four to six Docker containers for every traditional VM instance on a given server.
So let's say you've got a server that can run eight VMs. That same server can run 48 Docker containers.
Now, here's where things get serious. Cisco's 2017 Midyear Cybersecurity Report provides an in-depth look at the state of cybersecurity. As you might expect, there's nothing pretty in that document. Along with ongoing trends, Cisco's team looks at emerging problems. Buried on page 53 is a mention of something a Cisco partner, Rapid7, noticed about Docker.
Before I rattle your world any further, it's important to understand that Docker is a big DevOps tool. Because Docker allows for lightweight deployments, and those deployments can be functionally identical from server to server, developers often spin up Docker containers as part of their develop/test/deploy work cycle.
With that, here's what Rapid7 noticed.
They found that more than 1,000 instances of Docker were "wide open" on the internet. Docker tells me many of the Docker instances were likely abandoned or forgotten test systems. Most were in the US, with China, France, Germany, and the Netherlands rounding out the top five countries hosting these open instances.
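"Wide open," in this context, means the Docker daemon's REST API is reachable over the network with no authentication at all. Here's a hedged sketch of what that looks like; the host address is a documentation placeholder, and you should only ever probe systems you're authorized to test:

```shell
# Hypothetical check for an exposed Docker daemon. An unprotected
# daemon conventionally listens on TCP port 2375 with no auth.
# 203.0.113.10 is a reserved documentation address, not a real target.
HOST=203.0.113.10

# An open instance answers with JSON version info...
curl -s "http://$HOST:2375/version"

# ...and will happily enumerate every container on the box.
curl -s "http://$HOST:2375/containers/json?all=1"
```

Anyone who can issue those requests can also create and start containers, which is exactly the exposure Rapid7 flagged.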
Docker told me that "It's important to note that by default, Docker ships with the API exposed only to the localhost. Docker also explicitly states that operators shouldn't expose their Docker API to the outside and if they do, they should configure MTLS."
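For anyone who genuinely must expose the API beyond localhost, the mTLS setup Docker refers to looks roughly like this. This is a sketch, not a complete hardening guide; the certificate paths are my own placeholders, and it assumes you've already generated a CA plus server certificate and key:

```shell
# Sketch: start the Docker daemon with mutual TLS, per Docker's guidance.
# Clients must present a certificate signed by the same CA, or the
# daemon refuses the connection. 2376 is the conventional TLS port.
dockerd \
  --tlsverify \
  --tlscacert=/etc/docker/ca.pem \
  --tlscert=/etc/docker/server-cert.pem \
  --tlskey=/etc/docker/server-key.pem \
  -H tcp://0.0.0.0:2376 \
  -H unix:///var/run/docker.sock
```

The default, as Docker notes, is stricter still: the API is bound only to the local Unix socket, which is why these open instances had to be deliberately (if absent-mindedly) exposed.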
What makes this more worrisome is that 245 of those instances had more than 4GB of RAM allocated. For a traditional VM, that might not be an unreasonable allocation. But when you consider how small a footprint a piece of malware needs to do some pretty dastardly stuff, that's a huge amount of computing resource sitting available and open.
It's important to keep this in perspective, though. Hackers have been penetrating home and desktop systems, most of which are equipped with more than 4GB of RAM. So why is it that these open Docker instances are so disturbing?
First, let me be clear that Cisco and Rapid7 did not draw these conclusions. They've simply reported on the possibility of a vulnerability. I'm making the intellectual leap from awareness of vulnerability to examination of implications.
What this means and why it worries me
Here are my two biggest concerns. First, Docker instances are intended to be data center-centric systems, not desktop or user-centric systems. Malware has spread rapidly by jumping from user systems into corporate environments, which is why we've seen such damage from advanced persistent threats and other data center-resident invasions.
But with Docker, the intent is for most instances to live in the data center. Rather than having to negotiate or traverse the boundary between user space and server space, such a large vulnerability in data center space can vastly reduce what little friction is left to hold back malware penetration into the corporate core.
Second, Docker allows very small containers, which contain entire server configurations, to be fired up and executed. If open instances of Docker are available, malware developers can potentially insert more complex and powerful systems into the data center. Instead of just a rogue process, malware developers can, essentially, install their own rack of malicious servers right inside the data center.
So far, there are relatively few open instances. But since they have the potential of providing a fast lane directly inside protected space, I urge developers and IT managers to investigate and secure all existing Docker instances. Docker told me, "The Docker Bench tool is also designed to identify issues by running in a container and scanning the environment." Be sure to develop a best practice that prevents "wide open" Docker instances from ever being deployed.
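The Docker Bench tool Docker mentions is published as the open-source docker-bench-security project, and it runs the way Docker describes: as a container that scans its own host. Here's an abbreviated sketch of the invocation from the project's documentation; check the current README before running it, since the required mounts and capabilities have changed over time:

```shell
# Sketch: run Docker Bench for Security against the local host.
# It needs broad host visibility (host network/PID namespaces and the
# Docker socket) to audit the daemon's configuration.
docker run -it --net host --pid host \
  --cap-add audit_control \
  -v /var/run/docker.sock:/var/run/docker.sock \
  docker/docker-bench-security
```

The output is a checklist of warnings and passes keyed to the CIS Docker Benchmark, which makes it a reasonable starting point for the "best practice" audit suggested above.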
You can follow my day-to-day project updates on social media. Be sure to follow me on Twitter at @DavidGewirtz, on Facebook at Facebook.com/DavidGewirtz, on Instagram at Instagram.com/DavidGewirtz, and on YouTube at YouTube.com/DavidGewirtzTV.