Canonical introduces fan networking for containers

When you have thousands of containers on a server, how do you network them all? Canonical, Ubuntu Linux's parent company, has an answer: Fan.
Written by Steven Vaughan-Nichols, Senior Contributing Editor

How many containers can you run on a server? At OpenStack Summit, Canonical, Ubuntu's parent company, showed that it could run 536 Linux containers on an Intel server with a mere 16GB of RAM. That's great, but how do you network them all? Canonical thinks it has the answer: fan networking.

Ubuntu's fan will make networking containers much easier.
Mark Shuttleworth, Canonical and Ubuntu's founder, explained that "one of the real frustrations of the container generation ... is a shortage of easily accessible IP addresses."

Shuttleworth continued, "It seems weird that in this era of virtual everything that a number is hard to come by. The restrictions are real, however, because AWS [Amazon Web Services] restricts artificially the number of IP addresses you can bind to an interface on your VM. You have to buy a bigger VM to get more IP addresses, even if you don't need extra compute."

This is not a new problem. One approach would be to assign each container an IPv6 address, but many data centers still use IPv4 for internal networking. As Shuttleworth observed, "IPv6 is nowhere to be seen on the clouds, so addresses are more scarce than they need to be in the first place."

CoreOS created a workaround for container addressing, first called Rudder and now named flannel. Flannel keeps an etcd database that maps containers and their IP addresses to the hosts they run on, and it uses that database to build arbitrary point-to-point tunnels or routes between host machines so containers can talk to one another. Canonical claims that fan is a simpler approach in cases where migration is not needed and where the typical number of overlay addresses is similar for every host in the system.

So how does fan do it? Fundamentally, fan is an extension of Linux's network tunnel driver. Dustin Kirkland, Canonical's Ubuntu cloud solutions product manager, explained:

Each container host has a "fan bridge" that enables all of its containers to deterministically map network traffic to any other container on the fan network. I say "deterministically" in that there are no distributed databases, no consensus protocols, and no more overhead than IP-IP tunneling. [A more detailed technical description can be found here.] Quite simply, a /16 network gets mapped onto an unused /8 network, and container traffic is routed by the host via an IP tunnel.

Fan's net effect for IP addressing is that each of your existing IP addresses gains roughly 250 more addresses behind it. Anywhere you have an IP address you can make a fan, and every fan gives you 250 times as many addresses. Since you can run multiple fans, each IP address could stand in front of thousands of container IP addresses.

Fan does this by expanding a class B IPv4 range onto a class A range, using unallocated class A IPv4 ranges that publish no routes on the Internet.
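To make that deterministic mapping concrete, here is a minimal Python sketch of the address arithmetic, assuming a 250.0.0.0/8 overlay and a 172.16.0.0/16 underlay along the lines of Canonical's examples; the function names and addresses are illustrative, not Canonical's actual code.

    from ipaddress import IPv4Address, IPv4Network

    # Illustrative sketch of the fan's deterministic address mapping -- not Canonical's code.
    # A host's last two underlay octets pick a /24 inside the /8 overlay, so each
    # underlay /16 address "fans out" into roughly 250 container addresses.

    OVERLAY = IPv4Network("250.0.0.0/8")     # unused class A range (example value)
    UNDERLAY = IPv4Network("172.16.0.0/16")  # the hosts' real /16 network (example value)

    def fan_subnet(host_ip: str) -> IPv4Network:
        """Overlay /24 that a given underlay host serves."""
        host = IPv4Address(host_ip)
        assert host in UNDERLAY, "host must sit on the underlay /16"
        o = host.packed  # the four octets of the host address
        return IPv4Network(f"{OVERLAY.network_address.packed[0]}.{o[2]}.{o[3]}.0/24")

    def underlay_host(container_ip: str) -> IPv4Address:
        """Invert the mapping: which host tunnels traffic for this container address?"""
        c = IPv4Address(container_ip).packed
        u = UNDERLAY.network_address.packed
        return IPv4Address(f"{u[0]}.{u[1]}.{c[1]}.{c[2]}")

    print(fan_subnet("172.16.3.4"))     # 250.3.4.0/24 -> ~250 usable container addresses
    print(underlay_host("250.3.4.17"))  # 172.16.3.4 -> encapsulate and send to this host

With one such fan, the host at 172.16.3.4 answers for the roughly 250 usable addresses in 250.3.4.0/24, and traffic for any of them is simply encapsulated and sent straight to that host, with no lookup table in between.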

Shuttleworth plans "to submit an IETF RFC for the fan, for address expansion. It turns out that 'Class E' networking was reserved but never defined, and we'd like to think of that as a new 'Expansion' class. There are several class A network addresses reserved for Class E, which won't work on the Internet itself. While you can use the fan with unused class A addresses (and there are several good candidates for use!) it would be much nicer to do this as part of a standard."

Fan, which is in early beta, works with LXC/LXD and Docker. It is available on Ubuntu on AWS, with other clouds to follow soon. The code is currently optimized for Ubuntu, but it should work with any Linux.

Fan's goal is "to fit naturally into the networking tools available today and to be easily used from the existing persistent network configuration systems we already have. Similar tools already exist, for example we have brctl for managing Ethernet bridges and ip for managing addresses. These are commonly used directly in /etc/network/interfaces for persistent configuration, such as adding alias addresses to an interface."

The results, Kirkland claims, are "Multiple containers, on separate hosts, directly addressable to one another with nothing more than a single network device on each host. Deterministic routes. Blazing fast speeds. No distributed databases. No consensus protocols. Not an SDN [Software Defined Network]. This is just amazing!"
