Way back when, in the dark ages of the early 1990s, when I built my first public-facing internet systems, all my servers had names.
The test boxes in my Shepton Mallet data center were named after the Latin names for big cats, while the main servers in London were named after great British inventors. Even then, though, they also answered to names that described what they did as much as what they were: www, irc, mx, ns.
As the internet evolved, so did my naming schemes. On one project the servers were photo1, photo2, and so on, with storage1 and storage2 the next tier in a typical n-tier architecture. But they were still named and configured by hand.
Configuring those servers wasn't easy. Which OS versions were they running? What hardware did they have? And why did one box differ from another when they'd been ordered at the same time to the same specification from the same vendor? Looking back, it's a wonder that there were no obvious security breaches, and no serious downtime due to hardware and software failures.
Servers as cows
There's a popular metaphor that compares servers to cows. It goes something like this:
Individual servers are like hand-reared calves. They have names, and are treated like pets, with their little foibles seen as endearing personality traits. I've got a server just like that in my office, customised hardware and a hand-tuned OS.
A business rack of servers is more like a herd of prime pedigree milk cattle. They may not get the same loving care as the pet calf, but they're certainly pampered, with expensive veterinary care and nice warm barns and the best fodder. They'll still have names, but it's more likely they'll all be called "Buttercup".
As we move to a datacentre, servers start getting simpler names, most of which are formed by adding a serial number to a base name for each configuration. We're now using tools like System Center Configuration Manager or Ansible or Chef, automatically deploying configurations as required. Our cows are living on a large farm. We know which one is which, and we see when one drops dead, but we're not really bothered as we've got a spare or two that can go out into the field.
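With tools like Ansible, that automatic deployment boils down to declaring a desired configuration once and stamping it onto every server in the group. As a minimal, hypothetical sketch (the `photoservers` host group and the nginx role here are illustrative, not from any real estate):

```yaml
# Hypothetical Ansible playbook: apply one base configuration to a
# whole group of identically built servers.
- hosts: photoservers
  become: true
  tasks:
    - name: Ensure the web server package is present
      apt:
        name: nginx
        state: present

    - name: Ensure the web server is running and starts on boot
      service:
        name: nginx
        state: started
        enabled: true
```

Run against a hundred hosts or a thousand, the playbook is the same; which individual cow gets which configuration stops being a question anyone asks.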
In the cloud, with servers running on a virtual infrastructure, we're now like a rancher, with our cattle roaming a thousand square miles of land. We're not sure just how many cows we have, but we can guess to within a couple of hundred at a time. Every now and then we round them up and count them, refreshing brands and tags, but we're pretty sure we missed a dozen or so -- but it doesn't really matter. Someone will find their skeletons out there in the desert at some point in the future.
From cows to chickens
Scale is key to how we treat our servers. But the way we're building our cloud infrastructures is changing, and with that we're going to have to change the metaphor. It's time to ditch the cows.
Technologies like the new Windows Nano Server, Docker and rkt containers, and Hyper-V containers promise a new model based around small isolated elements of userland, with only the basic services needed to deliver a unit of functionality. Servers built using these technologies aren't the servers we're familiar with; they're the endpoint of a build, pushing elements of infrastructure as well as our services.
Cows, even on the ranch, do need some looking after. But a barn full of chickens is a very different kettle of chicks. They're pretty much self-maintaining, just needing food. We don't count them -- we can't count them! We just feed them, and eventually we've got chickens producing eggs. Small and efficient, they mature quickly, and hatch in the hundreds and thousands. Our new service-oriented servers aren't cows, they're chickens. If we need more servers, we just build a new barn.
Somewhere down the line there's going to be a bifurcation in how we build and manage infrastructures. Our flocks of server chicks will still be there, but in many cases we're not even going to think about them, building serverless applications on platform-as-a-service environments like Azure Service Fabric and AWS Lambda.
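In the serverless model, all that's left of the "server" is the function itself. A minimal sketch of an AWS Lambda-style handler in Python -- the event shape and greeting logic are invented for illustration, not from any real service:

```python
import json

def handler(event, context):
    # Lambda invokes this function with the event payload; there is no
    # server for us to name, patch, or count. The "name" field here is
    # a hypothetical input, defaulting to "world" if absent.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```

The platform decides where, and on how many machines, this runs; we never see the chickens at all.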
In tomorrow's architectures, someone else will be looking after the chickens; we'll just be buying the eggs.