
The dawn of the modular computer

Tomorrow's datacenter will be modular. Are you ready for what that means for how you build and run applications?
Written by Simon Bisson, Contributor
Image: Andreas Schindl
In Stanley Kubrick's seminal film 2001: A Space Odyssey, Dave Bowman pulls module after module out of the Discovery's HAL-9000 computer - and as he does so, it becomes more and more resource-constrained, gradually losing its almost human-like personality.

Eventually, as the last module drifts past Bowman in zero gravity, HAL is no more.

Back in the '60s, as Kubrick and Arthur C Clarke wrote the script for their epic movie, computers were still in the early mainframe days. The idea of a highly-distributed, densely-connected datacenter (like HAL) was, well, science fiction. Now, nearly 50 years later, we're living in a world where many of our computing resources are in systems like that fictional computer, and we're starting to think about what comes next.

Fabrics and pools

Talk to anyone working with cloud infrastructure, private or public, and you won't hear about servers or switches or disk arrays. Instead it's a world of fabrics and pools, where commodity hardware is the basis for hyper-scale, cost-effective datacenters, and where new open hardware is changing the underlying economics even further.

At the Open Compute Project's summit back in March, vendors were showing x86-based hardware for storage and networking, designed to offer platforms for a software-defined datacenter - based on the work that Facebook, Microsoft, and others are using as the foundations of their public clouds. It's an approach that's making datacenters cheaper to build and easier to manage. The same underlying technologies are behind hardware like Microsoft's Dell-based Cloud Platform System (CPS), which gives you an Azure-consistent service in, if not a box, at least a rack or four.

Software-defined datacenters are key to delivering the benefits of cloud computing, even on your own premises. Automating infrastructure and virtualizing everything else makes it possible to manage and control resources in a way that wasn't possible with traditional datacenter architectures. Instead of individual servers, we're given tools that let us access pools of compute, storage, and networking that can be configured to support our applications and services.

We no longer need to know which server we're using; we just need to know that we're using four medium-power cores with four threads each and a few gigabytes of memory. If we're using IaaS, perhaps we need to know that we're deploying a particular OS on that set of resources, but in many cases we don't even need to know that much, just that we're deploying pre-configured containers or code.
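To make that concrete, here's a minimal sketch in plain Python of what a resource-oriented deployment request might contain. Every field name is purely illustrative, not taken from any particular cloud's API.

# A hedged sketch of a resource declaration; all field names here are
# hypothetical, invented for illustration only.
deployment_request = {
    "compute": {"cores": 4, "threads_per_core": 4, "power_class": "medium"},
    "memory_gb": 8,
    "artifact": {
        "kind": "container",   # could equally be a pre-built code package
        "image": "registry.example.com/orders-service:1.4",
    },
    # Note what's missing: no server name, no rack, no host OS details.
}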

Our future

So what's next? As networking gets faster, we're going to see a new fabric arriving, one where we'll be able to pool memory and allocate it alongside compute resources. Initially it'll be at the rack level, where high-speed connections will allow us to drop memory modules alongside our compute. Today's thin servers will be even thinner: arrays of processors with just enough memory to boot and load a host OS.

Meanwhile application manifests will get more complex, defining your code's memory requirements and scaling options. The old memory boundaries will go away, but what we gain will be balanced by new latency issues that will require developers to think more about how to manage consistency and concurrency in distributed memory pools. Eventually, of course, OSes and frameworks will handle those issues for us, but it's a change that's going to mean very different programming models.
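As a rough illustration of where those manifests could be headed - a sketch only, with invented field names rather than any real format - an application might one day declare local and pooled memory separately, along with the latency it can tolerate and how it should scale:

# Illustrative only: a manifest fragment for an application that uses
# fabric-attached memory. None of these fields belong to a real format.
app_manifest = {
    "name": "session-cache",
    "memory": {
        "local_gb": 2,        # memory on the same board as the CPU
        "pooled_gb": 64,      # memory borrowed from elsewhere in the rack
        "max_latency_us": 5,  # how slow pooled access is allowed to be
    },
    "scaling": {"min_instances": 2, "max_instances": 16},
}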

Technologies like containerization are ideal partners for the fabric-oriented datacenter, as they add application abstraction to the separation of hardware and software. By allowing applications to be encapsulated in isolated containers (and easily replicated), it's possible to allocate applications to the appropriate resources, using the container manifest as a tool for describing the required resources. With fast networking linking container APIs, it won't matter where in a datacenter your code is running; all it needs is memory, storage, and CPU.
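A toy placement routine makes the point: given a container's manifest and a set of resource pools, what matters is whether the numbers fit, not which physical box ends up running the code. This is a simplified sketch, not how any real scheduler is implemented.

# Simplified sketch of manifest-driven placement; real schedulers also
# weigh affinity, failure domains, and network locality.
def place(manifest, pools):
    for pool in pools:
        if (pool["free_cores"] >= manifest["cores"]
                and pool["free_memory_gb"] >= manifest["memory_gb"]):
            pool["free_cores"] -= manifest["cores"]
            pool["free_memory_gb"] -= manifest["memory_gb"]
            return pool["id"]   # the application never needs to know this value
    return None                 # no capacity anywhere in the fabric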

Microsoft's CPS and the hardware being developed as part of the Open Compute Project are the logical next steps in delivering the next-generation datacenter, bringing hyperscale cloud hardware into your own racks.

With next-generation management tooling, bringing up a datacenter will be as easy as plugging power and networking into a rack. Hardware will be self-describing, and 'metal-as-a-service' tools will deliver host OSes as required and add the server resources to a datacenter-level tool like Kubernetes or Mesos.
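In outline, and with every call below invented for illustration rather than drawn from Kubernetes, Mesos, or any real metal-as-a-service product, that commissioning flow might look something like this:

# Hypothetical commissioning sketch: hardware describes itself, a
# metal-as-a-service layer images it, and the node joins the cluster.
def commission(node, provisioner, scheduler):
    host_os = provisioner.pick_host_os(node["hardware_class"])  # invented call
    provisioner.install(node["management_address"], host_os)    # invented call
    scheduler.register_node(                                     # invented call
        name=node["serial"],
        cores=node["cores"],
        memory_gb=node["memory_gb"],
    )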

Think micro

The microservice technologies we're using to build our new applications are also part of this next-generation management environment. They let us break services down into logical component parts and then distribute them across commodity hardware - as well as take advantage of the next generation of programmable devices. An SDN tool like Microsoft's Azure load balancer is actually a distributed control plane that delegates functionality to commodity hardware-based switches, programming FPGAs in network hardware to manage packets directly.
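The pattern is easier to see in miniature. In the sketch below (hypothetical names throughout, not Azure's actual API), the load-balancing decision is made once in the control plane, then handed to the switches as simple rules their hardware can apply to every packet:

# Illustrative control-plane sketch: compute forwarding rules centrally,
# push them to switch agents, and let the data plane handle packets.
def program_switches(vip, backends, switches):
    rules = [{"match_vip": vip, "forward_to": backend} for backend in backends]
    for switch in switches:
        switch.install_rules(rules)  # hypothetical agent call, not a real API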

Like it or not, the hardware and software that drives the big cloud-scale providers is going to end up in our datacenters. The investment they're making is changing the hardware that server vendors are producing, whether that's white-box OEM equipment or branded servers from the likes of HP and Dell. That's a good thing, as we can already take advantage of improved NIC hardware to speed up our datacenters, as well as OS-level storage virtualization and tiering.

As technology continues to migrate from cloud to rack, we're quickly moving to a world where even the smallest datacenters can be software-defined, and built from modular hardware. That means we need to be ready for those changes, and for what they mean for the way we write and manage applications: building code that's just as modular.
