
Here comes the future of application development: Treating infrastructure as code

It's time to fundamentally rethink the way we build and deliver applications, for a world of clouds and containers.
Written by Simon Bisson, Contributor

The way we build applications is changing, fast. Driven by the shift to a continuous-delivery, DevOps-powered culture, where infrastructure is as much code as, well, our code, we're starting to use source code management and build automation tooling in new ways.

It's a fundamental change in how we need to think about applications and infrastructure, using virtualization to abstract what we build from what we build it on. As we decouple hardware and software, we're also in a position to change where and when in the application development lifecycle we put the pieces together.

The folk at configuration management tool vendor Chef talk about "moving left". It's an interesting concept, looking at the application lifecycle model and moving elements earlier and earlier in the process. When we're using cloud services and virtual infrastructures, it pays to start thinking of defining our VMs and our containers at the same time as we write our code. And if we're doing that, then shouldn't we be thinking about storage and about networking at the same time?

Key to this approach is the idea of the immutable container. Containerization is perhaps best thought of as a way of adding another layer of abstraction to our virtual infrastructure: instead of abstracting virtual infrastructure from the physical, we're making our applications and services abstraction layers in their own right. With immutable containers, a Docker or similar container wrapping an application or a service is the end product of the build process. Deployment is then simply a matter of unloading the old container, installing the new one, and letting your application run.
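To make that concrete, here's a minimal sketch of such a swap scripted with the Docker SDK for Python - just one of several ways to drive the Docker engine, not a tool named in this piece - with a hypothetical image name, port, and environment.

import docker
from docker.errors import NotFound

client = docker.from_env()

# Pull the freshly built image (hypothetical name); the running container is
# never patched in place.
client.images.pull("registry.example.com/orders-service", tag="2.4.1")

# Unload the old container...
try:
    old = client.containers.get("orders-service")
    old.stop()
    old.remove()
except NotFound:
    pass  # first deployment: nothing to remove

# ...install the new one, and let the application run. Configuration comes from
# the image and its environment, so nothing is modified after startup.
client.containers.run(
    "registry.example.com/orders-service:2.4.1",
    name="orders-service",
    detach=True,
    ports={"8080/tcp": 8080},
    environment={"NODE_ENV": "production"},
)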

The immutable container is an ideal model for a microservice world. Wrapping up a Node.js service with all its supporting code in a container means we not only have a ready-to-roll service, we also have an element that can be delivered as part of an automated scale-out strategy. As new instances are needed, they can be quickly copied from a library of containers, configured using tools like Chef or PowerShell's Desired State Configuration, and then left running for as long as they're needed.
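As an illustration, here's a rough scale-out sketch along the same lines, again using the Docker SDK for Python; the image, labels, and replica count are invented for the example, and in practice an orchestrator or a tool like Chef would drive this loop.

import docker

client = docker.from_env()
IMAGE = "registry.example.com/orders-service:2.4.1"  # hypothetical image from the container library

def scale_to(replicas):
    # Start additional copies of the same immutable image until the target is met.
    running = client.containers.list(filters={"label": "svc=orders"})
    for i in range(len(running), replicas):
        client.containers.run(
            IMAGE,
            name=f"orders-service-{i}",
            labels={"svc": "orders"},
            detach=True,
            ports={"8080/tcp": None},  # let Docker pick a free host port
        )

scale_to(4)  # demand has grown: make sure four instances are running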

It's a theme that's echoed by Amazon at its AWS Summit, where deployment is treated as rolling out a parallel infrastructure and cutting over from the old to the new (keeping the old one around, offline, in case of issues). Once your new infrastructure is stable, all you need to do is delete the old. You could even keep the A and B infrastructures in place while running A/B tests, using a managed load balancer to route users to the appropriate version - both to get statistically valid results and to make sure returning IP addresses are always routed to the version they originally connected to.
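The sticky-routing part of that is easy to picture. The sketch below is not any particular load balancer's API: it simply hashes a client's IP so the same client always lands on the same stack, with the B-stack share as an invented parameter.

import hashlib

B_SHARE = 0.10  # hypothetical: send roughly 10% of clients to the new "B" stack

def route(client_ip):
    # Deterministic bucket per IP, so repeat visits always land on the same version.
    bucket = int(hashlib.sha256(client_ip.encode()).hexdigest(), 16) % 100
    return "B" if bucket < B_SHARE * 100 else "A"

# Same IP, same stack, every time.
assert route("203.0.113.7") == route("203.0.113.7")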

With cloud and with virtualization, making infrastructure part of your code makes a lot of sense. Building infrastructure in today's IT world is very different from a decade or so ago. Then you'd have to order servers and routers and disks and all the ancillary components of a rack or two, wait maybe months to get them delivered, and weeks more to have them configured and installed before you could start a deployment (and that's without considering development and test environments). Now you just define your VMs and their services, click deploy, and minutes later you're ready to roll.

Using tools like Chef, your infrastructure definition lives alongside your server configurations. RESTful APIs for cloud services (whether public, private, or hybrid) mean you can simply deliver JSON definitions and get back the servers and services you need. With tooling to create those definitions, you can store them in Git, manage them with workflows like Gitflow to handle development branches, bug fixes, and releases, and deploy them as part of a Jenkins or Chef-powered build.
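In practice that can be as simple as the hedged sketch below: a JSON definition kept in the same Git repository as the application, pushed to a cloud provider's REST endpoint. The URL, token, and file layout are hypothetical; real providers each have their own template formats and APIs.

import json
import requests

# The definition lives in source control next to the application code.
with open("infrastructure/web-tier.json") as f:
    definition = json.load(f)

# Push it to the provider's deployment API (endpoint and auth are invented here).
resp = requests.put(
    "https://cloud.example.com/api/deployments/web-tier",
    json=definition,
    headers={"Authorization": "Bearer <token>"},
    timeout=30,
)
resp.raise_for_status()
print("Deployment accepted:", resp.json().get("status"))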

Treating infrastructure like code makes a lot of sense - even if you're managing physical, not virtual, devices. You're always going to need to configure systems and deploy prerequisites. Using tools like Chef or Ansible to set up server OS features, and wrapping applications and all their ancillary elements in containers, means you can go from bare metal to service in the minimum time, while using the same underlying code to manage virtual infrastructures as well.
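As a loose sketch of what that pipeline step might look like - assuming Ansible is installed and that a playbook and inventory exist under the placeholder names below - the OS layer is configured first, and the same container rollout shown earlier then takes over.

import subprocess

# 1. Configure the OS layer (packages, users, the Docker engine) with Ansible.
#    "inventory.ini" and "site.yml" are placeholder file names.
subprocess.run(
    ["ansible-playbook", "-i", "inventory.ini", "site.yml"],
    check=True,
)

# 2. Hand off to the container rollout from the earlier sketch, which works the
#    same way whether the host is bare metal or a VM.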


By treating infrastructure as code and by using containers to wrap applications and services, we're now able to take a complete set of servers and services from a dev server to a rack to a stamp and then from a private to a hybrid to a public cloud - without changing our source control or our continuous development and delivery tools.

It's a world where the old "works on my development laptop" joke becomes technically actionable. Sure, we could put the developer's laptop into production, but it's a lot better to simply redeploy the virtual infrastructure they're using and the app containers they've built.

Changing how we think about applications and infrastructure, to a model where everything is code, is much more than adopting a DevOps way of working. It's a fundamental shift in the way we build, run, and manage our applications - and a shift that's definitely for the best.
