
Containers: Your secure, no-upgrade future in the cloud

Imagine a future with no malware and no workflow-interrupting software upgrades. It's coming.
Written by Jason Perlow, Senior Contributing Writer

Based on the feedback I got on my last piece, it's clear that everyone hates upgrading their operating system and applications.

The issue isn't so much that anyone questions the value of patching an OS to protect it from malware or to add new functionality; it's that newer system software tends to make older hardware obsolete, so upgrade paths aren't guaranteed.

And then there are the apps, which frequently have to be dealt with as well.

We get it, you don't want to do this anymore.

There is a solution on the horizon, but it will mean radically changing the way we think about how operating systems and software applications are built in the future.

The technology to enable a future without disruptive upgrade paths -- or at least one that is as minimally disruptive as possible -- already exists. It's called containers.

The layperson who doesn't work in or develop software for a datacenter-oriented or a cloud computing environment may not fully understand the concept, but the technology is not new.

In fact, it has been in common use for about 20 years -- on UNIX-based servers.

In on-premises environments, containerization runs natively today on Linux using Docker and its derivatives, and it will be a built-in feature of Windows Server 2016.
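
To make this concrete, here's a minimal sketch of what driving a container on a Linux host looks like today, using the Docker SDK for Python. The image name and port mapping are purely illustrative:

    import docker

    client = docker.from_env()          # connect to the local Docker daemon
    web = client.containers.run(
        "nginx:latest",                 # pulled from a registry if not already present
        detach=True,                    # run in the background
        name="example-web",
        ports={"80/tcp": 8080},         # map the container's port 80 to host port 8080
    )
    print(web.short_id, web.status)     # starts in seconds, with no OS install involved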

Much of the discussion about containers today centers on reducing the cost and complexity of cloud computing. Service providers and hyperscale public clouds, such as Amazon Web Services and Azure, are just now starting to deploy the technology.

Cost reduction through increased server density, agility, modularization, rapid provisioning and better security in the cloud is certainly the major goal of containerization, but that's not the only place where the technology can be applied.

Eventually, container technology will filter down to the operating systems that run on all of your client devices -- your personal computers, your mobile technology, and your IoT appliances.

Why would you want to put containers on your client OS? Or rather, how do containers move us away from the current upgrade/update churn that is the status quo?

Let's begin by explaining the status quo. On the device, we tend to think of the operating system as tied to the hardware, or running "on the metal".

At the lowest level there is firmware, which provides a low-level interface between the hardware and the operating system kernel; the kernel, in turn, runs device drivers and handles resource and process management.

On top of that, you have higher-level functions and subsystems: those that support running actual applications and user interfaces, as well as file systems and networking.

Obviously, upgrading/updating operating system kernels and subsystems is doable today and is commonplace, but there are a lot of complexities and chains of dependencies involved in the process.

The closer to the metal you get, the greater the potential impact on everything that resides above it.

There are many other variables to deal with as well -- complex regression testing whenever you swap out a software component in an operating system, just for starters.

The way the industry has dealt with this problem, primarily within the datacenter, has been virtualization. Although the technology is already enabled on most 64-bit PC operating systems and PC hardware, most end users do not use it directly.

Virtualization has a lot of advantages in that it abstracts the hardware from the operating system. Between the OS kernel and the firmware, virtualized systems run a hypervisor, which is essentially a low-level intermediary operating system that performs many of the hardware-related functions of an OS kernel.

VMware ESX, Microsoft's Hyper-V, and Linux KVM are common hypervisors that run on the Intel x86 platform. There are also numerous hypervisors that run on the ARM architecture.

Hyper-V, by the way, is included at no extra cost with every copy of Windows 8.x Professional and above. You can also turn the OS running on the metal of your PC into a virtual machine, with all of its applications preserved as-is, if you want to move it to a new PC.

A virtualized OS, or virtual machine (VM), is essentially a file, or a small collection of files, that contains everything an OS needs to run, along with its apps. Think of it as a ZIP file you can copy to a USB thumb drive and then move or copy to another computer over the network.

A VM makes an operating system portable, allowing you to move the OS, with all of the apps contained in it, to different hardware over a network.

Basically, your OS is no longer tied to the hardware it lives on. Virtualization also introduces a boundary between malware running in one VM and everything running in another, so it has inherent security benefits as well.

So hypervisors are great, right? Problem solved, virtualize everything. Well, not so simple.

First, there's a cost to virtualization, in that there is CPU and memory overhead involved. On a server with hundreds of gigabytes of RAM and dozens of processor cores within a large datacenter, that's not as much of a concern, but on a normal end-user PC or on a mobile device where resources are typically a lot more constrained, that's not necessarily practical.

That being said, PCs and mobile hardware platforms have gotten significantly more powerful in the five years since I last proposed we virtualize everything. So we could solve a ton of problems by doing that today, including the issue of making it easier to upgrade OSes on diverse types of Android hardware.

With virtualization technology, the "toxic hellstew" can be made a thing of the past.

However, even if the hardware and resource-utilization concerns are no longer as much of an issue as they used to be, the main problem virtualization doesn't solve is the incremental upgrade of the OS, or of the applications installed on it that may have version-specific and other OS-specific dependencies.

If the OS becomes outdated, it still needs to be replaced, the apps might not run, and an in-place OS upgrade might still fail. You've simply lifted and shifted the problem from physical hardware to virtual hardware.

Containerization specifically solves the application lift-and-shift issue. Your operating system can still be virtualized, but now your applications and major operating system components are designed so they each run in their own discrete space, or container.

A container runs on a container host -- a shared operating system kernel -- which in turn can run on a virtual machine or directly on the metal; the latter is the more likely scenario for something like a smartphone, a tablet, or an IoT device.
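
A quick way to see that shared kernel in action: every container on a host reports the host's own kernel version, because none of them boots a kernel of its own. Here's a small sketch, again using the Docker SDK for Python with an arbitrary example image:

    import docker
    import platform

    client = docker.from_env()
    print("host kernel:     ", platform.release())
    # Run a throwaway Alpine container; without detach, run() returns its output.
    output = client.containers.run("alpine", "uname -r", remove=True)
    print("container kernel:", output.decode().strip())   # same kernel as the host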

You can think of this application architecture as the foundational Lego Bricks of a 21st-century computing environment.

Containers can talk to each other over a virtual network, but otherwise they are completely isolated from each other.

From a security standpoint, each container has its own resources allocated to it. If those resources are compromised in any way, you just kill the container in memory, destroy the application storage it may have compromised, restart it from a fresh template, and restore the user data -- the malware is destroyed, as if nothing happened.
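
Here is roughly what that "kill it and start fresh" recovery pattern looks like in practice. This is a sketch against the Docker SDK for Python; the container, volume, and image names are hypothetical, and restoring the user data from a known-good backup is left out:

    import docker

    client = docker.from_env()

    # 1. Kill the suspect container and throw away its writable layer.
    client.containers.get("example-app").remove(force=True)

    # 2. Destroy the application storage it may have compromised.
    client.volumes.get("example-app-data").remove(force=True)

    # 3. Recreate both from clean sources: a pristine image and a fresh volume
    #    (user data would then be restored into it from backup).
    client.volumes.create("example-app-data")
    client.containers.run(
        "example/app:latest",
        detach=True,
        name="example-app",
        volumes={"example-app-data": {"bind": "/data", "mode": "rw"}},
    )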

One such technology that has demonstrated this desired behavior in high-security environments is Bromium, which is a hardware-assisted containerization platform that leverages virtualization technology present in current generation Intel processors.

Using containers, an application or a containerized discrete process within an application behaves as if it is running on its own dedicated operating system. So for example, a web browser or a productivity app could run in its own container, isolated completely from all other OS processes.

You could take this a step further by isolating each of a web browser's tabs in its own separate container as well.

The big improvement over straight virtualization is that containers can be swapped out and moved independently of the base operating system, using automation and package deployment methods such as Docker, which runs today on Linux and, soon, on Windows.
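
In practice, swapping a containerized component for a newer build looks something like the following sketch (Docker SDK for Python again, with hypothetical image and container names). Note that the base operating system underneath is never touched:

    import docker

    client = docker.from_env()

    client.images.pull("example/app", tag="2.0")              # fetch the updated package

    client.containers.get("example-app").remove(force=True)   # retire the old version

    client.containers.run("example/app:2.0", detach=True, name="example-app")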

Because containers are packaged, this lends itself to more of an "app store" approach to software management -- everything is installed and updated from a centralized repository ("repo") remotely and in a modular fashion.

A repo can be publicly managed, such as the iOS and Mac App Store, the Google Play Store in Android, or the new Windows Store in Windows 10, or it can be privately managed, such as within a corporate intranet.

So where are we going with all of this?

From the perspective of the end-user, nothing fundamentally changes. You're going to have apps, you're going to have data, and you're going to have operating systems. But the pain and associated churn of upgrading your applications and migrating data to updated OSes is going to eventually go away. It's going to become mostly a transparent process.

Containers will allow you to swap out important software pieces of your OS just like replacing parts in your car. It won't be nearly as disruptive as things are today. Not only that, but you won't be as dependent on local processing, either.

Containerization will not only enable less painful software upgrades and provide a much stronger security framework, but it will also permit moving more, if not all, of your software applications to the cloud itself.

For some end-users and businesses, it will be possible to dispose of all local data storage and processing, leaving the burden of maintenance completely on the cloud and service provider.

Others may choose to store certain data and applications locally, with some portion of it running in the cloud. Overall it will allow far more flexibility and a much safer computing environment than you have now.

Within five years, you'll be able to look back and laugh at the good ol' days when software and operating systems were tied to each other and upgrades were painful. It sounds like a fantasy, but it's not -- it's the reality of containerization, and it represents the future of modern computing.

Are you ready for containers? Talk Back and Let Me Know.
