
Docker and Linux containers: Red Hat opens up on the issues

Container-type virtualisation, including software from Docker, looks promising but has yet to carve out a clear role, according to Red Hat platform business chief Jim Totton.
Written by Toby Wolpe, Contributor
Red Hat's Jim Totton: The very earliest stages of understanding how to use this next technology. Image: Red Hat

Red Hat's platform business vice president Jim Totton believes the current focus on containers could signal the growth of an alternative type of computing architecture — but says it's not yet clear how people will apply the technology.

Elements of container technology have existed in Linux in the form of cgroups since 2006 and in UNIX for decades. Containers sit on top of a single Linux instance and are a lighter-weight form of virtualisation, each capable of running an isolated app on a reduced OS under the control of a resource policy.
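
At its lowest level, that resource control comes down to the cgroup filesystem. The following Python sketch, which assumes a cgroup v1 memory controller mounted at /sys/fs/cgroup/memory (typical on RHEL 6 and 7) and root privileges, illustrates the mechanism containers build on: create a group, cap its memory, and place a process under that policy. The group name "demo" is a placeholder.

```python
import os

# Minimal sketch of the cgroup v1 interface underpinning containers.
# Assumes the memory controller is mounted at /sys/fs/cgroup/memory
# and that this script runs as root.
CGROUP = "/sys/fs/cgroup/memory/demo"

os.makedirs(CGROUP, exist_ok=True)

# Cap the group at 256 MB of memory.
with open(os.path.join(CGROUP, "memory.limit_in_bytes"), "w") as f:
    f.write(str(256 * 1024 * 1024))

# Writing a PID to the "tasks" file places that process (and any
# children it spawns afterwards) under the group's resource policy.
with open(os.path.join(CGROUP, "tasks"), "w") as f:
    f.write(str(os.getpid()))
```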

Red Hat included core container capabilities in Red Hat Enterprise Linux 6 and has enhanced those features in the recently released version 7, including improved support for technology from startup Docker, with which it has formed a partnership.

"If you look at virtualisation and containers, it's not that one is good and one is bad. They will each bring different benefits," Totton said.

"We've got eight or nine years of history of building virtual guest environments and virtual machines and so forth. Containers and Docker images? We have no history.

"It's the very earliest stages of understanding how to use this next technology. So there's a lot of vision and ideas, but not as much practice as we have with VMs."

Whereas virtual machines contain an entire copy of the OS along with the application, containers include only the runtime libraries needed by the app.

"Containers all run on a single kernel, a single server environment. They all share one rooted operating system whereas a virtual machine, each one has its own entire copy of the operating system," Totton said.

"An entire copy of the operating system isn't floating around with the application. You've got a much lighter-weight image that has just what that application needs. So there's a heavyweight-lightweight distinction that you could begin to draw around just how the image is constructed and what's in it.

"What makes it different to just running an application is that in a container it's isolated from other containers that are being run and you can assign policy for what resources and things are being given to the container."

Policies allow choices to be made about how much of the machine's resources are available to each container and how to balance competing demands, Totton said.
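
Docker surfaces those cgroup policies as flags on docker run. The Python sketch below is a hypothetical illustration: the image name "myapp" is a placeholder, but -m and --cpu-shares are standard Docker options that map directly onto the memory and CPU controls Totton describes.

```python
import subprocess

# Launch a container under an explicit resource policy.
subprocess.check_call([
    "docker", "run",
    "-m", "512m",           # cap the container at 512 MB of memory
    "--cpu-shares", "256",  # relative CPU weight when the host is contended
    "myapp",                # placeholder image name
])
```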

In March, Red Hat announced certification for containerised applications, with Docker as a primary supported container format. As well as being the name for the format, Docker is the company commercially exploiting the eponymous open-source project.

"How do you create an image that you're going to put inside that container? That's where Docker comes in. It's a tool and an architecture for creating the image that pulls the bits together of your application, various runtime libraries. It's a very flexible approach for how you can create this image," Totton said.

Because of the relative lack of experience with containers, compared with virtual machines, possible applications of the technology remain largely theoretical.

"There's a lot of vision and ideas, not as much practice as we have with VMs. With that as a caveat, the vision is maybe the ability to start to create libraries or marketplaces of Docker images that can be run in containers," Totton said.

"[There is also] the ability for ISVs or enterprises to have a new way of thinking about how they might build their catalogue of applications that you could run in a consumable way.

"An enterprise might build a catalogue of applications within a datacentre. A commercial enterprise might build a marketplace of ways of actually selling and deploying images — these are all vision things that people are talking about of what's possible, maybe, with Docker."

Totton said Red Hat's certification process is designed to address the issue of portability for containerised applications.

"When they build a Docker image and they're going to use RHEL to create it and then move it around, if it's certified, it means we and that partner are going to make a promise to that enterprise of what they can expect," he said.

"A lot of companies will talk about Docker as creating portable images and lots of promises of what could happen. But really to ensure an enterprise experience we have a certification programme for Docker images.

"We have an early-adopter version of that happening right now. We're working with some companies to develop a certification process that will become a formal process. But the idea is if an enterprise wants to consume a portable Docker image, they're going to want to know, 'What can I expect around does it work? Who do I call for support?', and all that kind of stuff."

Another element in Red Hat's approach to containers in Linux is Atomic Host, a new version of RHEL that's currently in the works.

"It's based on RHEL 7 but RHEL 7 all up has about 2,500 pieces to it. If you just want to deploy a server and all you wanted to do is run containers, you don't need all 2,500 packages," Totton said.

"This is a lightweight version of RHEL with just the packages you need to run containers, so that you can instantiate an optimised server for these kinds of things. I think we're at the beginning of another style of computing architecture."
