Red Hat wants you to contain yourself and your workloads

Red Hat's newest push in the virtualization realm is containers. You know, the good old BSD jail-style containers that leverage your hardware better than any other virtualization technology? Yes, those.


Hit Red Hat's website and you'll see that its newest virtualization technology push is toward containers. Some of us (yes, I for one) have known for years that containers are the best way to leverage your hardware assets. I realize that containers aren't exactly 'new' to Red Hat, since the original announcement came almost exactly a year ago. What is new is the push of container technology to the front page, and I'm impressed. You might recall that I've written about Proxmox a few times, discussed it with Jason Perlow, and started to write a book* about it. Proxmox is a hybrid hypervisor that combines full KVM virtualization and containers in a single, very powerful solution. But this post isn't about Proxmox; it's about Red Hat's containers.


Containers, if you don't already know, are also known as jails, zones, chrooted directories, and operating system-level virtualization. The basic premise is simple: You leverage the running system to create many secure directories that are partitioned off from one another and each partitioned system "believes" that it is a standalone operating system.
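The partitioning idea can be sketched with plain Linux namespaces, the kernel feature modern containers are built on. Here's a minimal, unprivileged demonstration, assuming a kernel with unprivileged user namespaces enabled and the `unshare` utility from util-linux:

```shell
# Enter a new user namespace and map the current user to root inside it.
# Inside that namespace we "are" root, but only within that partition;
# nothing privileged happens on the host system.
unshare --user --map-root-user whoami
```

Run as an ordinary user, this prints `root`: the partitioned environment believes it has its own superuser, while from the host's point of view it is just another unprivileged process.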

The top benefits I.T. professionals see with containers are:

  • Faster app deployment - 54%
  • Reduced effort to deploy apps - 51%
  • Streamlined dev and testing - 38%

The cool part is that containers add virtually no overhead or stress to the system. To the kernel, a container's workload is just a set of ordinary running processes, the same as on any normal server.

For users and for the contained application, it's a separate and independent world. You can assign IP addresses to containers. Each container can have its own users, including the root user. From the container's point of view, it is a fully functional system. You can even reboot it without affecting any other container or the host system.

Fascinating, yes? Yes.

From the Red Hat website:

"Linux® containers keep applications and their runtime components together by combining lightweight application isolation with an image-based deployment method. Containers introduce autonomy for applications by packaging apps with the libraries and other binaries on which they depend. This avoids conflicts between apps that otherwise rely on key components of the underlying host operating system. Containers do not contain an operating system (OS) kernel, which makes them faster and more agile than virtual machines. However, it does mean that all containers on a host must use the same kernel."

The "same kernel" restriction noted above means that you can't install Windows into a Linux container. It just isn't possible. But, you can install any other Linux distribution or flavor that you like into the container as long as it can use the running kernel.
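The shared-kernel point is easy to verify: entering new namespaces (or any container) never changes the kernel you are running on. A quick sketch, again assuming util-linux's `unshare` and unprivileged user namespaces:

```shell
# Compare the kernel release seen on the host with the one seen
# inside a fresh user namespace: it is the same kernel either way.
host=$(uname -r)
inside=$(unshare --user --map-root-user uname -r)
echo "host:      $host"
echo "namespace: $inside"
[ "$host" = "$inside" ] && echo "same kernel"
```

This is why a different Linux distribution's userland runs happily in a container, while Windows, which needs its own kernel, cannot.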


Primarily, ISPs use containers to provide virtual private servers (VPSs) to their customers. It is an extremely inexpensive method of providing web or application hosting to a large number of users on a shared system without compromising security between users.

In enterprises, containers offer an economic alternative to traditional virtualization. They also offer the opportunity to develop and to test in the same environment as production.

I'm glad to see that Red Hat has stepped up its game and entered this realm. I believe I've written about this in the past, calling on Red Hat, VMware, and Microsoft to allow the use of containers in their virtualization solutions. It's refreshing to see the offering evolve in this direction.

"Red Hat is working with the open source community through Project Atomic to help create industry-wide Linux container standards. Project Atomic helps make sure that common containers work with trusted operating system platforms. By working towards compatibility and coordinating standards, Project Atomic helps Red Hat and other vendors deliver a complete hosting architecture that's modern, reliable, and secure."

This "new" venture by Red Hat impresses me because I'd somewhat lost interest in Red Hat over the past few years. Not because it isn't a good distribution and platform for enterprise computing; rather, what left me cold is the feeling (and maybe it's just me) that Red Hat had become so mainstream that it was no longer innovative or interesting. That said, I'm pleased to see this new trend toward hybrid virtualization. I think it's the right answer for enterprises, and it's the right answer for Red Hat. Good job, Red Hat. I tip my hat to you.

*I started a book with Packt Press titled Introduction to Proxmox, but dropped it due to my schedule and a few other personal reasons. Someone should definitely write this book.
