
Cloud Native Computing Foundation adopts Kubernetes-friendly container runtime

Red Hat's CRI-O, an Open Container Initiative-based implementation of the Kubernetes Container Runtime Interface, is now a CNCF incubation-level project. As such, it may soon challenge Docker as the top container runtime.
Written by Steven Vaughan-Nichols, Senior Contributing Editor

A few years ago, Docker made containers popular. Now, with the rise of Kubernetes container orchestration and the Cloud Native Computing Foundation's (CNCF) newly adopted open-source CRI-O runtime, CRI-O may rise to the top of container deployments.

That's because, to run containers at scale, you need an orchestration program. By the end of 2017, Kubernetes had become the most popular container orchestrator.

You can, of course, use Docker to run containers under Kubernetes. Indeed, Docker is still Kubernetes' default container runtime. But the lightweight CRI-O runtime works hand-in-API-glove with Kubernetes.
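
To make that "API glove" concrete, here is a minimal Go sketch of the kind of call the kubelet makes to CRI-O over its gRPC socket. It assumes CRI-O is listening on its usual default endpoint, /var/run/crio/crio.sock, and that the k8s.io/cri-api v1 bindings are available (older clusters expose v1alpha2); treat it as an illustration, not production code.

```go
// Minimal CRI client sketch: dial the CRI-O socket and ask the runtime to
// identify itself, the simplest call the Kubernetes CRI defines.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1" // assumed v1 bindings
)

func main() {
	// The kubelet is pointed at this same endpoint; the socket path is
	// CRI-O's typical default and may differ on your distribution.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Version identifies the runtime answering on the socket.
	client := runtimeapi.NewRuntimeServiceClient(conn)
	resp, err := client.Version(ctx, &runtimeapi.VersionRequest{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("runtime: %s %s (CRI %s)\n",
		resp.RuntimeName, resp.RuntimeVersion, resp.RuntimeApiVersion)
}
```

In a real cluster none of this is hand-written: the kubelet is simply pointed at that socket (for example via its --container-runtime-endpoint setting) and speaks the same API on your behalf.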

CRI-O has the following features:

  • Storage: The github.com/containers/storage library is used for managing layers and creating root file-systems for the containers in a pod: OverlayFS, devicemapper, AUFS and btrfs are implemented, with OverlayFS as the default driver.
  • Container images: The github.com/containers/image library is used for pulling images from registries. Currently, it supports Docker schema 2/version 1 as well as schema 2/version 2. It also passes all Docker and Kubernetes tests. (A sketch of this pull path follows the list.)
  • Networking: The Container Network Interface (CNI) is used for setting up networking for the pods. Various CNI plugins such as Flannel, Weave, Cilium and OpenShift-SDN have been tested with CRI-O and are working as expected.
  • Monitoring: github.com/containers/conmon is a utility within CRI-O that is used to monitor the containers, handle logging from the container process, serve attach clients, and detect and report Out Of Memory (OOM) situations.
  • Security: Container separation policies are provided by SELinux, Linux capabilities, seccomp, and the other security settings specified in the OCI Specification.
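
As promised in the container-images bullet, here is a hedged sketch of that pull path: the same CRI socket exposes an ImageService, and a PullImage call is what ultimately drives the containers/image and containers/storage libraries mentioned above. The socket path and the busybox image reference are assumptions for illustration, and the connection setup mirrors the earlier sketch.

```go
// Image-pull sketch: ask the runtime, over the CRI socket, to fetch an image
// from a registry and unpack it into local storage.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1" // assumed v1 bindings
)

func main() {
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()

	// PullImage is what the kubelet issues when a pod references an image
	// that is not yet present locally.
	images := runtimeapi.NewImageServiceClient(conn)
	resp, err := images.PullImage(ctx, &runtimeapi.PullImageRequest{
		Image: &runtimeapi.ImageSpec{Image: "docker.io/library/busybox:latest"},
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("pulled:", resp.ImageRef)
}
```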

CRI, the Container Runtime Interface, began as an API to define the calls Kubernetes makes to container runtimes. This made it possible to build Kubernetes-friendly, lightweight container runtimes. CRI-O was the first CRI-compatible container runtime for Kubernetes. It was created by Google and Red Hat, with help from Intel, SUSE, and IBM, and it has become quite popular.
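
For readers who have not looked at the CRI, the hand-written Go interface below is a rough sketch of the kinds of lifecycle calls it defines. The real interface is a generated gRPC service in k8s.io/cri-api with many more methods and much richer request and response types; the names here only mirror its general shape.

```go
// Simplified, illustrative sketch of the Kubernetes CRI's shape.
// Not the actual generated bindings.
package cri

import "context"

// PodSandboxConfig and ContainerConfig stand in for the much larger CRI types.
type PodSandboxConfig struct {
	Name      string
	Namespace string
}

type ContainerConfig struct {
	Name    string
	Image   string
	Command []string
}

// RuntimeService is the pod- and container-lifecycle half of the CRI:
// a runtime such as CRI-O implements these calls, and the kubelet makes them.
type RuntimeService interface {
	RunPodSandbox(ctx context.Context, cfg *PodSandboxConfig) (podID string, err error)
	StopPodSandbox(ctx context.Context, podID string) error
	RemovePodSandbox(ctx context.Context, podID string) error

	CreateContainer(ctx context.Context, podID string, cfg *ContainerConfig) (containerID string, err error)
	StartContainer(ctx context.Context, containerID string) error
	StopContainer(ctx context.Context, containerID string, timeoutSeconds int64) error
	RemoveContainer(ctx context.Context, containerID string) error
}

// ImageService is the image half: pulling and removing images.
type ImageService interface {
	PullImage(ctx context.Context, image string) (imageRef string, err error)
	RemoveImage(ctx context.Context, imageRef string) error
}
```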

In part, said Brendan Burns, Kubernetes cofounder, that's because "A founding principle of CRI-O was to 'not reinvent the wheel' but to use shared components and refine approaches tested in production, and existing, battle-tested code. As CRI-O is specifically tailored for Kubernetes, it is tuned for performance, stability, compatibility, and adherence to standards, particularly the Kubernetes Conformance tests. CRI-O is a building block of any Kubernetes cluster, and facilitates the life cycle of containers as required by the Kubernetes CRI."

So does that mean CRI-O will replace Docker? Well, yes and no.

As Antonio Murdaca, a Red Hat senior engineer and CRI-O maintainer, explained, "Is CRI-O going to replace Docker? Nope, or well, it's meant as a Kubernetes focused runtime, so it replaces Docker in the context of Kubernetes. It won't replace Docker as the developer tool we're all used to. CRI-O does not implement the Docker Engine API or the Docker CLI. This means you can not use the Docker CLI to talk to a CRI-O daemon. You have to go through Kubernetes."

Still, it is going to give Docker competition. As Chris Aniszczyk, CNCF CTO, wrote, "CNCF hosts a variety of container runtimes and we're excited to have CRI-O join them as an incubation level project. Choice and competition benefit end users."
