Containers and their role in the Software Defined Data Center

Software Defined Data Centers (SDDCs) are modernizing the data center through virtualization and better resource usage. Find out how SDDCs use containers to bring service functionality to the next level.
Written by Scott Matteson, Contributor
[Image: iStock]

The ongoing evolution of the data center is a prime example of how technology never stands still. Server rooms used to be filled with an array of disparate systems, each generally performing one specific function, and ongoing maintenance and upkeep proved tedious and time-consuming.

Now the focus has shifted to consolidating resources for better efficiency and cost savings. Data centers are transforming into more intelligent, robust environments.

This increased intelligence is represented by a concept known as the Software Defined Data Center, or SDDC. In essence, the SDDC takes advantage of virtualization for better management and operational capabilities. A virtual machine (VM) manager called a hypervisor handles the operation of multiple operating systems on one physical server. But it's not just operating systems that can be virtualized; other traditional data center elements such as storage and networks can also be controlled by an SDDC for improved uptime, provisioning, and fault tolerance/disaster recovery. For instance, resource access and utilization can be automated by policy based on demand, and applications can be brought online rapidly to meet customer needs, such as during the busy holiday season or during a site failure.

Instead of a systems-based approach, an SDDC operates in a service-centric realm, in which the roles once filled by dedicated physical servers are handled by applications in a shared resource environment. The best part about this arrangement is that it applies equally to cloud-based and on-premises data centers, or a combination of the two (known as the hybrid cloud).

Containers are a significant element of an SDDC. Containers are conceptually similar to virtual machines, but instead of relying on a hypervisor to virtualize hardware, they virtualize at the operating system level: each container is a lightweight, stripped-down, isolated environment that shares the host's kernel rather than carrying a full OS of its own (think trees versus the forest). Applications then run inside these containers.
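To make that kernel-sharing idea concrete, here is a minimal sketch using the Docker SDK for Python (installed with pip install docker; Docker itself running on a Linux host is assumed). Running uname -r inside a container reports the host's kernel version, because the container never boots an operating system of its own:

```python
# Minimal sketch: assumes a Linux host with Docker running and the
# Docker SDK for Python installed (pip install docker).
import platform

import docker

client = docker.from_env()  # connect to the local Docker daemon

# Run `uname -r` inside a throwaway Alpine container. The output is the
# *host's* kernel version, because containers share the host kernel
# instead of booting an operating system of their own.
container_kernel = client.containers.run(
    "alpine", "uname -r", remove=True
).decode().strip()

print("Host kernel:     ", platform.release())
print("Container kernel:", container_kernel)
```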

Why are containers special?

[Diagram: the traditional computing stack versus the containerized stack. Image: Scott Matteson]

The diagram above compares the traditional computing stack with the containerized stack. As you can see, containers reside between the operating system and the service/application, and may be just a fraction of the size of a VM. Multiple containers can be deployed on a single operating system, and each operates in isolation from the others, with no overlap or sharing of information.

Startup time for containers is measured in milliseconds or seconds, compared to some operating systems, which take minutes to boot. Legacy and new applications can operate in containers (note: not all older applications will or should work in these scenarios -- some may need to be rebuilt from scratch), and containers can function in development as well as production environments.
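That startup difference is easy to measure yourself. The sketch below (again assuming Docker and its Python SDK, with the alpine image already pulled locally so download time isn't counted) times a complete start-run-exit cycle, which typically lands well under a second:

```python
# Time a full create/start/exit cycle for a small container.
# Assumes Docker is running and the alpine image is already pulled.
import time

import docker

client = docker.from_env()

start = time.monotonic()
client.containers.run("alpine", "true", remove=True)  # start, run, exit
elapsed = time.monotonic() - start

print(f"Container start-to-exit: {elapsed:.3f} seconds")
```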

You can automate the management of containers for speedier service/application delivery and more resilience. Containers are portable in the sense that they can easily be migrated to other systems or locations. For instance, a developer who has confirmed the functionality of a new application can copy the container to a remote site for customer access, rather than building a new container at that location and then installing the application within it. Containers can also be clustered or chained together for distributed computing to improve service availability and performance.
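As a sketch of that portability (using the Docker SDK for Python; the image name myapp:1.0 is hypothetical), an image can be exported to a tar archive, copied to the remote site, and loaded there without rebuilding or reinstalling anything:

```python
# Export a built image to a tar archive that can be copied to another
# host and loaded there -- no rebuild or reinstall required.
# `myapp:1.0` is a hypothetical image name used for illustration.
import docker

client = docker.from_env()

# On the source host: stream the image to a tar file.
image = client.images.get("myapp:1.0")
with open("myapp.tar", "wb") as f:
    for chunk in image.save():
        f.write(chunk)

# On the destination host: load the tar file back into Docker.
with open("myapp.tar", "rb") as f:
    loaded = client.images.load(f.read())
print("Loaded:", [img.tags for img in loaded])
```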

The functionality of containers goes even deeper, however. Application components can be split into multiple containers that work together to provide a unified application experience; this approach makes better use of available resources and scales the application more effectively. For instance, if the credit card authorization component of a program requires more horsepower, the container in which it runs can be hosted on more robust systems.
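A rough sketch of that idea with the Docker SDK for Python (the image names shop-web and card-auth are hypothetical): the authorization component runs as several replicas on a shared network, so it can be scaled -- or placed on beefier hosts -- independently of the rest of the application.

```python
# Scale one component of an application independently of the rest.
# `shop-web` and `card-auth` are hypothetical image names.
import docker

client = docker.from_env()

# A user-defined bridge network lets containers find each other by
# name (Docker provides DNS resolution on user-defined networks).
client.networks.create("shopnet", driver="bridge")

# One web front end...
client.containers.run("shop-web", detach=True, name="web",
                      network="shopnet")

# ...and three replicas of the resource-hungry authorization component.
for i in range(3):
    client.containers.run("card-auth", detach=True,
                          name=f"auth-{i}", network="shopnet")
```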

According to TechBeacon.com, "Companies are adopting containers incredibly quickly. In Q1 of [2015], 451 Research surveyed nearly 1,000 IT decision-makers. We found that 6.3 percent of cloud-using IT shops had containers in initial or broad production, and another 3.9 percent were using containers in developer or test environments. By Q3 2015, that had more than doubled -- 14.1 percent were in initial/broad production, and 8.4 percent were in developer or test arenas. ... This is one of the fastest-growing technologies 451 Research has ever seen."

Vendors are embracing the concept accordingly. VMware, Google, AWS, HP, IBM, and Docker are just a few of the players on the field, and more are expected to emerge as the concept further develops.

Some container issues

Applications tied heavily to a data store may be difficult to 'containerize' because of the dependency link between the program and its data. And applications that don't need to scale gain little from containers; a container scenario only makes them more convoluted.

In addition, a tool is only as good as the person handling it: containers don't magically provide application redundancy or better service response/availability, and their implementation must be mapped out and planned accordingly. This is especially critical for applications split across multiple containers over multiple hosts, since new dependencies such as network connectivity become essential. As the capabilities provided by containers grow, so do the complexities, so developers and administrators must respond accordingly.

Security is another factor. A compromised operating system can put the containers running on it at risk, and vice versa. Containers often run with full administrator access, which can heighten the impact of an attack. Container security must therefore be taken into account during the design process, and publicly available containers may offer no guarantee of security -- or may even be deliberately seeded with exploitable code.
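Some of that risk can be reduced at launch time. A hedged sketch with the Docker SDK for Python (myapp:1.0 is again a hypothetical image name): run the container as an unprivileged user, drop all Linux capabilities, and mount its filesystem read-only, rather than accepting the root-by-default posture described above.

```python
# Launch-time hardening: avoid running the container as root with
# full privileges. `myapp:1.0` is a hypothetical image name.
import docker

client = docker.from_env()

client.containers.run(
    "myapp:1.0",
    detach=True,
    user="1000:1000",        # run as an unprivileged user, not root
    cap_drop=["ALL"],        # drop every Linux capability
    read_only=True,          # mount the container filesystem read-only
    security_opt=["no-new-privileges"],  # block privilege escalation
)
```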

Conclusion

While containers are not necessarily a fit for every application or environment, businesses should scrutinize the opportunities they present, the best practices for implementing them, and the operational standards and innovative ways to put them to use.
