
Why open source will be critical to the future of SDDC

As the Software Defined Data Center (SDDC) model paves the way towards the future of data centers, open-source technologies are at the heart of this technological revolution.
Written by James Sanders, Contributor

As hardware advances have slowed considerably with Moore's Law faltering, data center deployments are becoming less focused on how powerful the hardware is, and more focused on how best to leverage that hardware with capable software. This transition to the Software Defined Data Center (SDDC) presents a unique opportunity to pair commodity server hardware with open-source software, freeing organizations from hardware vendors' onerous and expensive service and support contracts and thereby allowing the IT dollar to stretch further.

The SDDC model allows for a greater level of abstraction, so that any commodity computing hardware, or a combination of hardware from different (and likely competing) vendors, can work together harmoniously on networking hardware that is similarly abstracted. This in turn allows resource allocation to be performed on demand, with resources made available programmatically as required. Commoditized hardware can then be used in conjunction with other computing resources, such as public cloud providers or hybrid cloud deployments that span geographically disparate data centers. The key to achieving this interoperability is open-source software.

OpenStack

OpenStack, the leading solution for Infrastructure-as-a-Service (IaaS), is currently used by organizations for on-premises private clouds, for hybrid cloud deployments, and for offering public cloud services to clients. Nova, OpenStack's compute module, works alongside components that handle networking, block and object storage, disk imaging, identity management, key management, DNS, and search, among others. The entire deployment can be managed through the Horizon dashboard.
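As a minimal sketch of how such a deployment can be driven programmatically, the snippet below uses the openstacksdk Python library to boot a compute instance through Nova. The cloud name, image, flavor, and network names are assumptions standing in for whatever a real clouds.yaml would define.

```python
# Sketch: booting a compute instance with the openstacksdk library.
# The cloud name "private-sddc" and the image/flavor/network names are
# assumed values; substitute the entries from your own clouds.yaml.
import openstack

conn = openstack.connect(cloud="private-sddc")

image = conn.image.find_image("ubuntu-22.04")        # assumed image name
flavor = conn.compute.find_flavor("m1.small")        # assumed flavor name
network = conn.network.find_network("internal-net")  # assumed network name

server = conn.compute.create_server(
    name="sddc-demo",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)

# Block until Nova reports the instance as ACTIVE, then show its status.
server = conn.compute.wait_for_server(server)
print(server.status)
```

The same API calls work whether the cloud is a private OpenStack deployment or a public one, which is what makes the model attractive for hybrid setups.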

While OpenStack itself does not attempt to emulate the API design of popular public cloud providers, compatibility layers are being developed for Amazon EC2, Amazon S3, and Google Compute Engine.
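To illustrate what such a compatibility layer buys you, the hedged sketch below points standard EC2 tooling (boto3) at an OpenStack cloud that exposes an EC2-compatible endpoint. The endpoint URL, region, and credentials are placeholders, not real values.

```python
# Sketch only: using AWS-style tooling against an OpenStack cloud's
# EC2-compatible endpoint. All connection details below are assumed.
import boto3

ec2 = boto3.client(
    "ec2",
    endpoint_url="https://openstack.example.com:8788/",  # assumed EC2-compatible endpoint
    region_name="RegionOne",
    aws_access_key_id="EC2_ACCESS_KEY",
    aws_secret_access_key="EC2_SECRET_KEY",
)

# The same DescribeInstances call used against AWS lists OpenStack instances here.
for reservation in ec2.describe_instances()["Reservations"]:
    for instance in reservation["Instances"]:
        print(instance["InstanceId"], instance["State"]["Name"])
```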

Software Defined Networking

Networking has always depended on adherence to open standards -- if two connected systems attempt to communicate using mutually unintelligible protocols, nothing will be accomplished. While common protocols have solved this issue between data centers, the existence of vendor-specific protocols has, in the past, necessitated a reliance on a single vendor for all of the network equipment in a data center.

With the rise of cloud computing, the traditional hierarchical network design originally intended for client-server computing has become inadequate. Many business applications involve lateral communication between multiple servers, including both on-premises and public cloud systems, before data is presented to the end user -- who may be on a wired workstation, or using a notebook computer, tablet, or smartphone.

Among other tasks, Software Defined Networking (SDN) seeks to reduce networking bottlenecks to better support dynamic workloads. This is done by decoupling the control plane, which decides where traffic is directed, from the data plane, the circuitry that actually forwards traffic to its intended destination.

OpenFlow, an open SDN standard stewarded by the Open Networking Foundation, is supported by a wide range of network equipment from numerous vendors. Open-source implementations of OpenFlow, including Project Floodlight and the Linux Foundation-backed OpenDaylight, are also available. Market leaders such as Cisco, Fujitsu, Intel, NEC, and Red Hat, among many others, contribute to these projects, so the OpenFlow protocol is positioned to be the industry standard for years to come.
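To make the control/data plane split concrete, here is a minimal sketch of an OpenFlow controller application written with Ryu, an open-source Python controller framework chosen here purely for illustration (it is not one of the projects named above). It installs a single table-miss rule so the switch's data plane forwards any unmatched packets up to the controller, where the control logic lives.

```python
# Sketch: a bare-bones OpenFlow 1.3 controller app using the Ryu framework.
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3


class MinimalController(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def switch_features_handler(self, ev):
        datapath = ev.msg.datapath
        ofproto = datapath.ofproto
        parser = datapath.ofproto_parser

        # Table-miss entry: anything the data plane cannot match is sent
        # to the controller, which decides how traffic should be directed.
        match = parser.OFPMatch()
        actions = [parser.OFPActionOutput(ofproto.OFPP_CONTROLLER,
                                          ofproto.OFPCML_NO_BUFFER)]
        inst = [parser.OFPInstructionActions(ofproto.OFPIT_APPLY_ACTIONS,
                                             actions)]
        mod = parser.OFPFlowMod(datapath=datapath, priority=0,
                                match=match, instructions=inst)
        datapath.send_msg(mod)
```

Run with Ryu's ryu-manager against any OpenFlow 1.3-capable switch; the same controller code works regardless of which vendor built the switch hardware.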

Software Defined Storage

For decades, storage management has relied on often exceedingly expensive RAID controllers, which use closed-source firmware that operates in a manner generously described as opaque. While RAID arrays already encourage buying matched disks from a single manufacturer, some RAID solutions go further and require all disks to be purchased from the controller vendor -- even though those disks are simply rebadged drives from actual hard disk manufacturers, sold at a substantial markup over the spot price of the manufacturer-branded version. In addition, the design of RAID, which dates back to the late 1980s and early 1990s, when disk drives generally did not exceed 2GB, has not scaled well to accommodate modern high-capacity disk drives.

The abstraction available in Software Defined Storage (SDS), particularly in open-source solutions such as Ceph, allows storage to break free of the restrictions imposed by storage vendors. For example, a Ceph deployment can be built entirely on commodity hardware without relying on a specific storage vendor, although commercial products such as Fujitsu's ETERNUS CD10000 also utilize Ceph. Importantly, this freedom allows deployments to mix and match hardware from different vendors.
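As a brief sketch of how an application talks to such a cluster, the snippet below uses Ceph's python-rados bindings to write and read back a single object. The configuration path and pool name are assumed values for illustration.

```python
# Sketch: storing and retrieving an object in a Ceph cluster via the
# python-rados bindings. Config path and pool name are assumed; a real
# deployment supplies its own.
import rados

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()

try:
    # "demo-pool" is a hypothetical pool created ahead of time.
    ioctx = cluster.open_ioctx("demo-pool")
    try:
        ioctx.write_full("hello-object", b"stored on commodity hardware")
        print(ioctx.read("hello-object"))
    finally:
        ioctx.close()
finally:
    cluster.shutdown()
```

The cluster behind this call may be built from whatever drives and servers happen to be cheapest at the time, which is exactly the point.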

As the 2011 floods in Thailand have shown, the fickle nature of thin supply chains can negatively impact data center operations. Organizations like Backblaze were forced to use creative means to sustain growth in the face of a global drive shortage, and the flexibility of being able to use any drive from any vendor enabled the company to continue operations without raising prices for its end users.
