What is hyperconvergence? Here's how it works and why it matters

Is hyperconverged infrastructure simplifying the way data centers are managed, or is it providing more clever means for certain enterprise technology vendors to regain their strongholds? Our executive guide covers everything you need to know.
Written by Scott Fulton III, Contributor

Executive Summary (TL;DR)

Hyperconvergence is a marketing term referring to a style of data center architecture that focuses the attention of IT operators and administrators on the operating conditions of workloads rather than on the systems that host them.

The main objective of hyperconverged infrastructure (HCI) has been to simplify the management of data centers by recasting them as transportation systems for software and transactions, rather than networks of processors with storage devices and memory caches dangling from them. The "convergence" that HCI makes feasible in the data center comes from the following:

  • Applications and the servers that host them are managed together, using a single platform that focuses on the health and accessibility of those applications.
  • Compute capacity, file storage, memory, and network connectivity are pooled together, with each resource class managed like a public utility. Workloads are treated like customers whose needs must be satisfied, even if satisfying them requires the decommissioning and shutdown of hardware.
  • Each workload is packaged within the same class of construct: Usually virtual machines (VMs) designed to be hosted by hypervisors such as VMware's ESX and ESXi, Xen, KVM, and Microsoft's Hyper-V. These constructs enable HCI platforms to treat them as essentially equivalent software components, albeit with different operating requirements (a conceptual sketch of this workload-centric view follows this list).
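
To make that workload-centric framing concrete, here is a minimal, purely illustrative Python sketch -- not any vendor's API; the class and field names are hypothetical -- of a control plane that treats every workload as an equivalent unit with its own resource requirements and health state, regardless of which hypervisor packages it.

```python
from dataclasses import dataclass

@dataclass
class Workload:
    """A hypervisor-agnostic unit of management: what matters is the
    workload's needs and health, not the server it happens to run on."""
    name: str
    hypervisor: str      # "ESXi", "Hyper-V", "KVM" -- all treated equivalently
    vcpus: int
    memory_gb: int
    storage_gb: int
    healthy: bool = True

class ControlPlane:
    """A single management plane: every workload is admitted, monitored,
    and remediated the same way, whatever construct packages it."""
    def __init__(self):
        self.workloads = []

    def admit(self, workload):
        self.workloads.append(workload)

    def reconcile(self):
        for w in self.workloads:
            if not w.healthy:
                # Re-place the workload somewhere healthy; the hardware
                # underneath is treated as interchangeable.
                print(f"Re-placing {w.name} ({w.hypervisor}) elsewhere in the pool")
                w.healthy = True

plane = ControlPlane()
plane.admit(Workload("erp-db", "ESXi", vcpus=8, memory_gb=64, storage_gb=500))
plane.admit(Workload("web-tier", "KVM", vcpus=4, memory_gb=16, storage_gb=100, healthy=False))
plane.reconcile()
```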

Services vs servers

Since the dawn of information technology, the key task of computer operators has been to monitor and maintain the health of their machines. At some point, the value of keeping software accessible and functional to users -- especially to customers -- exceeded the cost of extending the lifespan of hardware. The key variables in the cost/benefit analysis equation were flipped, as the functionality of services became more valuable than the reliability of servers.

Although the ideal of hyperconverged infrastructure has always been the radical simplification of workload management in enterprise data centers, in every enterprise whose data centers predate the installation of HCI, the issue of integration has reared its ugly head. Ops managers have insisted that pre-existing workloads co-exist with hyperconverged ones. At the opposite end of the scale, developers working with the newest containerized technologies such as Docker and Kubernetes have insisted that their distributed, VM-free workloads co-exist with hyperconverged infrastructure as well.

Read also: What is Docker and why is it so darn popular?

What's the value proposition?

So the "hyper" part of hyperconvergence typically gets tamped down somewhat. The lack of a single, emergent co-existence strategy for any permutation of HCI has given the major vendors in this space an opening -- not just to establish competitive advantage but to insert specialized hardware and proprietary services into the mix. Architects of open-source data center platforms such as OpenStack cite this response from vendors as the re-emergence of locked-in architectures, in making their case for hybrid cloud architectures as effective alternatives.

The question for many organizations: Is there value in adopting an architecture whose very name suggests the incorporation of everything, when reality dictates it must only be adopted partway and integrated with the rest? Put another way, is there any good to be gained from embracing an ideal that started out as all-inclusive, but which in practice ends up being exclusive after all?

Where is convergence actually taking place?

At its core, hyperconverged infrastructure enables the building up and scaling out of systems using servers as building blocks. Each time you install a new server that includes a given amount of compute capacity, perhaps some storage attached, and a chunk of memory, HCI appropriates its resources and delegates them to their respective pools. A server becomes more like a platter, presenting resources that may be consumed essentially a la carte.
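
As a rough illustration of that a la carte idea, the hypothetical Python sketch below shows a new server's capacity being absorbed into shared pools, from which workloads then draw whatever slices they need; the function names and figures are invented and imply no vendor's actual mechanism.

```python
# Hypothetical illustration of HCI resource pooling -- not any vendor's API.
pools = {"cpu_cores": 0, "memory_gb": 0, "storage_tb": 0}

def add_node(cpu_cores, memory_gb, storage_tb):
    """Installing a server contributes its resources to the shared pools."""
    pools["cpu_cores"] += cpu_cores
    pools["memory_gb"] += memory_gb
    pools["storage_tb"] += storage_tb

def provision(cpu_cores, memory_gb, storage_tb):
    """A workload consumes resources a la carte, with no regard for which
    physical box they came from."""
    request = {"cpu_cores": cpu_cores, "memory_gb": memory_gb, "storage_tb": storage_tb}
    if any(pools[k] < v for k, v in request.items()):
        raise RuntimeError("not enough capacity left in the pools")
    for k, v in request.items():
        pools[k] -= v

add_node(cpu_cores=32, memory_gb=256, storage_tb=8)   # rack a new node
add_node(cpu_cores=32, memory_gb=256, storage_tb=8)   # and another
provision(cpu_cores=8, memory_gb=64, storage_tb=1)    # carve out one workload's share
print(pools)                                          # what remains on the shared "platter"
```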

Can hyperconvergence simplify storage?

"The whole point of converging is to 'de-silo' infrastructure, and make it a lot more operationally simple and agile, so you can focus on what's really adding value to the business," explained Krishnan Badrinarayanan, director of product marketing for HCI component provider Nutanix, in an interview with ZDNet. "At the very fundamental level, hyperconvergence adds value at the physical layer, where it converges those units of storage, compute, and networking, so you're able to build these incredibly scalable infrastructure platforms, upon which you rely as your applications grow, and as your business demands grow."

The broader ideal of the so-called software-defined data center (SDDC) is to enable the configuration of systems to be instantly adaptable to the needs of workloads: To let programs define how systems work, rather than fixed schematics. Hyperconvergence is not SDDC; rather, the term refers to certain strategies for implementing SDDC that involve the commoditization of data center resources. What typifies HCI is its aim to consolidate all data center management under a new, common model that prioritizes workloads over parts -- and, in so doing, would seek to replace the most common class of tools for managing systems today: Data Center Infrastructure Management (DCIM).

Read also: Executive's guide to the Software Defined Data Center (free ebook)

"The goal is to converge all the resources necessary to run applications," argued Nutanix vice president for product marketing Greg Smith, "to converge the skill sets. So I don't need a storage specialist; I don't need a networking specialist; I don't need a virtualization specialist. What you end up with is just a full stack, equivalent to a cloud stack. Basically, I get one infrastructure -- one full stack upon which I can quickly start provisioning my applications."

The relative ease with which public cloud-based workloads are provisioned on IaaS platforms such as Amazon Web Services' Elastic Compute Cloud (EC2), argued Smith, revealed to enterprises that their IT departments no longer required an army of specialists with unique, compartmentalized skills. All HCI implementations have this in common: They make an effort to replicate the ease with which public cloud-based workloads are managed, within their customers' own data centers, on their own hardware.

"That's what HCI promises to deliver," said Smith. "So all these things about storage and virtualization -- I think they obscure the larger point, which is that hyperconvergence manages to provide a full infrastructure stack, so that companies can quickly provision applications, without having to methodically design, build, and troubleshoot their infrastructure."

Nutanix' Acropolis Distributed Storage Fabric model for HCI.

(Image: Nutanix)

The Nutanix model, called Acropolis (for its Acropolis hypervisor), is based around the introduction of a single class of appliance, simply called the node, which assumes the role conventionally played by the server. Although a node would ideally provide a multitude of commodities, Nutanix itself admits, in an online publication it has dubbed the Nutanix Bible, that its model natively combines just the main two: Compute and storage. HCI appliances from other vendors -- for instance, Cisco HyperFlex -- may combine networking as well.

Incorporate or replace?

One principal point of contention among HCI vendors is whether a truly converged infrastructure should incorporate a data center's existing storage array, or replace it altogether. Nutanix argues in favor of the more liberal approach: Instituting what it calls a distributed storage fabric (DSF). With this approach, one of the VMs running within each of its HCI nodes is a controller VM dedicated to storage management. Collectively, they oversee a virtualized storage pool that incorporates existing, conventional hard drives with flash memory arrays. Within this pool, Nutanix implements its own system of redundancies and reliability checks that eliminates the need for conventional RAID.
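
The details of Nutanix's implementation are proprietary, but the general idea of replacing RAID with node-level replication can be sketched in a few lines of Python. This is a simplified, hypothetical model -- the class names, replication factor, and placement logic are invented for illustration, not drawn from DSF itself.

```python
import random

# Simplified, hypothetical sketch of distributed-storage redundancy: instead
# of RAID inside a single array, each chunk of data is copied to several nodes.
REPLICATION_FACTOR = 2

class StorageNode:
    def __init__(self, name):
        self.name = name
        self.extents = {}                # extent_id -> data

def write_extent(nodes, extent_id, data, rf=REPLICATION_FACTOR):
    """Place copies of the extent on rf distinct nodes, so the loss of any
    one node (or drive) leaves at least one surviving replica."""
    targets = random.sample(nodes, rf)
    for node in targets:
        node.extents[extent_id] = data
    return [n.name for n in targets]

def read_extent(nodes, extent_id):
    """Any surviving replica can satisfy the read."""
    for node in nodes:
        if extent_id in node.extents:
            return node.extents[extent_id]
    raise IOError("all replicas lost")

cluster = [StorageNode(f"node-{i}") for i in range(4)]
placed_on = write_extent(cluster, "vm-disk-chunk-42", b"...")
print("replicas on:", placed_on)
print(read_extent(cluster, "vm-disk-chunk-42"))
```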

Dell EMC's main approach, by comparison, cannot afford to eliminate the storage area networks and network-attached storage arrays that remain a mainstay of the company's business. In its most recent implementation of HCI nodes, called VxRail, Dell EMC carefully adopts a layer of abstraction that utilizes software-defined storage (SDS), which Chad Dunn, the company's vice president for HCI product management, considers a principal element of any true HCI platform.

SDS, said Dunn, "is much more flexible and is a scale-out technology. The more nodes you add to your environment, the more storage capacity and the more storage performance you have. And you can scale that proportionately with the compute resources that you're adding. The real key is, it brings it under one management paradigm. I no longer have different organizations and different tools that operate the storage infrastructure, versus the compute, versus the virtualized infrastructure, and increasingly even the networking infrastructure with software-defined networking."
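
A back-of-the-envelope illustration of that proportional scale-out, with per-node figures that are invented rather than benchmarks of any product:

```python
# Invented per-node figures, purely to illustrate linear scale-out.
PER_NODE = {"usable_tb": 10, "read_iops": 50_000, "vcpus": 40}

for nodes in (4, 8, 16):
    print(f"{nodes} nodes -> "
          f"{nodes * PER_NODE['usable_tb']} TB usable, "
          f"{nodes * PER_NODE['read_iops']:,} read IOPS, "
          f"{nodes * PER_NODE['vcpus']} vCPUs")
```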

Customers, Dunn believes, are more likely to attach new classes of storage arrays to their existing environments incrementally or in stages, rather than take the plunge and replace their SAN and NAS altogether.

Cisco's HyperFlex HX Data Platform architecture.

(Image: Cisco)

Cisco's HCI model, called HyperFlex (HX), similarly deploys a controller VM on each node, but in such a way that it maintains a persistent connection with VMware's ESXi hypervisor on the physical layer. Here, Cisco emphasizes not only the gains that come from strategic networking, but the opportunities for inserting layers of abstraction that eliminate the dependencies that bind components together and restrict how they interoperate.

This way, for HyperFlex's upcoming 3.0 release, data centers may incorporate a variety of abstract storage and data constructs including one of the most recent permutations, put forth by the Kubernetes orchestrator: Persistent volumes. In a Kubernetes distributed system, individual pieces of code, called microservices, may be scaled up or down in accordance with demand, and that scaling down literally means chunks of code can wink out of existence when not in use. For the data and databases to survive these minor catastrophes, developers have created persistent volumes -- which are not really new data constructs at all. Rather, they're generated by way of layers of abstraction, extending connections to storage volumes to the HCI environment without having to share the details or schematics of those volumes.
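
Kubernetes expresses this with PersistentVolume and PersistentVolumeClaim objects. The sketch below models the idea very loosely in plain Python -- it does not use the real Kubernetes API, and the names are illustrative only -- to show how a claim lets a short-lived container bind to storage that outlives it.

```python
# Loose conceptual model of Kubernetes-style persistent volumes.
# Not the Kubernetes API; class names are illustrative only.

class PersistentVolume:
    """Storage that exists independently of any one container."""
    def __init__(self, name, size_gb):
        self.name, self.size_gb = name, size_gb
        self.data = {}

class Claim:
    """A microservice asks for storage by size, not by device or path;
    the platform binds the claim to a suitable volume behind the scenes."""
    def __init__(self, size_gb):
        self.size_gb = size_gb
        self.volume = None

def bind(claim, volumes):
    claim.volume = next(v for v in volumes if v.size_gb >= claim.size_gb)

volumes = [PersistentVolume("pv-ssd-01", size_gb=100)]
claim = Claim(size_gb=50)
bind(claim, volumes)

# A container writes through its claim, then "winks out of existence"...
claim.volume.data["orders"] = ["#1001", "#1002"]
del claim

# ...yet the data survives for the next container that claims the volume.
print(volumes[0].data)
```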

"Our upcoming HX 3.0 release has the ability to have the same HyperFlex cluster running VMs and Docker containers managed by a Kubernetes ecosystem," explained Manish Agarwal, director of HyperFlex product management for Cisco. "So there's a separate thread around the Cisco Container Platform, where there's some integration and some management simplification."

When hyperconvergence doesn't

Agarwal described an ideal data center, from Cisco's perspective, where the HCI part of its infrastructure co-exists with systems, both new and old, that host other models of staging applications. Enterprises will continue to utilize public cloud capacity and services, he conceded -- including, Cisco hopes, Google Cloud services, made available to Cisco customers through a partnership announced in October 2017. Google is the premier commercial steward for the Kubernetes project, which may be a major reason why Cisco is emphasizing it now.

But embracing new models like distributed systems and microservices comes with an acknowledgment that hyperconvergence only goes so far. What seemed hyper enough in 2010 hasn't extended itself nearly as fast as the horizons for data center applications.

There are other staging models that HCI cannot easily incorporate, admitted Cisco's Agarwal -- for example, big data environments managed by dedicated frameworks such as Hadoop and Spark. They have their own systems for redundancy, data protection, volume control, and fail-safes. You could encapsulate those systems within virtual machines so they could be compatible with HCI platforms, but there may not be any extra performance or reliability benefits. So why would you want to?

Read also: The future of the future: Spark, big data insights, streaming and deep learning in the cloud | We interrupt this revolution: Apache Spark changes the rules of the game

"The main problem that we've tried to focus on is, of course, expanding the footprint for HyperFlex," said Agarwal. The HCI market began, he said, by addressing enterprises' needs for easily managing virtual desktop infrastructure (VDI) -- employee PCs rendered as virtual machines. Today, however, that market must find a place for itself in an arena where Kubernetes is stealing both the oxygen and the thunder.

"There are going to be these specialized apps which will have specialized architectures," he said. "And it'll be hard for any general-purpose stack to actually be as good as a specialized stack for some of these use cases. But we'll start chipping at the edges, and depending on whether the customer is looking to drive hardware efficiencies and performance, or simplicity -- if the design point is simplicity, then you can envision an ecosystem where, if not 100 percent, a large swath of workloads can be managed by a single stack. But we've been in the market for a little over two years, and the stance we've taken is that we want to co-exist."

There are other critical examples, as Dell EMC's Chad Dunn pointed out: SAP's HANA in-memory database, for instance, requires unique conditions for virtualization -- conditions that his company and sister company VMware are working jointly with SAP, he said, to bring about.

"We are seeing some things that were previously locked to bare metal, starting to move into virtualization," said Dunn. "At the other end of the spectrum are these born-in-the-cloud, or cloud-native, workloads which represent a relatively small percentage of the workloads inside enterprises today, but we're starting to see more and more of them move in this cloud-native direction. Hyperconverged is an excellent platform for those sorts of workloads." Put another way, Dunn's point is that new applications developed for deployment on cloud platforms, such as Pivotal's Cloud Foundry (from another Dell sister company), may be the best suited for being managed through HCI.

The end goal, for Dell EMC and others, is to essentially own the environment through which enterprise applications are managed. This may mean defining or redefining "infrastructure" to mean whatever may be best adapted to HCI's needs at the time. For the Dell Technologies companies, that means keeping VMware's vSphere in its current stronghold.

"VxRail is VMware-oriented," explained Dunn. "It's running vSphere and VSAN, and there's [intellectual property] that we create around it to treat it as a system, and scale it out as a system. Our mantra is, we don't want customers to leave the vCenter interface. That's where they need to be able to manage and grow that environment. So more and more, we take features away from our existing interface and we push them into vCenter. We have the luxury of always having vSphere and vCenter available to us as a user interface; not so with the other solutions that are out there."

Cisco's strategy to seize this strongpoint from VMware depends now upon its new and ambitious unified management platform called Intersight. It's a cloud-based platform that intentionally enables hybridized management of HCI on-premises infrastructure with IaaS off-premises -- or, at the risk of straining credulity, to converge two or more convergences.

"Now you can take your entire data center, irrespective of what specific use case you're using it for," said Cisco's Manish Agarwal, "and get a single dashboard for that entire infrastructure. You can have management of your entire physical infrastructure through that dashboard." If a subset of that physical infrastructure is based around HyperFlex, he said, Intersight will be smart enough to recognize that fact and treat its approach to storage differently from that of public cloud-based storage. Specialized assets placed more at "the edge" of the enterprise network may also be managed according to their unique requirements, he added.

"You will have a single management plane for your entire dataset," Agarwal continued, "irrespective of what bare metal applications you're using, whether you're using hyperconverged infrastructure or anything in-between."

Like Dell EMC, whose VxRail and VxRack appliances are built on Dell's PowerEdge servers, Cisco builds HyperFlex on its UCS servers, and HPE builds SimpliVity around its ProLiant servers. Unquestionably, the leading server makers are steering the hyperconvergence trend through their own brands, which implies that they aren't exactly converging anything and everything. But even these servers are a means to an end. The grand prize is the single management platform, which holds the same power for the modern era as Windows Server once held during the client/server era.

"There is a reason why we've stuck with UCS hardware only," Agarwal openly admitted. "What we are trying to do is really control the simplicity and the experience that the customer has, in two different dimensions: One is the level of automation that we can do, if we can assume that we are running on UCS hardware. . . The second is, we can control the performance of the infrastructure much better as well. Outside of the experience, there is a quality dimension."

Although Nutanix does produce its own line of HCI nodes, its original product, and its principal product today, is software. Through a partnership with Dell that predates the EMC acquisition, that software continues to power Dell EMC's XC series appliances (which are, of course, also built on PowerEdge servers). Unlike VxRail, Nutanix software is designed to support multiple brands of hypervisors including its own, not just VMware's (Cisco HyperFlex had been based around VMware, but will support Microsoft Hyper-V in its next release).

Choice and consistency

"What we want to do is preserve customer choice," stated Nutanix' Greg Smith, "while giving them a common, consistent operating experience. It is possible to do both; you can enable choice while providing predictability in your data center. What this points to is that HCI is a software market. What customers are asking for is to adopt a single software operating system, that they can deploy on the server, manufacturer, and model of their choice, where they're not locked in. [They say,] 'I would like to run my applications -- virtual or container-based -- on Nutanix. I like how Nutanix provides distributed storage, has a built-in hypervisor, and how it manages compute with application-layer orchestration. But I want to pick my hardware. And maybe I want to pick my hypervisor as well.'"

Hyperconvergence is no longer the class of product it started out to be. Each of its leading practitioners seems to be taking it in its own direction, governed by its marketing strategy and the unique, and perhaps exclusive, strengths of its technology platform. There are ways to accomplish what HCI initially set out to do, using all types of data center infrastructure, without invoking anything that goes by the name "hyperconvergence" -- for example, Mesosphere's DC/OS, a commercial implementation of Apache Mesos that schedules workloads based on resource availability and currently monitored performance.
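
Schedulers of that kind place each workload according to what the cluster can offer at the moment. The toy, vendor-neutral Python below sketches the idea -- it is not Mesos or DC/OS code, and the node figures and scoring rule are invented for illustration.

```python
# Toy placement logic in the spirit of availability-aware scheduling.
# Not Mesos/DC/OS code; node figures and the scoring rule are invented.

nodes = [
    {"name": "n1", "free_cpu": 12, "free_mem_gb": 96,  "cpu_util": 0.70},
    {"name": "n2", "free_cpu": 20, "free_mem_gb": 64,  "cpu_util": 0.35},
    {"name": "n3", "free_cpu": 6,  "free_mem_gb": 128, "cpu_util": 0.90},
]

def place(task_cpu, task_mem_gb):
    """Pick a node that fits the task and is currently the least busy."""
    candidates = [n for n in nodes
                  if n["free_cpu"] >= task_cpu and n["free_mem_gb"] >= task_mem_gb]
    if not candidates:
        return None                       # nothing can host it right now
    return min(candidates, key=lambda n: n["cpu_util"])["name"]

print(place(task_cpu=4, task_mem_gb=32))  # -> "n2": fits, and least loaded
```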

But what the vortex of activity around HCI is demonstrating, even if it's not completely converging upon anything in particular yet, is that data center managers have turned their attention away from server performance and toward workload performance. That shift has forced server makers to scramble for value propositions that help them maintain, or perhaps regain, their strongholds in the server room. And the fact that hyperconvergence keeps changing is the clearest indicator that the scrambling may have only just begun.
