
Composable infrastructure: The Next Big Thing in datacenters

The technology wheel is turning again. Yesterday it was converged and hyperconverged infrastructure. Tomorrow it's composable infrastructure. I've been doubtful, but it's here today. This is what you need to know.
Written by Robin Harris, Contributor

What is composable infrastructure?

Composable infrastructure is a concept Intel has been pushing with its Rack Scale Design (RSD), that HPE has productized as Synergy, and that startup Liqid now offers as well. The idea is to build high-density racks of compute, network, and storage, then use software to create virtual servers with exactly the resources each application needs for optimum performance.

Like physical servers, these virtual physical servers can run operating systems, VMs, or containers. The difference is that if you need more of a resource (CPU, GPU, memory, network, storage) you can get it by changing the server configuration on the fly from a single pane of glass.

Or, if a virtual physical server dies, you can move its boot device to a new virtual physical server. Now that's flexible. The sketch below shows the basic idea.
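To make the idea concrete, here is a minimal Python sketch of composing a server out of a rack-wide resource pool. The ResourcePool and ComposedServer names are my own illustration, not Liqid's, HPE's, or Intel's actual software.

from dataclasses import dataclass

@dataclass
class ComposedServer:
    name: str
    cpus: int
    gpus: int
    memory_gb: int
    nvme_drives: int

class ResourcePool:
    """Tracks the free devices in a rack; compose() carves out a server."""

    def __init__(self, cpus, gpus, memory_gb, nvme_drives):
        self.free = {"cpus": cpus, "gpus": gpus,
                     "memory_gb": memory_gb, "nvme_drives": nvme_drives}

    def compose(self, name, **wants):
        # Check that every requested resource is available before allocating.
        for res, qty in wants.items():
            if self.free.get(res, 0) < qty:
                raise RuntimeError(f"not enough {res} in the pool")
        for res, qty in wants.items():
            self.free[res] -= qty
        return ComposedServer(name, wants.get("cpus", 0), wants.get("gpus", 0),
                              wants.get("memory_gb", 0), wants.get("nvme_drives", 0))

# Carve a database server out of the rack's pooled hardware.
pool = ResourcePool(cpus=64, gpus=16, memory_gb=4096, nvme_drives=48)
db_server = pool.compose("oltp-db", cpus=16, memory_gb=512, nvme_drives=8)

Need more drives later? Call compose-style operations again against the same pool; nothing is bolted into a particular box.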


Payback

Current server utilization ranges from 15 to 35 percent because server boxes have fixed amounts of resources, no matter what your applications need. Double that utilization and the payback is almost instant.

Eventually, with an API on the composing software, an application could request additional resources as needed. That's real-time server reconfiguration without human intervention. We're finally seeing a path to the self-managing datacenter.
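Here is what such a request might look like, assuming a REST-style composer endpoint. The URL, path, and payload are hypothetical, not a documented Liqid, RSD, or Synergy API.

import requests

COMPOSER_URL = "https://composer.example.com/api/v1"  # assumed endpoint

def request_more_storage(server_id: str, extra_drives: int) -> None:
    """Ask the composer to attach more NVMe drives to a running server."""
    resp = requests.patch(
        f"{COMPOSER_URL}/servers/{server_id}",
        json={"add": {"nvme_drives": extra_drives}},
        timeout=10,
    )
    resp.raise_for_status()

# A monitoring hook could fire this when free space drops below a threshold:
# request_more_storage("oltp-db", extra_drives=2)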

PCIe switch

The hard part of any "let's hook everything up to the network and virtualize it all" scheme is, of course, the network. Networks are either too slow or too narrow, and, in either case, too expensive. The cheapest bandwidth is inside the server, which is why so much storage is migrating into the server.

But PCIe and PCIe switches are changing all that. PCIe is composable (add more lanes), fast, and very low latency. It's built into every server, every device you might want to use already plugs into it, and there are no drivers to install.

There aren't many PCIe switches available today, but Liqid has one, built around a Xeon processor that runs Liqid's software. Each half-rack box has 24 PCIe ports, so a dual-redundant, 48-port switch fits in 1U.

Performance

In the IOPS-abundant world of flash storage, latency is the key storage performance metric. Liqid says its switch latency is 150ns: take any local PCIe I/O, run it through the switch, and you add only 150ns. Against the roughly 100 microseconds of a typical NVMe flash read, that's a rounding error.

Then there's bandwidth. This is a Gen3 PCIe switch with up to 96GB/sec of aggregate bandwidth. Liqid offers several reference designs with scale-out and scale-up options.
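As a sanity check on those numbers, here's the back-of-the-envelope arithmetic, assuming x4 ports and Gen3's roughly 0.985 GB/s of usable bandwidth per lane per direction after 128b/130b encoding:

LANE_GBPS = 0.985          # usable PCIe Gen3 bandwidth per lane, GB/s
ports, lanes_per_port = 24, 4   # assumed x4 ports

aggregate = ports * lanes_per_port * LANE_GBPS
print(f"aggregate bandwidth: {aggregate:.0f} GB/s")  # ~95 GB/s, i.e. "up to 96GB/sec"

switch_ns, nvme_read_us = 150, 100   # typical NVMe read is ~100 microseconds
overhead = switch_ns / (nvme_read_us * 1000)
print(f"switch latency overhead: {overhead:.2%}")    # ~0.15% per hop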

The Storage Bits take

Why did Liqid have to build a switch? Why aren't Cisco and Brocade building PCIe switches? Silicon Valley has had a collective blind spot around PCIe as a scalable interconnect. (Likewise with Thunderbolt, but that's another blind-spot story.)

But the important thing is that PCIe's ubiquity, low latency, and high bandwidth make it the do-everything fabric. And yes, you can run PCIe over copper and glass; the latter reaches distances of 100m and more.

Intel has updated its RSD spec to include PCIe fabric as well. If you want to get a jump on the Next Big Thing, check out RSD, Synergy, and Liqid, and start thinking about how composable infrastructure can make your datacenter more efficient, and you more indispensable.

Courteous comments welcome, of course.
