PCIe fabrics in the data center, but Ethernet strikes back

After decades of multiple incompatible networks and interconnects in the data center, the industry is finally converging on just two: Ethernet and PCIe. Here's what you need to know.
Written by Robin Harris, Contributor


When Fusion-io started shipping PCIe SSDs a decade ago, PCIe was confined to backplanes, and the inability to share Fusion-io's high-performance SSDs was a major customer complaint. Times have changed.

Today, PCIe is well on its way to becoming the preferred fabric for linking a half-dozen racks or so into a single infrastructure. PCIe is now part of Intel's Rack Scale Design spec, displacing Ethernet as the preferred interconnect, and is the go-to interconnect for the burgeoning field of composable infrastructure from vendors such as HPE and Liqid.

Also: Best SSDs for 2018 CNET

The latest example, announced this morning at VMworld, has Marvell and Liqid partnering on a native PCIe SSD storage device built around a Marvell chip that provides a PCIe switch, hardware RAID, and a dual-port PCIe fabric interface. It should be very fast.

Backstory

PCI was originally intended as a server interconnect before it became the ubiquitous server and PC backplane. It's taken 20 years, but now PCIe is living up to PCI's original promise.

Also: SSD prices: how low will they go?

SSDs, whose average performance exceeds that of entire storage arrays of a decade ago, have driven PCIe's emergence. SSDs made IOPS plentiful, and that made latency the common I/O bottleneck. PCIe latency is measured in nanoseconds, while a network hop typically costs microseconds.
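To see why latency rather than IOPS becomes the constraint, Little's law ties throughput to concurrency and latency: sustained IOPS equals outstanding I/Os divided by average completion time. Here's a minimal sketch; the queue depth and latency figures are illustrative assumptions, not measurements of any particular device or fabric:

```python
def sustained_iops(queue_depth: int, latency_s: float) -> float:
    """Little's law: throughput = concurrency / average latency."""
    return queue_depth / latency_s

QD = 32  # outstanding I/Os per device (assumed)

# Illustrative round-trip latencies (assumptions, not benchmarks):
fabrics = {
    "local PCIe NVMe (~100 us incl. flash)": 100e-6,
    "PCIe fabric hop adding ~1 us":          101e-6,
    "Ethernet fabric hop adding ~10 us":     110e-6,
}

for name, lat in fabrics.items():
    print(f"{name:40s} {sustained_iops(QD, lat):>10,.0f} IOPS")
```

At a fixed queue depth, every microsecond a fabric adds to the round trip comes straight out of delivered IOPS, which is why a nanosecond-class interconnect like PCIe is attractive.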

Ethernet strikes back

Ethernet vendors are fighting PCIe's emergence by enabling Ethernet as a fabric. For example, Marvell, which also makes Ethernet chips, frames its Ethernet fabrics as a:

. . . revolutionary architecture that supports true scalable, high-performance disaggregation of storage from compute by bringing low latency access over the fabric and exposing the entire SSD bandwidth to the network by using a simple, low-power and compute-less Ethernet fabric instead of a traditional PCIe fabric. . . .

That sounds fine, except that many devices, such as SSDs and GPUs, are PCIe native and don't need an Ethernet interface to attach to a PCIe fabric. Mass-produced consumer devices that find business usage continue to shape enterprise infrastructure.

Gen 4 PCIe

Another plus for PCIe is its rapidly increasing performance. Gen 4, available now in several products, offers almost 32GB/sec of bandwidth on an x16 link, which is competitive with 400Gb/sec Ethernet, itself in limited availability.

Gen 5 PCIe is expected to double performance, and the new standard should be approved next year. Expect Gen 5 products in 2020-21.
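The raw numbers are easy to check. Here's a back-of-the-envelope sketch; it counts only PCIe's 128b/130b line encoding and ignores higher-level protocol overhead on both sides:

```python
# Per-lane signaling rate (GT/s) and line-code efficiency per generation.
PCIE_GENS = {
    "Gen 3": (8.0, 128 / 130),   # 8 GT/s, 128b/130b encoding
    "Gen 4": (16.0, 128 / 130),  # 16 GT/s, 128b/130b encoding
    "Gen 5": (32.0, 128 / 130),  # 32 GT/s, 128b/130b encoding
}

LANES = 16  # a full x16 slot

for gen, (gts, eff) in PCIE_GENS.items():
    gb_per_s = gts * eff * LANES / 8  # GT/s -> GB/s, per direction
    print(f"{gen} x{LANES}: {gb_per_s:.1f} GB/s per direction")

# 400GbE moves 400 Gb/s on the wire, i.e. 50 GB/s before protocol overhead.
print(f"400GbE: {400 / 8:.1f} GB/s raw line rate")
```

An x16 Gen 4 slot works out to roughly 31.5GB/sec per direction and Gen 5 to about 63GB/sec, while 400GbE carries 50GB/sec of raw line rate, so the two camps are trading the lead generation by generation.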

The Storage Bits take

Marvell is an important tech enabler, so its new chip is no small thing. PCIe fabrics need cost-effective switches and storage to meet customer goals.

It's great to see the competition between Ethernet and PCIe. We'll all benefit from faster and more efficient networks.

Also: Western Digital shakes up data storage

Whether or not you plan to try composable infrastructure on a half-dozen racks, if your workloads include AI apps, the ability to attach GPUs directly to the fabric may be decisive. Shared GPUs and fast SSDs on the same fabric are the next evolution of enterprise infrastructure.

Comments welcome!

RELATED AND PREVIOUS COVERAGE:

Rat brain for rent: Smarter AI in hyperscale datacenters

AI is artificial, but it isn't very intelligent. What if you could rent an artificial rat brain, or even better, an artificial human brain that could learn much faster? A startup is working to make that happen within 5 years.

WD's Software Composable Infrastructure: Your own configurable cloud.

Western Digital announces the first devices in its program for Software Composable Infrastructure (SCI). SCI is getting real. Why should you care?
