First, what is SCI (Software Composable Infrastructure)? Imagine a server: some CPU(s); memory; and storage. There are only so many CPU cycles, so many GB of memory and storage, and the bandwidth between all the pieces is fixed, set in silicon by engineers who have only the faintest idea of what your application may need.
Now imagine a server where all the elements can be configured in seconds through an app.
Doing real time analytics? Load up on CPU cores, high bandwidth interconnect, and lots of fast storage. Searching an archive? Configure as much storage as you need, with just enough CPU and bandwidth to do the job.
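To make that concrete, here's a minimal sketch of what workload-driven composition might look like. The profile names, resource fields, and quantities are purely illustrative assumptions, not WD's actual API or product limits:

```python
# Hypothetical resource profiles for a composable pool.
# Field names and quantities are illustrative only.
PROFILES = {
    "realtime_analytics": {"cpu_cores": 64, "dram_gb": 512,
                           "nvme_tb": 20, "fabric_gbps": 100},
    "archive_search":     {"cpu_cores": 8,  "dram_gb": 64,
                           "hdd_tb": 500, "fabric_gbps": 25},
}

def compose(workload: str) -> dict:
    """Return the resource request for a named workload.

    Raises KeyError for an unknown workload name.
    """
    return dict(PROFILES[workload])

analytics = compose("realtime_analytics")   # heavy on CPU and NVMe
archive = compose("archive_search")         # heavy on disk capacity
```

The point of the sketch: the same physical pool serves both requests, and the split between them can change as the workloads do.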
Think of it as your own configurable cloud.
WD didn't invent this idea. Intel has been talking about Rack Scale Integration for years, and PCIe was originally intended for inter-server communication. Liqid is shipping SCI today, along with a PCIe switch and, also announced today, a native NVMe SSD.
But, as Phil Bullinger, senior VP and GM of Data Center Systems at WD, said this morning, WD's embrace of SCI is notable for a couple of reasons. First, the company is building on Redfish and Swordfish, existing open standards being developed for SCI by the DMTF and SNIA, respectively, and adding its own open layer, the Kingfish API, which enables flash and disk pools to be presented as composable resources.
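Redfish is a RESTful management API, and its composability model lets a client bind resource blocks into a new logical system. Here's a hedged sketch of what such a request might look like; the resource block paths and the management host are hypothetical, and the exact schema depends on the implementation:

```python
import json
import urllib.request

def compose_request(blocks: list) -> dict:
    """Build a Redfish-style composition payload: a new logical
    system bound to the given resource blocks. Paths follow the
    general shape of DMTF's composability model; field details
    vary by implementation."""
    return {
        "Name": "composed-node",
        "Links": {
            "ResourceBlocks": [{"@odata.id": b} for b in blocks],
        },
    }

def submit(host: str, payload: dict):
    """POST the composition request to a (hypothetical) Redfish
    service endpoint and return the HTTP response."""
    req = urllib.request.Request(
        f"https://{host}/redfish/v1/Systems",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    return urllib.request.urlopen(req)

# Illustrative resource block paths -- not real WD endpoints.
payload = compose_request(["/redfish/v1/ResourceBlocks/Compute1",
                           "/redfish/v1/ResourceBlocks/Storage1"])
```

The interesting part is what the model implies: compute and storage are just entries in a pool, and "a server" becomes a document you POST rather than a box you rack.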
Second, WD today announced its OpenFlex SCI products. Both are storage: the F3000 Fabric device and the D3000 Series Fabric device. The F3000 is optimized for high performance; the D3000 for capacity.
I didn't mention fabrics until now because, while essential, they appear to add a layer of complexity. Conceptually, though, they don't: the fabric replaces several interconnects within the server, such as QuickPath, PCIe, and SATA.
Since the fabric is a network, each SCI element needs to be a node on that network. The F3000 and D3000 are full-fledged nodes on the fabric, thus the Fabric nomenclature.
While the fabric is conceptually simpler, it can add undesirable latency and cost. The speed and capacity of the fabric, whatever its underlying technology, will be critical to SCI's success.
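A rough latency budget shows why the fabric matters more for flash than for disk. The figures below are ballpark numbers for illustration (local flash reads are commonly in the tens of microseconds; NVMe-over-Fabrics implementations target roughly ten microseconds of added latency per hop), not measurements of any product:

```python
# Illustrative latency budget in microseconds. Figures are rough,
# commonly cited ballpark numbers, not product measurements.
LOCAL_NVME_READ_US = 90    # typical local flash read
FABRIC_HOP_US = 10         # rough added latency per fabric hop

def fabric_read_us(hops: int) -> float:
    """Estimated flash read latency across the given number of hops."""
    return LOCAL_NVME_READ_US + hops * FABRIC_HOP_US

# One hop adds roughly 11% to a flash read under these assumptions;
# a disk read, measured in milliseconds, barely notices the same hop.
overhead = fabric_read_us(1) / LOCAL_NVME_READ_US - 1
```

That asymmetry is why the F3000's performance case depends so heavily on fabric latency, while the capacity-oriented D3000 has much more headroom.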
The Storage Bits take
The industry is still in the early stages of the cloud revolution, despite the profound impact cloud has already had on mobile services and enterprise data centers. There is no intrinsic reason that cloud vendors should be cheaper than in-house infrastructures, and SCI will be key in reducing the delta that exists today.
Once you have SCI installed, the common problems of overconfiguration and inefficient processing will be manageable. As workloads change, so can the infrastructure that runs them. You'll buy the CPU, storage, memory, and interconnect you need, rather than what the vendor has baked into their products.
More importantly, you'll be buying from vendors like WD and others that are used to high-volume production, which lowers costs, and lower margins, which lower prices. Every time the industry has managed both, new applications have exploded.
That will be good for us all.
Comments welcome. I'm at the annual Flash Memory Summit in Silicon Valley on my own dime. WD paid for my airfare to a WD event in Silicon Valley a couple of months ago.