Even though object storage has been the fastest growing part of storage for the last decade, block storage still rules in many datacenters. Vendors have been searching for the key to scale-out economy and data sharing to make enterprise block storage competitive with cloud storage. Excelero may have found the key.
Here's the problem: All-flash arrays (AFAs) are fast and offer many services, but they're much more expensive than NVMe SSDs, many of which offer similar I/O performance. But if you install NVMe SSDs in every server, that performance and capacity is marooned in each individual server.
Wouldn't it be great if you could share that performance and capacity across all your enterprise apps? It would. And Excelero may have figured it out with their NVMesh software.
I spoke to CTO Yaniv Romem last month to learn more about what Excelero has done. In a nutshell, Excelero's Server SAN offers software-defined block storage on standard servers. Their software enables sharing of all internal server storage - including high-performance flash - over the network.
This means that enterprises can capture the cost advantages of NVMe/PCIe SSDs without giving up the application advantages of shared data, plus the operational advantages of data redundancy across the network.
The sharing economy
Server-based storage is much less expensive than network-based storage because network bandwidth and infrastructure are costly. While AFAs offer important services, such as deduplication and compression, there's no architectural reason those services can't be provided at a higher level. It is these services that limit AFA performance, making individual NVMe/PCIe SSDs competitive in performance with much costlier AFAs.
The secret sauce
In a world of abundant IOPS, latency is the key to high performance. So how does Excelero achieve low latency across a network?
The key is a new type of network interface card, called an RNIC, that enables remote storage access without server CPU involvement. Update: RNICs are in beta today and will be available this summer. End Update. The NVMe/PCIe SSD is memory-mapped to the RNIC to eliminate the need for CPU interrupts.
RNICs offer near-local latency, adding only about 5 microseconds of overhead compared to local access. With 25Gb/s Ethernet here today, and 100Gb/s Ethernet arriving soon, it is feasible to have 200Gb/s - about 25GB/s - of bandwidth to NVMe/PCIe SSDs.
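The back-of-envelope numbers above are easy to check. Here's a minimal sketch of the arithmetic; the 90-microsecond local NVMe read latency is an illustrative assumption, not a figure from Excelero:

```python
# Back-of-envelope check of the bandwidth and latency figures above.

def gbps_to_gigabytes_per_sec(gbps: float) -> float:
    """Convert gigabits per second to gigabytes per second (8 bits per byte)."""
    return gbps / 8.0

# Two 100Gb/s Ethernet ports give an aggregate 200Gb/s pipe.
aggregate_gbps = 2 * 100
print(f"{gbps_to_gigabytes_per_sec(aggregate_gbps):.0f} GB/s")  # 25 GB/s

# Latency: ~5 microseconds of network overhead on top of an assumed
# ~90 microsecond local NVMe read is a modest fraction of the total.
local_us = 90.0                 # assumed local read latency
remote_us = local_us + 5.0      # RNIC overhead from the article
print(f"remote access is {remote_us / local_us:.2f}x local latency")
```

At roughly 1.06x local latency in this example, "near-local" looks like a fair description.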
The Storage Bits take
Excelero's product wouldn't have made sense with disk drives, because their low performance meant that they had to be placed in arrays. NVMe/PCIe SSDs are a different story, with individual SSDs offering hundreds of thousands of IOPS.
It is the cost of stranded SSDs - whose utilization might be only 15-20 percent of their potential - that keeps AFAs in the game. The advent of RNICs and near-local performance across a network means that SSDs can be shared. That is an economic game-changer for data centers struggling to compete with cloud providers.
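The stranded-SSD economics can be sketched with some simple math. All the dollar figures and utilization rates below are hypothetical assumptions for illustration, not vendor numbers:

```python
# Hypothetical cost-per-usable-terabyte comparison: SSDs stranded in
# individual servers vs. SSDs pooled and shared over the network.

def cost_per_usable_tb(ssd_cost: float, raw_tb: float, utilization: float) -> float:
    """Purchase cost divided by the capacity actually consumed."""
    return ssd_cost / (raw_tb * utilization)

ssd_cost = 1500.0   # assumed price of one NVMe SSD (USD)
raw_tb = 4.0        # assumed raw capacity per SSD

stranded = cost_per_usable_tb(ssd_cost, raw_tb, 0.175)  # 15-20% utilization
pooled = cost_per_usable_tb(ssd_cost, raw_tb, 0.80)     # assumed shared utilization

print(f"stranded: ${stranded:,.0f}/usable TB, pooled: ${pooled:,.0f}/usable TB")
```

Under these assumptions, pooling cuts the effective cost per usable terabyte by more than 4x, which is the gap that shared NVMe is chasing.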
Courteous comments welcome, of course.