Solving the problem of server-side flash

Today's servers use a fraction of the capabilities of high-end flash storage - but they pay full price for the unused performance. How do we fix that?

High-end flash drives from Micron, Intel, WD, Seagate, Samsung, Toshiba and others are wonders of modern storage. Capable of hundreds of thousands of IOPS at microsecond latencies and gigabytes per second of bandwidth, their performance rivals that of some million-dollar storage arrays - at a fraction of the price.

Key to maximizing flash performance is to keep the flash close to the CPU, which today means NVMe, a protocol running over PCIe. But that keeps the flash inside the server, rather than out on a SAN.
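
As an aside, on a reasonably recent Linux system you can confirm that an NVMe drive really is a directly attached PCIe device by peeking at sysfs. The controller name (nvme0) and the exact attribute paths below are assumptions that vary by kernel version; treat this as a quick sketch rather than a supported tool:

```python
# Minimal sketch: inspect a local NVMe controller through Linux sysfs.
# "nvme0" is a placeholder for the first controller; attribute paths may
# differ on older kernels.
from pathlib import Path

ctrl = Path("/sys/class/nvme/nvme0")
print("model:    ", (ctrl / "model").read_text().strip())
print("transport:", (ctrl / "transport").read_text().strip())  # "pcie" for a local drive
print("address:  ", (ctrl / "address").read_text().strip())    # PCIe bus:device.function
```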

And that performance isn't cheap. High-performance flash requires massively parallel I/O channels, large caches, powerful processors and sophisticated firmware to create the magic.

All that costs money. And while flash itself is getting cheaper, the needed engineering and infrastructure isn't. High-performance flash is - and will remain - expensive.

Which creates a problem: you want the performance, but you can't afford to put these drives in every server. Obviously, you want to share them across multiple servers to amortize the cost and increase utilization.

But how? Software and network latency can quickly push raw single-digit-microsecond latencies into the millisecond range - killing the performance you've paid handsomely for.
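
A back-of-the-envelope budget shows how fast that happens. The numbers below are illustrative assumptions, not measurements:

```python
# Hypothetical latency budget for one remote 4K read. All figures are
# illustrative assumptions, not benchmarks.
device_us = 10        # raw flash read latency on a high-end NVMe drive
network_us = 50       # round trip on a shared 10G Ethernet or InfiniBand fabric
software_us = 500     # disk-era I/O stack: interrupts, context switches,
                      # queueing, protocol translation

total_us = device_us + network_us + software_us
print(f"device alone:           {device_us} us")
print(f"device + network + SW:  {total_us} us "
      f"({total_us / device_us:.0f}x the raw device latency)")
```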

The network solution

At the recent Flash Memory Summit, multiple vendors ran lab demonstrations of how the problem might be solved. These demonstrations - not products - included:

  • High-speed flash - or in one case even faster phase change memory - block devices.
  • High-bandwidth InfiniBand or 10G Ethernet networks.
  • NVMe storage interfaces.
  • Remote direct memory access (RDMA).

But the hard part is the storage stack. As researchers at UCSD have documented, I/O software designed for disks becomes the bottleneck with high-performance flash, taking more than 80 percent of the access time.
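
One rough way to see where that time goes is to measure end-to-end access latency from an application, using O_DIRECT so the page cache doesn't mask the device and I/O-stack overhead. The device path below is a placeholder, it needs Linux and root, and it's a sketch, not a proper benchmark:

```python
import os, time, mmap

# Sketch: time 4 KiB reads against a local NVMe block device with O_DIRECT,
# so the page cache can't hide the combined device-plus-software latency.
# /dev/nvme0n1 is a placeholder; run against a drive you can safely read.
DEV = "/dev/nvme0n1"
BLOCK = 4096
COUNT = 1000
SPAN = 1 << 30                  # sample offsets within the first 1 GiB

buf = mmap.mmap(-1, BLOCK)      # anonymous mmap is page-aligned, as O_DIRECT requires
fd = os.open(DEV, os.O_RDONLY | os.O_DIRECT)

start = time.perf_counter()
for i in range(COUNT):
    # stride across the device so successive reads hit different blocks
    os.preadv(fd, [buf], (i * BLOCK * 257) % SPAN)
elapsed = time.perf_counter() - start
os.close(fd)

print(f"average: {elapsed / COUNT * 1e6:.1f} us per {BLOCK}-byte read")
```

On a fast local NVMe drive that typically comes back as tens of microseconds per read; route the same I/O across a network and a disk-era software stack and the number climbs quickly.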

The hardware is ready. When will we get the software that takes us to the next performance level?

The Storage Bits take

Expect to see several companies announce software designed to overcome these issues in the coming months. Some will focus on manufacturers, others on end-users.

If you architect high-performance systems, stay tuned. We're still in the early stages of flash adoption, and major advances are coming soon.

Comments welcome, as ever.