Can this 100+ TB cloud SSD controller herald the all-flash data center?

A decade after flash burst on the enterprise scene, all-flash data centers remain the exception. But startup Burlywood plans to rearchitect flash SSDs so even hyperscale data centers can go all flash. Here's how.
Written by Robin Harris, Contributor

High capacity disks have found a home in hyperscale data centers because their cost per gigabyte is unbeatable. But because they deliver only about 10-20 IOPS per terabyte, they have to be deployed in large numbers to support the three data copies typical of hyperscale storage.

So hyperscale architects also have to deploy lots of SSDs to handle metadata, which requires thousands of IOPS per GB. Maintaining a mix of devices, and managing the performance quirks of SSDs, is a headache. But with SSD capacity costing 8-10x as much as disk capacity, how will warehouse-sized computers ever afford to go all flash?
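To see the scale of the mismatch, here's a back-of-the-envelope sketch. Every drive figure below is my own illustrative assumption, not a number from Burlywood:

```python
# Back-of-the-envelope comparison of random-I/O density for a high-capacity
# disk vs. a flash SSD. All figures are rough, illustrative assumptions.

DISK_TB = 14            # assumed capacity of a high-capacity disk
DISK_IOPS = 200         # assumed random IOPS a 7200 RPM disk can sustain
SSD_TB = 4              # assumed capacity of a data-center SSD
SSD_IOPS = 500_000      # assumed random read IOPS of an NVMe SSD

disk_iops_per_tb = DISK_IOPS / DISK_TB
ssd_iops_per_tb = SSD_IOPS / SSD_TB

print(f"Disk: {disk_iops_per_tb:,.0f} IOPS/TB")   # ~14 IOPS/TB
print(f"SSD:  {ssd_iops_per_tb:,.0f} IOPS/TB")    # 125,000 IOPS/TB

# A hot metadata workload needing 1 million IOPS would take:
need = 1_000_000
print(f"Disks needed: {need // DISK_IOPS:,}")     # 5,000 disks
print(f"SSDs needed:  {need // SSD_IOPS:,}")      # 2 SSDs
```

Four orders of magnitude in I/O density is why the mixed disk-plus-SSD fleet exists in the first place.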

Disk versus SSD

All flash array (AFA) vendors have long touted how competitive they are with disk, using a combination of deduplication, compression, and consumer SSDs to make their case. But the scale-out architectures that cloud vendors use haven't yet figured out how to use shared storage. Rack scale computing is one attempt, but the basic problem remains that network bandwidth is far more costly than internal server bandwidth.

Thus the problem comes down to building high-performance internal SSDs that are both fast enough and cheap enough to compete with the total cost of ownership (TCO) of the latest high capacity disks. Given that flash uses much less power and cooling, weighs far less, and, at 3 bits per cell, offers greater density, the fight isn't as one-sided as it might seem. When your choice is between 1,000,000 disks and 1,000,000 SSDs, those differences add up.
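A toy TCO model shows how the non-acquisition costs narrow the gap at fleet scale. All of the prices and wattages here are my own assumed inputs, chosen only to illustrate the shape of the comparison:

```python
# Toy TCO model comparing a disk fleet to an SSD fleet at hyperscale.
# Every input is an assumed, illustrative figure - plug in your own.

DRIVES = 1_000_000
YEARS = 5
KWH_COST = 0.10                  # assumed $ per kWh, cooling folded in

def fleet_cost(price_per_drive, watts_per_drive):
    """Purchase price plus power drawn over the fleet's service life."""
    power_kwh = DRIVES * watts_per_drive / 1000 * 24 * 365 * YEARS
    return DRIVES * price_per_drive + power_kwh * KWH_COST

disk = fleet_cost(price_per_drive=300, watts_per_drive=8)   # assumed
ssd  = fleet_cost(price_per_drive=900, watts_per_drive=3)   # assumed

print(f"Disk fleet over {YEARS} years: ${disk:,.0f}")
print(f"SSD fleet over {YEARS} years:  ${ssd:,.0f}")
```

With these made-up numbers disk still wins on raw dollars, but the power line item alone runs to tens of millions, before counting floor space, weight, and the SSDs the disk fleet still needs for metadata.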

TrueFlash software-defined flash

Unlike you and me, cloud vendors can go direct to manufacturers to get their flash and their custom designs built in volume, a fact that Burlywood is counting on. Their product, announced today, is TrueFlash software, a platform they say is the "...industry's first modular flash architecture designed specifically for hyperscale datacenters...".

What does that mean? Let's break it down.

  • The controller is programmable - using an FPGA - so it can be quickly customized.
  • It can use any NAND flash on the market, for extra cost and performance flexibility.
  • It supports SATA, SAS, and NVMe interfaces.
  • It can run multiple, different Flash Translation Layers (FTLs) concurrently.
  • The FTLs are sandboxed so they won't interfere with each other.
  • One controller can handle over 100TB of flash with a fraction of the DRAM overhead of most SSDs.

Running multiple FTLs - or none at all - means that cloud vendors can customize FTLs for different workloads: streaming, small block, and wide I/O for machine learning.
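Here's one way to picture concurrent, sandboxed FTLs. This is purely a conceptual Python sketch of the idea - the class names, block ranges, and workload tags are all mine, not Burlywood's design: each FTL owns a disjoint slice of physical blocks, and the controller routes I/O by workload.

```python
# Conceptual sketch of multiple Flash Translation Layers sharing one
# controller. Each FTL gets an exclusive slice of physical blocks
# ("sandboxed"), so one workload's allocation and garbage collection
# can't touch another's flash. Illustrative only.

class FTL:
    def __init__(self, name, phys_start, phys_end):
        self.name = name
        self.free = list(range(phys_start, phys_end))  # owned blocks only
        self.map = {}                                  # logical -> physical

    def write(self, logical_block):
        phys = self.free.pop(0)           # deliberately simplistic policy
        self.map[logical_block] = phys
        return phys

class Controller:
    """Routes I/O to a per-workload FTL, keyed by a tag such as an
    NVMe namespace. Sandboxing = disjoint physical block ranges."""
    def __init__(self):
        self.ftls = {
            "streaming":   FTL("streaming",   0,    1000),
            "small_block": FTL("small_block", 1000, 2000),
            "wide_io":     FTL("wide_io",     2000, 3000),
        }

    def write(self, workload, logical_block):
        return self.ftls[workload].write(logical_block)

ctrl = Controller()
p1 = ctrl.write("streaming", 42)
p2 = ctrl.write("small_block", 42)
print(p1, p2)   # 0 1000 - same logical block, disjoint physical homes
```

The point of the sandbox is the disjoint `free` lists: a real per-workload FTL would also tune its allocation, wear leveling, and garbage collection policies to that workload's I/O pattern.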

The Storage Bits take

I doubt we'll ever see this technology in consumer products. But if it makes the cloud faster, cheaper, and more efficient, we'll all benefit anyway.

More importantly, the concepts embodied in TrueFlash feel like the start of a second generation of SSDs. Expect other designers to come out with their own takes on high capacity, high performance flash controllers that give sysadmins far greater ability to tune SSDs for their workloads.

Courteous comments welcome, of course.

