Tachyum bets on flash storage to re-architect the cloud data center

Cloud datacenters rely on acres of disk drives to store data, and startup Tachyum aims to change that with an all-flash cloud.
Written by Robin Harris, Contributor

Tachyum's secret sauce is a combination of transistor physics and advanced data encoding. How will it work?

Tachyum's founder and CEO, Dr. Radoslav Danilak, is an experienced chip designer, architect, and entrepreneur. His earlier startups, SandForce and Skyera, focused on flash storage.

Tachyum includes flash storage in its value proposition, but doesn't stop there. Tachyum is developing a "Cloud Chip" that is optimized for low-power performance, combined with a software layer that enables current applications to run on their new architecture.

Fast transistors, slow wires

You've likely noticed that while transistors continue to get smaller, chip speeds have not improved. Why is that?

Smaller chip feature sizes are great for building fast transistors, but the resistance of the on-chip interconnecting wires increases as they shrink. That makes data harder and slower to move, limiting performance.
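The scaling problem comes straight from the resistance formula R = ρL/A: shrink a wire's width and height and its cross-sectional area falls with the square, so resistance per unit length climbs with the square. A rough sketch (a simplified model using bulk copper resistivity; real nanoscale wires fare even worse due to surface and grain-boundary scattering, and the node sizes below are illustrative, not Tachyum's):

```python
RHO_COPPER = 1.68e-8  # ohm*m, bulk resistivity of copper

def wire_resistance(length_m, width_m, height_m):
    # R = rho * L / A for a rectangular wire cross-section
    return RHO_COPPER * length_m / (width_m * height_m)

# Same 1 mm wire at two feature sizes: halving both dimensions
# quadruples resistance, so signals move data more slowly.
r_old = wire_resistance(1e-3, 90e-9, 90e-9)
r_new = wire_resistance(1e-3, 45e-9, 45e-9)
print(f"Resistance ratio after shrink: {r_new / r_old:.1f}x")  # 4.0x
```

Transistors get faster as they shrink; the wires between them get worse. That is the asymmetry Tachyum is designing around.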

Overcoming physics

Tachyum's solution: dramatically decrease data movement by performing operations in storage rather than in CPU registers, while its software layer lets existing hyperscale data applications run on the new architecture.

Because data movement is reduced, so are power and heat. Tachyum expects to put 100 servers in a 1U rackmount box, using a fraction of the power that x86 servers need.

Much cheaper storage

Another major part of Tachyum's savings comes from using advanced erasure coding to eliminate the standard 3x data copies that hyperscale storage systems typically require. These erasure codes are widely used today in large-scale active archives, but their computational and network requirements make them uneconomic in cloud datacenters.

Tachyum's cloud chip overcomes these problems by including many 100Gb Ethernet links and hardware that accelerates the erasure coding process. Instead of 3 copies of each file, they claim a 1 percent increase in file size with better-than-RAID 6 data resilience, cutting required raw capacity by two-thirds - and making all-flash affordable.
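The arithmetic behind that claim is straightforward. Triple replication stores 3 bytes for every logical byte; an erasure code with k data shards and m parity shards stores (k+m)/k bytes per logical byte. A quick sketch (the shard counts here are my illustration of a ~1 percent overhead code that tolerates two shard losses, not Tachyum's published parameters):

```python
def replication_overhead(copies=3):
    # Raw bytes stored per logical byte under full replication
    return float(copies)

def erasure_overhead(data_shards, parity_shards):
    # Raw bytes stored per logical byte under (k+m) erasure coding
    return (data_shards + parity_shards) / data_shards

rep = replication_overhead(3)    # 3.00x: three full copies
ec = erasure_overhead(200, 2)    # 1.01x: ~1% extra, survives any 2 lost shards
savings = 1 - ec / rep
print(f"{ec:.2f}x vs {rep:.2f}x raw capacity -> {savings:.0%} less storage")
```

Dropping raw capacity from 3.0x to roughly 1.01x is what turns "flash is too expensive for bulk cloud storage" into a workable economic argument.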

The payoff

With massive reductions in power consumption, storage footprint, and server hardware cost, Tachyum expects its cloud chip-based systems to come in at 1/4 the cost of current cloud systems. At the scale the cloud giants are operating, giving Tachyum a fraction of their hardware spend would save them billions annually.

The Storage Bits take

Bravo to Tachyum for architecting a clean-sheet design for hyperscale computing. They say they have an FPGA prototype of their cloud chip today, and they plan to ship their ASIC version next year.

In the meantime they're showing the cloud vendors what they have. Given the economics, I don't doubt that they are getting serious attention.

What I find most interesting, though, is their in-storage processing. Scale changes everything, and it may be that our standard von Neumann CPU architectures need overhauling for the age of Big Data.

It may never come to your laptop, but as more and more computing resides in data centers, an approach like Tachyum's is needed to keep scaling the cloud.

Courteous comments welcome, of course.
