SSD write caching for server performance

Non-volatile SSD storage makes it possible for servers to safely cache writes locally for a big performance boost. But what is the best way to do it?
Written by Robin Harris, Contributor

I attended my favorite storage conference last week in Silicon Valley, the USENIX Conference on File and Storage Technologies (FAST). This is the foremost gathering of corporate and academic researchers and practitioners in the storage world. I'll be looking at several presentations with significance for storage consumers over the next several days.

First up: Write Policies for Host-side Flash Caches by Leonardo Marmol, Raju Rangaswami and Ming Zhao of Florida International University (FIU), Swaminathan Sundararaman and Nisha Talagala of Fusion-io and Ricardo Koller of FIU and VMware.

Why write caching? Write-through caching is safe because all writes are committed to disk before being acknowledged. But write-through is expensive in time and I/O traffic since the server has to wait for the network storage to complete the write.

Write-back caches - where the write is cached locally before being written to network storage - reduce I/O latency, improve server performance and use network storage bandwidth more efficiently. Better write caching is needed because read caching is not the win it used to be: thanks to large DRAM read caches in server memory, read-only flash caching isn't a major performance booster on most servers.
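To make the difference concrete, here is a toy sketch of the two policies - my own illustration, not code from the paper, with placeholder names (Backend, WriteThroughCache, WriteBackCache) standing in for the network storage and the server's flash cache:

class Backend:
    """Stand-in for slow, shared network storage."""
    def __init__(self):
        self.blocks = {}

    def write(self, block, data):
        self.blocks[block] = data


class WriteThroughCache:
    def __init__(self, backend):
        self.cache, self.backend = {}, backend

    def write(self, block, data):
        self.cache[block] = data
        self.backend.write(block, data)  # wait for network storage...
        return "ack"                     # ...before acknowledging the write


class WriteBackCache:
    def __init__(self, backend):
        self.cache, self.backend = {}, backend
        self.dirty = set()               # blocks not yet on network storage

    def write(self, block, data):
        self.cache[block] = data         # lands in the local non-volatile flash
        self.dirty.add(block)
        return "ack"                     # acknowledge immediately

    def destage(self):
        # Flush dirty blocks later, off the application's latency path.
        for block in self.dirty:
            self.backend.write(block, self.cache[block])
        self.dirty.clear()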

If a cache access takes 50µs and a network storage access takes 2ms, then average I/O rates roughly double when the cache hit rate rises from 95 percent to 99 percent. Doubling server I/O performance by changing caching strategy is a Very Good Thing.
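The arithmetic behind that claim, using the same two latencies (a back-of-the-envelope model, not a benchmark):

cache_us, storage_us = 50, 2000          # 50µs flash cache hit, 2ms network storage miss

def avg_latency_us(hit_rate):
    # Average I/O latency = hits served from cache + misses served from network storage
    return hit_rate * cache_us + (1 - hit_rate) * storage_us

print(avg_latency_us(0.95))  # 147.5µs per I/O
print(avg_latency_us(0.99))  #  69.5µs per I/O - roughly half, so about twice the I/O rate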

Options for write caching. NAND's non-volatility enables novel write-back cache strategies that preserve data integrity while improving performance. The paper explores two write-back caching strategies, ordered and journaled.

Ordered write-back is the simpler of the two: it preserves the original order of data block updates when writing to the network storage. Journaled write-back allows writes to be coalesced in the cache, with atomic journal updates to network storage for recovery if needed.
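Here is my own simplified reading of the two strategies, sketched as toy classes; the paper's actual mechanisms, and especially its crash-recovery details, are more involved:

from collections import OrderedDict


class JournalBackend:
    """Stand-in for network storage that can also apply a batch atomically."""
    def __init__(self):
        self.blocks = {}

    def write(self, block, data):
        self.blocks[block] = data

    def commit(self, txn):
        self.blocks.update(txn)          # pretend-atomic journal transaction


class OrderedWriteBack:
    """Destage cached writes in the original update order."""
    def __init__(self, backend):
        self.backend = backend
        self.log = []                    # every (block, data) update, in arrival order

    def write(self, block, data):
        self.log.append((block, data))
        return "ack"

    def destage(self):
        for block, data in self.log:     # replay in the order the writes were issued
            self.backend.write(block, data)
        self.log.clear()


class JournaledWriteBack:
    """Coalesce writes in the cache, then commit them atomically."""
    def __init__(self, backend):
        self.backend = backend
        self.pending = OrderedDict()     # only the latest data per block survives

    def write(self, block, data):
        self.pending[block] = data       # later writes to a block overwrite earlier ones
        return "ack"

    def destage(self):
        self.backend.commit(dict(self.pending))  # all-or-nothing journal update
        self.pending.clear()

The coalescing is where the journaled variant gets its speed: repeated writes to the same block cost one network write instead of many, but recovery then depends on the journal commit being atomic.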

The authors implemented a Linux test bed with the ext3 file system. They found that both strategies are major improvements over write-through caching. The journaled write-back cache is more complex, but is also usually faster than the ordered cache.

The Storage Bits take. Using flash only for reads means ignoring half - or more - of the I/O problem. PCIe NAND flash is expensive compared to disk - and so are servers - so maximizing the economic benefit is a worthy goal.

Industry politics may scuttle this approach though. The network storage vendors don't want servers emitting steady streams of updates: they want bursty traffic that forces over-configuration and larger sales.

The technology is there. What about the will to use it?

Comments welcome, as always. How much would you say a PCIe flash storage card is worth if it would double your system performance?
