The various NVDIMMs coming to market all promise to be both cheaper and more capacious than DRAM, with the added benefits of non-volatility and, often, lower power consumption. Over the last decade researchers have been exploring how best to use their capabilities.
Finally, the first NVDIMM has come to market. Intel's Optane DC Persistent Memory Module sits on the processor's memory bus, is byte addressable - unlike block-based storage - and is much faster than any SSD.
There's no free lunch, though. Optane's latency is higher than DRAM's, it requires second-generation Cascade Lake Xeon processors, and its read and write bandwidths differ widely.
. . . its max read bandwidth is 33.2 GB/s and scales with thread count, whereas its max write bandwidth is 8.9 GB/s and peaks at only four threads.
Optane operates in one of two modes: Memory mode or App Direct mode. Memory mode
. . . uses Optane DC to expand main memory capacity without persistence. It combines an Optane DC PMM with a conventional DRAM DIMM that serves as a direct-mapped cache for the Optane DC PMM. The CPU and operating system simply see a larger pool of main memory.
In App Direct mode, Optane appears as a separate persistent device. The system installs a special file system through which Optane-aware applications can access the device with simple load and store instructions, and which ensures crash consistency.
Intel loaned researchers at UC San Diego test systems loaded with 3TB of Optane in 256GB DIMMs, though 128GB and 512GB capacities are also available. Their first paper includes performance data for Optane as main memory, as persistent storage, and, perhaps most interesting, as persistent memory.
The team tested Optane with Memcached and Redis workloads and found that, even with the DRAM cache, performance dropped between 8.6 and 19.2 percent compared to DRAM alone. That sounds bad, but remember: the server can now support 3TB of main memory instead of only 192GB of DRAM, vastly reducing storage traffic.
When Optane is used as storage, the DRAM cache is disabled and the system sees a persistent, block-based storage device. Linux supports several file systems, such as Ext4 and XFS, that can directly access Optane as a block device, as well as NOVA, a file system designed for persistent memory.
The team tested several workloads, including SQLite, MySQL, RocksDB, and MongoDB. Compared to running on a SATA SSD, they found performance gains ranging from 50 percent to 20x.
The team also modified versions of Redis and RocksDB to map Optane into their address spaces and access it directly with loads and stores. RocksDB performance increased 3.5x, while Redis achieved a more modest 20 percent gain.
The Storage Bits take
These are very early days for NVDIMMs, but this research points to the possible performance gains from using high-density memory and storage sitting on the server memory bus. As with SSDs, there will be, no doubt, some serious application tuning required to take full advantage of Optane performance.
But there are two more interesting threads to follow. First, Intel isn't supporting Optane on non-Intel processors, and even on Intel's side it's Xeon only. Which means that, for most of us, Optane is a non-event.
Second, Optane isn't the only game in town. Nantero's carbon nanotube (CNT) memory promises to be faster and less costly than Optane, and Arm has licensed two other NVRAM technologies as well.
Each of these competing technologies will have different performance envelopes and power requirements. NVDIMM vendors may end up battling for developer resources, based on application performance and market demand.
The cloud companies are likely to be the first with large scale Optane deployments. I'm told Optane has been in volume production for some months, and who else could quietly soak up a lot of Optane volume?
Bottom line: the NVRAM/NVDIMM revolution is just starting. It promises to be a wild ride.
Courteous comments welcome, of course.