How massive? How about 10x standard server memory capacity?
There is a boom in large memory server products - anywhere from a quarter terabyte to 2 terabytes. Is this the ultimate solution to server performance problems?
Some products
Violin Memory is offering a 500 GB DRAM eSATA "memory appliance". MetaRAM's 16 GB DIMMs let you build a 16-core server with 256 GB of RAM for under $50k.
The unrelenting hype over flash drives hasn't overcome their huge cost disadvantage and mediocre performance in notebook systems. But Salt Lake City startup Fusion-io is delivering a 640 GB PCIe flash card promising really hot performance for servers.
Then there are the network products. Gear6 has a new, low-end 125,000 IOPS NAS cache box. Texas Memory Systems has a 2 TB flash solid state disk for SAN attach.
What's going on?
Several trends are powering the new products.
- Memory - both flash and DRAM - is really cheap. The price drop makes it economically feasible to increase memory capacity to 16 to 64 GB at the same ratios we had several years ago.
- Disk drives are getting bigger, not faster. Over-provisioning for performance is costing ever more in power, floor space and unused capacity.
- Most importantly, CPU performance - clock rates, instructions per clock cycle, multiple cores - is demanding much more I/O capacity to keep the processors fed.
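To see what over-provisioning for performance costs, here's a back-of-the-envelope sketch. All the figures in it - 180 IOPS per drive, 300 GB drives, a 2 TB working set - are illustrative assumptions, not numbers from any vendor:

```python
import math

# Illustrative assumptions: a fast disk delivers ~180 IOPS and 300 GB.
# To hit an IOPS target you buy drives for speed, not capacity,
# so most of the purchased gigabytes sit idle.
def overprovision(target_iops, iops_per_drive=180, drive_gb=300, data_gb=2000):
    drives = math.ceil(target_iops / iops_per_drive)  # drives needed for speed
    total_gb = drives * drive_gb                      # capacity you paid for
    unused_gb = total_gb - data_gb                    # capacity you don't need
    return drives, total_gb, unused_gb

# A 50,000 IOPS target under these assumptions:
print(overprovision(50_000))  # (278, 83400, 81400)
```

Under these assumed numbers, a 2 TB dataset ends up spread across 278 spindles and more than 81 TB of unused capacity - all of it drawing power and taking up floor space.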
There's a rule of thumb that each CPU instruction per second requires one byte of memory. With 4 instructions issued per clock cycle, a quad-core 2 GHz CPU executes 32 billion instructions per second, so a Xeon-based server could use 32 GB of RAM. But 256 GB?
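The rule-of-thumb arithmetic is simple enough to sketch in a few lines - this just multiplies out cores, clock rate and issue width, assuming one byte of RAM per instruction per second:

```python
# Rule of thumb: a balanced system wants ~1 byte of RAM
# per instruction executed per second.
def rule_of_thumb_ram_gb(cores, ghz, instructions_per_clock):
    ips = cores * ghz * 1e9 * instructions_per_clock  # instructions/second
    return ips / 1e9  # 1 byte per instruction/s, expressed in GB

# A quad-core 2 GHz CPU issuing 4 instructions per clock:
print(rule_of_thumb_ram_gb(cores=4, ghz=2, instructions_per_clock=4))  # 32.0
```

By that yardstick a 256 GB box is provisioned for roughly eight of today's quad-core sockets - or for workloads, like big databases, that are far more memory-hungry than the rule assumes.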
Will they sell?
The server products are aimed at applications where performance, not price, is key. Databases, media web servers and real-time data ingest are all good targets.
For DRAM-based products the quality of the battery backup is critical, since DRAM loses its contents when the power fails. Flash has performance issues that are costly to fix, but it's non-volatile: you can trust that your data will be there when you need it.
The Storage Bits take
There is definitely a market for massive memory servers. The only question is how many vendors that market will support. At least one - but how many more?
Network-based caches, like the Gear6 NAS box and the TMS block-based SSD, have the advantage of spreading their cost across more servers. Ultimately, the people with big performance requirements will usually have large infrastructures, so the network-based products should have the market size edge.
Comments welcome, of course. How would you use a 256 GB server? Disclosure: I've done some work in the past for Gear6.