Need for speed
Back when 4 or 5 milliseconds was a normal access time, no one much cared about the latency of the I/O stack. What's a few hundred microseconds between friends?
But as fast, faster and fastest NAND flash SSDs arrived, software latency became the long pole in the tent. More important, the old stack wasn't designed to handle tens of thousands of I/Os per second, so performance hit a wall.
NVMe is an open standard designed to address many of those problems. Its features include:
- A scalable queue interface. Each core can have multiple queues - handy for virtual machines - and separate submission and completion queues with up to 64k outstanding commands.
- Efficient queuing. The queuing workload is split between the host and the NVMe controller.
- Command arbitration. Commands can be assigned different priorities - admin commands are high priority - to help manage service levels.
- Simple, fixed-size commands. Three required I/O commands of fixed length mean that lower-cost controllers can handle the SSD's high performance.
- Data management hints. Helps controllers optimize data placement.
The 1.6TB Samsung SFF-8639 SSD claims a sequential read speed of 3GB/s and random reads of up to 750,000 IOPS. It is warranted to deliver 7 full drive writes per day for 5 years.
It is available worldwide today in Dell's new PowerEdge R920 server. Pricing has not been announced, but expect it to be close to $1/GB.
The Storage Bits take
The flash performance discontinuity has forced a deep architectural re-think of I/O over the last 10 years. The bottlenecks keep moving.
NVMe is important because it makes servers much more efficient, so you can buy fewer of them to get your work done. Thus it makes sense that a lagging Dell would be first to market with a product that will reduce server demand: they need every advantage they can get.
Other vendors will follow, eventually. The flash revolution continues, to all customers' benefit.
Comments welcome. Where do you see the greatest benefit of NVMe and even lower-latency I/O?