IDC recently crowned Hewlett-Packard Enterprise the #1 storage vendor, when total storage revenues - external storage systems as well as storage internal to servers - are added up. Five years ago this would have been another example of dumb market analysis. But IDC is on to something.
HP was long the biggest buyer of disk drives, but a huge number went into PCs - hardly a good indicator of enterprise adoption. HPE, however, split off the PC business in 2015, so its storage numbers now mean something.
Back when computing dinosaurs ruled the earth, servers didn't have room for internal storage.
DRAM cost a thousand dollars per megabyte. A 12MB system was a big investment. Disk drives had 12- or 14-inch platters and sat in their own racks. A $50,000 disk drive had less capacity, bandwidth, and IOPS than a $2 thumb drive today. Almost all storage was external, and the advent of Fibre Channel storage networks in 1997 helped place storage even further from the CPU.
But the last decade has seen a major shift. DRAM prices have reached new lows - $5/GB - while memory bandwidth and capacity have exploded. 128GB DIMMs, while costly, enable large, high-performance, in-memory databases inside powerful multi-socket, many-core servers.
The advent of flash-based DIMMs that expand usable memory into the multi-terabyte range will pull even more data onto the memory bus. In the short term, expect NVDIMMs to be used as block storage, but once the necessary tweaks have been made to operating systems, they will extend memory as well.
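The difference between those two modes is easy to see in code. Here's a rough sketch in Python, using an ordinary temp file to stand in for an NVDIMM region (real persistent memory would be exposed by the OS, e.g. as a DAX-capable device; the 4KB block size is illustrative):

```python
import mmap
import os
import tempfile

# An ordinary temp file stands in for an NVDIMM region.
fd, path = tempfile.mkstemp()
os.write(fd, b"\x00" * 4096)
os.close(fd)

# Block-storage style: read-modify-write an entire 4KB "sector"
# just to change five bytes.
with open(path, "r+b") as f:
    block = bytearray(f.read(4096))
    block[0:5] = b"hello"
    f.seek(0)
    f.write(block)

# Memory style: map the region and update bytes in place - no
# full-block read-modify-write.
with open(path, "r+b") as f:
    mem = mmap.mmap(f.fileno(), 4096)
    mem[5:10] = b"world"
    mem.flush()  # analogous to forcing the update to persistence
    mem.close()

with open(path, "rb") as f:
    data = f.read(10)
os.remove(path)
print(data)  # b'helloworld'
```

The OS tweaks mentioned above amount to making the second style safe and fast on real NVDIMMs: byte-addressable updates with well-defined persistence, instead of shuffling whole blocks through a storage stack.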
New PCIe/NVMe SSDs easily offer more performance than high-end storage arrays did five years ago - and fit in a PCIe slot. More importantly, as IOPS and bandwidth have grown, the chief bottleneck has shifted to latency.
IOPS - I/Os per second - used to be the limiting factor in storage performance. But SSDs offer tremendous IOPS, so the bottleneck has shifted to latency.
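The reason is simple queueing arithmetic (Little's Law): throughput equals outstanding requests divided by latency. A back-of-the-envelope sketch, with illustrative device latencies that are not from the article:

```python
def iops(outstanding_ios, latency_seconds):
    """Little's Law: throughput = concurrency / latency."""
    return outstanding_ios / latency_seconds

# One outstanding I/O against a 100-microsecond SATA-class SSD:
print(iops(1, 100e-6))   # ~10,000 IOPS
# Same single outstanding I/O against a 10-microsecond NVMe device:
print(iops(1, 10e-6))    # ~100,000 IOPS
# Deep queues buy aggregate IOPS, but every individual request still
# waits out the full device latency - hence the shift to latency.
print(iops(32, 10e-6))   # ~3.2 million IOPS
```

Once a device can soak up whatever queue depth you throw at it, aggregate IOPS stops being the problem; how long each request waits does not.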
Last year, at the Flash Memory Summit, several vendors demo'd PCIe/NVMe latencies in the single-digit microsecond range, far below the roughly 1 millisecond typical of many SSD-based external arrays. Those were technology demos only, but this year vendors are much closer to shipping products.
There is still a big problem with large memory servers: what happens to data when the server goes down? That's what will continue to drive the network storage market.
But the coming generation of network storage will move beyond today's protocols, Fibre Channel and iSCSI over Ethernet. Remote direct memory access - RDMA - is the technology that will drive storage architecture.
Vendors say RDMA adds only 1-2 microseconds to NVMe latency. That's not nothing, percentage-wise, but the ability to share storage across multiple servers is a big plus.
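Taking those numbers at face value, the latency budget is easy to run. The 10-microsecond local NVMe figure below is an assumption based on the demos mentioned above, not a vendor spec:

```python
# Illustrative latency budget, in microseconds.
local_nvme   = 10     # assumed local PCIe/NVMe read latency
rdma_hop     = 2      # vendor-claimed RDMA overhead (upper bound)
legacy_array = 1000   # ~1 ms for many SSD-based external arrays

remote_nvme = local_nvme + rdma_hop  # 12 us to shared NVMe over RDMA
print(f"RDMA overhead: {rdma_hop / local_nvme:.0%}")              # 20%
print(f"vs. legacy array: {legacy_array / remote_nvme:.0f}x faster")
```

A 20 percent latency tax on local NVMe still leaves remote storage almost two orders of magnitude faster than the arrays it replaces, which is why shared RDMA storage is such an attractive trade.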
The move from purely network storage to a mix of network and internal storage is a massive shift. Properly architecting distributed systems will become more complex, but the payoff will be enormous.
Why? The cost of bandwidth is a big issue. Internal server bandwidth is cheap, built into the CPU, motherboard, and system backplane.
Connect to a network though, and the cost of interfaces, cabling, switches, and management adds up fast. Balancing the cost advantages of internal storage against the availability advantages of external storage isn't easy.
This also explains why Dell is buying EMC, the world's largest storage company. EMC doesn't have an internal server storage story, and Dell doesn't have a good external storage story. Together, they will displace HPE from its #1 perch in enterprise storage.
Courteous comments welcome, of course.