The next data center dinosaur: Traditional storage

Summary: I realized, after speaking with SanDisk, that traditional storage is all but dead. What's going to replace it will surprise you.

TOPICS: Storage, Hardware

Sure, I get it: things change slowly in data centers. But one rapid change is sweeping through data centers all over the world: the switch to solid state storage.

Notice that I didn't write solid state drives; I wrote storage instead. There's a difference, and what's about to happen will surprise just about everyone except storage professionals. (Hint: Your spinning disks and disk-like SSDs will soon be extinct, so don't get too sentimental or wax nostalgic over them.)

I think just about everyone realizes that the move to SSDs is already underway in the data center. Traditional disk storage is replacing tape. SSDs are replacing traditional disks. And solid state storage will replace SSDs.

OK, I know a lot of you just decided that I'm full of radio tubes, but I'm not. Just as transistors (a solid state technology) replaced vacuum tubes, solid state storage will not only replace traditional storage (arrays of spinning disks), but it will also replace now traditional SSDs.

Let me explain.

As the need for higher performance and greater capacity grows, manufacturers will have to meet it with faster, cheaper, larger storage units. Notice again that I didn't write the word drives. That's because what we now know as drive technology, and the drive form factor, is about to become extinct. It's already happening.

If you don't believe me, open your Chromebook and tell me what you see. Do you see a spinning disk? No. Do you see a 2.5-inch SATA or SAS SSD? Nope. What you see is part of the future: an M.2 "SSD". It's a tiny next generation form factor (NGFF) "disk" that holds your Chrome operating system and local files. And yes, it's even newer and cooler than the mini-SATA (mSATA) disks currently used in some other laptops.

But the M.2, as new and cool as it is, is still not what's going to happen with storage in the longer term.

So, I know at this point you're screaming, "OK, great oracle of future storage, just what is it that's going to be the new storage normal?"

Sorry; as someone who doesn't necessarily adhere to journalistic style, I have a tendency to 'bury the lede' and write in a more novelistic manner, building to the climax rather than giving it all away at the beginning.

OK, OK, here it is: DIMM storage and "on board" storage

Now do you understand why I couldn't write "disk" when referring to this type of storage? It is storage, but not disk in the traditional sense of the word. It also doesn't connect to the disk subsystem of a computer; it connects to the memory subsystem. Now, isn't that a more logical place for storage?

If you think about it, what we think of as storage is really just persistent, read-writeable, non-volatile memory. As memory, it should be on the memory bus. That placement puts it logically closer to the CPU and delivers lower access latency. SanDisk announced its new ULLtraDIMM SSDs earlier this year.
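As a software analogy for byte-addressable, memory-bus storage (this is an illustration, not SanDisk's actual ULLtraDIMM interface), memory-mapping a file lets a program treat persistent bytes like ordinary memory: plain loads and stores instead of block I/O.

```python
import mmap
import os
import tempfile

# A 4 KiB backing file stands in for byte-addressable, persistent
# storage; real NVDIMM-style hardware differs, this is an analogy.
fd, path = tempfile.mkstemp()
os.write(fd, b"\x00" * 4096)

with mmap.mmap(fd, 4096) as mem:
    mem[0:5] = b"hello"      # a "store" is a plain memory write...
    data = bytes(mem[0:5])   # ...and a "load" is a plain memory read

os.close(fd)
os.remove(path)
print(data)  # b'hello'
```

The point of the analogy: once storage sits behind the memory interface, persistence becomes a property of the address you touch, not a separate I/O path.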

And storage seems to be the last great bottleneck of enterprise computing, so decreasing access latency certainly seems the logical path to take.

The only better path would be to integrate storage right into the CPU itself—not as cache, but as actual addressable storage. I'm sure that someone will use the Comments section to inform me that such research is underway or that there are currently products available that do this.

My idea is to have a separate CPU or bank of CPUs that do nothing but manage storage. These CPUs would not be part of the computer's thinking capacity, but only part of the storage I/O story. 

Think about it. Intelligent storage. Forget about blocks. Forget about inodes. Forget about file tables. Just imagine truly intelligent storage where the Storage CPU calculates how best to store the files it passes to "disk" and intelligently processes reads and writes, prioritizing them and providing Storage CPU time to handle those processes.
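As a hypothetical sketch of that idea (every name here is invented for illustration, not from any real product), the "Storage CPU" could boil down to a priority queue that services latency-sensitive reads ahead of background writes:

```python
import heapq
from dataclasses import dataclass, field

# Illustrative sketch of a "Storage CPU" scheduler: requests are
# serviced by priority instead of strict arrival order.
@dataclass(order=True)
class IORequest:
    priority: int              # lower number = more urgent
    seq: int                   # arrival order breaks priority ties
    op: str = field(compare=False)
    block: int = field(compare=False)

class StorageScheduler:
    def __init__(self):
        self._queue = []
        self._seq = 0

    def submit(self, op, block, priority):
        heapq.heappush(self._queue, IORequest(priority, self._seq, op, block))
        self._seq += 1

    def next_request(self):
        return heapq.heappop(self._queue)

sched = StorageScheduler()
sched.submit("write", 100, priority=5)  # background flush
sched.submit("read", 7, priority=1)     # latency-sensitive read
sched.submit("write", 101, priority=5)
first = sched.next_request()
print(first.op, first.block)  # read 7
```

Real drive firmware already does something in this spirit (native command queuing, for instance); the speculation in the article is about moving that intelligence up to a dedicated processor with a whole-system view.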

Is it science fiction? Maybe. But tablet computers* were science fiction when Captain Kirk used them back in the 1960s, too. SSDs are real. M.2 storage is real. And DIMM-based storage is real. It's only a matter of time, in my opinion, before we move to intelligent storage systems.

Future techno-archaeologists will find those old spinning disk-based drives and wonder how we ever managed to do the things we did with them—just as we do now looking back on MFM and RLL disk drives. The good ol' days. Not.

*OK, this might be TMI, but I really, really want a hood-mounted phaser on my car. Saving that, I'd take a grill port that fires photon torpedoes. 23rd century technology to solve 21st century road rage. I love technology.

Kenneth 'Ken' Hess is a full-time Windows and Linux system administrator with 20 years of experience with Mac, Linux, UNIX, and Windows systems in large multi-data center environments.

Comments:
  • Congratulations

    You just invented the RAM disk.

    You must've invented time travel too, because I had one of those on my 486-DX33.
    • RAM disks are volatile

      This is non-volatile storage.
      • No kidding

        A RAM disk is only volatile because the underlying memory technology is volatile.

        Volatile or not, it amounts to the same thing: putting your file system in memory. Ken does some handwaving about not needing file tables and such, but the truth is any long-term storage is going to need some type of file system or analogous structure; ergo, it is still a RAM disk.
  • Performance is usually not our top issue

    Performance is rarely the top enterprise issue. What's more important is hardware cost per storage unit and power usage per storage unit. I think what we'll really see is a return to tape-backed HSM, now that tape densities are really taking off and so is data storage volume. Much of this data is accessed only rarely, and a tape cartridge uses no power when not in use.
    Buster Friendly
    • Blu-ray

      Mechanical contact damages the media. In addition, tape access is too sequential and thus very slow. Most likely it is going to be optical storage.
      • Either one will work.

        Which has the lower expense and longer lifetime for data is what will determine the winner.
      • Poor density

        We used to use optical platters in the old days too, but they have poor density and really aren't any faster, because your data may be spread among many platters. We can get 5TB on current tapes with load and seek times usually well under 30 seconds. The next generation tapes promise closer to 200TB per cartridge. That's like 4,000 Blu-ray disks in the palm of your hand.
        Buster Friendly
        • We can get 2 TB on current external disks

          and have seek times of a fraction of a second. Using a Linux-based backup system and rotating through a series of external hard drives with 2TB capacity, we have a nightly backup of data (only about 200GB; we're a relatively small organization). The advantage is a nightly snapshot of what is on the server, and we can restore any file or files for any particular date for more than a year. No tape drive required, no expensive tapes. External drive full, or just time to send the drive for archival storage? Just get a new portable external drive (now about $120) and put it into the rotation.

          It has been several years since I worked for a company that used backup tapes. They were generally reliable. Restoration was usually relatively simple, but usually took several minutes to find the catalog for the tape or to recatalog and perform the actual restore.

          While the server we use is Linux based and could be used to perform the backups, I use a separate PC running Linux to perform the backups over the network. This way, if something catastrophic happens to the server, it might not take the backups with it.

          Bottom line - you have to pick the storage medium, equipment, software, and procedures that will give you the recoverability, flexibility, reliability and the required long-term integrity.
          • You're thinking too small

            We're talking enterprise storage, with thousands of these tapes in silos with offsite rotation. The power consumption of rows and rows of hard drives would be ridiculous. Also, any online storage cannot be considered a backup: if anyone hacks into your system, they can just as easily delete those files. You only really see that kind of exposure at small scale, because you don't have trained risk management people to freak out about it or about the billions in potential losses.
            Buster Friendly
  • A fascinating concept!

    With storage components shrinking ever smaller, the day eventually arrives where they become just another module that plugs into the motherboard. You need more storage, you add more modules. They all plug into the same memory bus as RAM, they just have higher addresses and are nonvolatile. The difference between temporary storage (RAM) and permanent storage is just a matter of addressing. If the speed becomes fast enough and the storage density high enough, one would think it could be done, especially if the memory bus itself has its own dedicated CPU along with appropriate firmware. One memory controller to rule it all.
    George Mitchell
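George Mitchell's "one memory controller" idea can be sketched as a toy model: a flat address space where an illustrative boundary address separates volatile RAM from persistent storage (all names and sizes here are invented for illustration).

```python
import io

class UnifiedMemory:
    """Toy model of one flat address space: low addresses are
    volatile RAM, high addresses are persistent storage."""
    RAM_TOP = 1 << 20  # 1 MiB of "RAM" (illustrative boundary)

    def __init__(self, backing):
        self.ram = bytearray(self.RAM_TOP)
        self.backing = backing  # file-like object standing in for NV storage

    def store(self, addr, data):
        if addr < self.RAM_TOP:
            self.ram[addr:addr + len(data)] = data
        else:
            self.backing.seek(addr - self.RAM_TOP)
            self.backing.write(data)

    def load(self, addr, n):
        if addr < self.RAM_TOP:
            return bytes(self.ram[addr:addr + n])
        self.backing.seek(addr - self.RAM_TOP)
        return self.backing.read(n)

mem = UnifiedMemory(io.BytesIO(b"\x00" * 4096))
mem.store(0x10, b"temp")                    # volatile side: gone on power loss
mem.store(UnifiedMemory.RAM_TOP, b"perm")   # "persistent" side: survives
print(mem.load(UnifiedMemory.RAM_TOP, 4))   # b'perm'
```

In the toy model, as in the comment, whether data is temporary or permanent is purely a matter of which address you write to.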
  • 64 bit address space limitations, here we come...

    Having been around the computer field since before 8-bit memory addressing was standard (yes, some of those computers had 6-bit addressing...), and having seen addressing morph through 16, 18, 20, 31, 32, and latterly the address du jour of 64 bits, with each subsequent step lifting the addressability bar, I'm wondering where we go next. 64-bit addresses won't be enough!
    • well... the first 128 bit addresses have started.

      Even if it is only used within IPv6.
    • Old

      There was 8-bit when I started, then the math co-processor was added, then...
  • TL;DR

    "Storage CPU calculates how best to store files that it passes to "disk" and intelligently processes reads and writes by prioritizing them and providing Storage CPU time to handling those processes."

    It already does a lot of this. Similarly, the mechanical disk controllers before it did a lot of prioritization work in addition to mechanics and analog signal processing, for example native command queuing.

    The main issue is that the interface between the storage processor and the data processor will always remain slow. Originally it was parallel ATA; that was replaced with serial ATA, which didn't have to manage timing differences between multiple data wires. Now it is a parallel memory interface, only now the restrictions on wire length are more severe.

    Still, 150 microsecond read latency is four orders of magnitude slower than access to DRAM (10 nanosecond). To put it in perspective, if memory access was as slow as one second, access to the solid state storage would require four hours. It's like communicating with a base on the Moon vs communicating to Pluto. Or in the days before the telegraph, one day vs 41 years.
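The arithmetic in this comment holds up; a quick check of the quoted ratios:

```python
# Latency comparison using the figures quoted above.
dram_ns = 10          # DRAM access, nanoseconds
flash_ns = 150_000    # 150 microsecond flash read, in nanoseconds

ratio = flash_ns / dram_ns      # 15,000x, i.e. about 4 orders of magnitude
scaled_hours = ratio / 3600     # if DRAM took 1 second, flash takes ~4 hours
scaled_years = ratio / 365      # if DRAM took 1 day, flash takes ~41 years

print(ratio)                    # 15000.0
print(round(scaled_hours, 1))   # 4.2
print(round(scaled_years))      # 41
```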

    You can't simply plug in solid state storage and expect your in-memory database to work just as fast. Undoubtedly, it is faster than today's access to SSDs, so there is some progress.
    • Before ATA

      there was a history of disks going back to 1956 with the IBM RAMAC, and many years of I/O development: ST-506, ESDI, ST-412, SASI ...

      ATA was just one of them, and wasn't that good.
  • genius

    Predicting the future in arrears.

    I wish I could do that, you know, maybe predict gasoline cars are old tech and will be replaced by the electric car in the future.

    oh, wait....
  • pot

    i knew people would go haywire when they legalized pot.

    the drive letter is a pita -- and that will go away
    but the "tree" structure of nested folders is natural to the way we file data

    a storage system might take a different approach and replace the structure concept with just a good search tool, but i don't think that is the best way to go: a good search tool finds the folders where the data has been kept, and as often as not it is helpful to browse through the content of a folder, since related items of interest are likely to be found that way.
  • You say that butttt...

    Slow spinning magnetic disks have one serious advantage: reliability.

    And solid state disks offer zero gains in reliability. So that snail's pace thing you mentioned? Gonna be that way for a while, son.
    • Judging by support cost

      Judging by support contract costs, I would say solid state devices are 3 to 4 times less reliable.
      Buster Friendly
  • You just invented the mainframe...

    Except IBM have already done most of this - 30+ years ago...

    "My idea is to have a separate CPU or bank of CPUs that do nothing but manage storage. These CPUs would not be part of the computer's thinking capacity, but only part of the storage I/O story."

    Yep, let's call them 'I/O processors' - oh, IBM already did that...
    OK, we'll have some and call them 'storage processors' - oh yeah, IBM did those, the 3990 storage controller, and Amdahl too with the 6100 storage processor...

    What goes around comes around. All this proves is that Gene Amdahl and the teams that built the System/360 understood the technology issues a hell of a lot better than today's so-called 'engineers'.
    Lord Minty