Will 2011 bring an enterprise SSD adoption breakthrough?

Summary: As I've written numerous times here in this blog, storage consolidation plays a vital role in the green IT movement, and solid state storage is often held up as one potential actor that could play a leading role here. That's mainly because of its size and associated energy consumption/reduction patterns.

As I've written numerous times here in this blog, storage consolidation plays a vital role in the green IT movement, and solid state storage is often held up as a technology that could play a leading role, mainly because of its small size and low energy consumption. The drawbacks of solid state drives (SSDs) have been, of course, not just their cost but the way that operating system software addresses them. It's the age-old problem of complex storage management.

A relatively young start-up called WhipTail Tech (hailing from Summit, N.J.) is trying to address this concern. Its answer is the Racerunner operating system, which powers its SSD-based Virtual Desktop and Datacenter XLR8r appliances. (Those products start at $49,000 for 1.5 terabytes of capacity.)

When I spoke with two executives from WhipTail Tech last year, they noted that Racerunner supports faster write operations on its SSDs. It also lays out data blocks differently than other SSD technology does, writing fully into each cell before writing into other cells. This makes SSDs more logically addressable than was previously possible. Racerunner also supports in-line data deduplication and compression.
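
A minimal sketch of that general approach, purely as illustration and not WhipTail's actual code, might buffer small incoming writes and commit them one full, aligned flash block at a time, so no block is programmed more often than necessary. The class, sizes, and flash interface below are all assumptions:

    # Illustrative sketch: coalesce small writes into full, aligned flash
    # blocks. Sizes and names are assumptions, not WhipTail's design.
    BLOCK_SIZE = 512 * 1024  # assume a 512 KB native flash block

    class WriteCoalescer:
        def __init__(self, flash):
            self.flash = flash       # assumed object with write_block(index, data)
            self.buffer = bytearray()
            self.next_block = 0      # always advance; never rewrite a live block

        def write(self, data: bytes):
            """Queue a small random write; flush only full, aligned blocks."""
            self.buffer.extend(data)
            while len(self.buffer) >= BLOCK_SIZE:
                chunk = bytes(self.buffer[:BLOCK_SIZE])
                del self.buffer[:BLOCK_SIZE]
                # One sequential, block-aligned program operation instead of
                # many partial writes, each of which would cost an erase.
                self.flash.write_block(self.next_block, chunk)
                self.next_block += 1

    class FakeFlash:
        def write_block(self, index, data):
            print(f"programmed block {index} ({len(data)} bytes)")

    coalescer = WriteCoalescer(FakeFlash())
    for _ in range(300):
        coalescer.write(b"x" * 4096)  # 300 x 4 KB random writes
    # -> only two full 512 KB blocks are programmed; the rest stays buffered

Because programming a flash cell requires erasing its whole block first, committing only full sequential blocks avoids the read-modify-erase-write churn that makes naive random writes slow.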

So, why is this green? Because WhipTail Tech contends that one of its Datacenter XLR8r appliances can replace six fully loaded racks of hard drives. That's a substantial savings in terms of energy consumption, not just to run the storage devices themselves but to keep them cool.

Maybe it is best to let a WhipTail customer do the talking. One of the company's accounts is Finkelstein & Partners, which is using a WhipTail 1.5 terabyte Virtual Desktop XLR8r Hybrid configuration. The law firm was able to get the WhipTail technology for the same cost as a 14 terabyte shelf it had been considering. In the press release, the company quotes the firm's senior system administrator on why he chose the WhipTail option:

"For every virtual machine that we house, we need to allocate a specific amount of disk. If we have a virtual machine with 40 gigabytes of space, we need to have that 40 gigabytes of space available on our SAN. That said, multiply by, say 50 virtual machines, you're up to 2 terabytes of storage. With the XLR8r appliance and deduplication, we were able to mitigate that to a major degree. We're talking going from 2 terabytes of traditional storage to about a half terabyte on XLR8r [running all the same stuff] because of the deduplication and compression ratio we're getting with XLR8r."

Here's the complete case study.

As I mentioned right at the beginning of this post, while some enterprise businesses are considering SSDs as a green IT strategy, the cost and management issues associated with them have been a dealbreaker. WhipTail is trying to convince companies to take a deeper look at SSDs, which could help inspire broader adoption across the entire enterprise SSD space. My gut is that 2011 will bring much more activity, not just because of innovators like WhipTail but because Hitachi is going to make a play for this space with technology it developed in conjunction with Intel. That technology was recently shown off at the Consumer Electronics Show, and it is supposed to ship in volume in the first half of 2011. Hitachi's products will compete head-to-head with products from SandForce and Toshiba, and they could help open up this market.

Talkback

  • Mechanical drives need to go now...

    My wish is to see all of the mechanical drives go to
    the trash can.

    Why this technology was not implemented 5 years ago,
    I will never know.

    I love it when people say 'disk space is cheap'...

    Well, mechanical hard drives are all the same: they
    start out fast and degrade with time.

    I would rather have older hardware with an SSD...
    open_source_01
    • RE: Will 2011 bring an enterprise SSD adoption breakthrough?

      @open_source_01

      The only thing I'm concerned about is, don't SSDs only have so many read/write cycles, similar to flash drives?
      The one and only, Cylon Centurion
      • Unlimited reads; MLCs have 10,000 writes per block, SLCs have 100,000

        @Cylon Centurion 0005

        May not seem like much, but SSDs have smarts to spread the wear around and usually spare capacity to use if some blocks become unwritable.

        Note that data is never lost; it's just that new data cannot be written to exhausted blocks.

        For consumers, MLC SSDs will probably never reach write-cycle exhaustion on any blocks.

        However, enterprise servers, especially for databases, may have large amounts of writes, and so tend to use SLC SSDs with extra spare blocks.
        Patanjali
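
        Plugging Patanjali's 10,000-cycle figure into a rough lifetime estimate shows why; the drive capacity, daily write volume, and write-amplification factor below are assumed values, not vendor numbers:

            # Rough endurance estimate using the 10,000 P/E cycles per MLC
            # block cited above. Capacity, write rate, and amplification
            # are illustrative assumptions, not vendor figures.
            capacity_gb = 256          # assumed drive capacity
            pe_cycles = 10_000         # MLC program/erase cycles per block
            writes_gb_per_day = 50     # assumed daily host writes
            write_amp = 2.0            # assumed controller overhead factor

            # Ideal wear leveling spreads writes over every block, so total
            # endurance is capacity * cycles, consumed at the amplified rate.
            total_gb = capacity_gb * pe_cycles
            days = total_gb / (writes_gb_per_day * write_amp)
            print(f"~{days / 365:.0f} years before any block goes read-only")

        At desktop write rates the blocks outlive the machine; a database server writing terabytes a day shrinks that figure dramatically, which is why such servers tend toward SLC.
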
  • I hope so

    Adoption needs to increase so the prices can decrease.
    The one and only, Cylon Centurion
  • Cons and Pros

    Here are my concerns about SSDs, especially enterprise-class arrays:

    They cost 10x as much. Even with 400% better storage efficiency through data deduplication and compression (technologies that are also available for traditional "spinning disk"), the cost is approximately $8000 per terabyte or $8 per gigabyte.

    Cylon Centurion has already brought up the issues of write cycles (you can read an SSD roughly forever, but you can only write to a particular segment of data so many times before it becomes read-only).

    SSDs are too new to have any factual, usage-based data to back up their MTBF (Mean Time Between Failures) numbers. We have an idea of how long they are going to last, but no hard facts. I'd like the technology to have a little history before I go to bat for spending 10x as much as competing technology.

    What happens when an SSD fails? For example, with a standard RAID5, RAID6 or RAID10 array, I switch to a hot-swap drive (and deal with some performance degradation) or swap out the bad drive with a cheap replacement. How much is it going to cost when an SSD fails?

    It sounds like I'm dead set against the technology, but nothing could be further from the truth.

    I yearn for the day when all storage, temporary (i.e., RAM) and persistent (i.e., disk), is merged into a single entity and a single technology. It would increase scale and lower costs.

    It would be wicked fast. There would be no need to read data from disk and write to memory... it would already be there, effectively reducing persistent storage I/O to zero.

    Not only could persistent storage be virtualized and dynamically reprovisioned among different systems, RAM could be virtualized and dynamically reprovisioned among different systems. Think of it as a RAM SAN instead of a DISK SAN.

    But the technology isn't there yet.
    Marc Jellinek
  • WhipTail Response

    @cyclon - yes, the number of writes to flash drives is limited. WhipTail does two major optimizations to manage how writes are committed to the flash, which eliminates concern over both the write performance and the endurance of the drives.

    First, we use a tiny buffer to align writes to the native write block size of MLC flash - this enables us to actually write FASTER than we read (250k vs. 200k IOPS, respectively).

    Second, we linearize the writes across the array to minimize how often a write is committed to a certain cell. There is a lot more to this (algorithms, defrag processes, etc.), but the gist is that, thanks to the two optimizations above, WhipTail is able to A) utilize MLC-based flash to reduce cost, B) guarantee the drives will last 7+ years even under a full, daily write workload, and C) write faster than we read.
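
    A toy sketch of that linearization idea (an outside illustration with assumed structure, since the actual algorithms aren't public in this thread) is to steer each full block to the least-worn drive, so no cell is rewritten more often than any other:

        # Toy wear-spreading allocator: append each full block to whichever
        # drive has consumed the fewest program/erase cycles so far.
        # Illustrative assumption, not WhipTail's actual algorithm.
        class ArrayLinearizer:
            def __init__(self, num_drives):
                self.wear = [0] * num_drives  # P/E cycles used per drive

            def place_block(self):
                drive = self.wear.index(min(self.wear))  # least-worn drive
                self.wear[drive] += 1
                return drive

        lin = ArrayLinearizer(num_drives=24)
        for _ in range(100_000):
            lin.place_block()
        print(max(lin.wear) - min(lin.wear))  # 0 or 1: wear stays even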

    And to clarify Patanjali's response - where our intellectual property, via the Racerunner Operating System, excels is in executing these optimizations at the array level across all drives, which eliminates the need for overpriced SLC-based SSDs.

    @Marc - valid points. Regarding cost, based on the volume of the order, WhipTail's MSRP on our 12TB units gets down to $16/GB - so, while still more expensive per GB than 15k HDD, it's a far cry from the $100/GB that people are used to seeing out there - and remember, this is for a fully functioning SAN/NAS appliance, not just drives.

    Also, $$/GB is not the right cost metric for SSD. When you are evaluating SSD, it's cost/IOPS. Where traditional storage arrays cost around $6-$8/IOPS, WhipTail is as low as $0.19/IOPS. So, for example, if you're looking to scale 5,000 virtual desktops, where you will need 100,000 - 300,000 IOPS just to get off the ground, you are looking at SSDs being 10% of the cost of traditional storage with 95% less power, cooling, and rack space.

    Ditto for database workloads, and deduplicating/accelerating Cloud environments.
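
    To see why the metric matters, plug the figures quoted above into a quick comparison; the per-spindle IOPS value is an assumed ballpark, not a number from this thread:

        # Cost-per-performance comparison using the figures quoted above.
        # The 15k-RPM per-spindle IOPS is an assumed ballpark (~180).
        required_iops = 100_000           # low end of the VDI example above

        hdd_cost = required_iops * 7.0    # midpoint of the $6-$8/IOPS quoted
        ssd_cost = required_iops * 0.19   # WhipTail figure quoted above
        print(f"HDD array: ${hdd_cost:,.0f}  SSD array: ${ssd_cost:,.0f}")

        # Capacity aside, hitting 100,000 IOPS on 15k spindles alone means:
        print(f"{required_iops / 180:.0f} drives just for performance")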

    So, is SSD ready to completely replace HDD across the entire datacenter? No, but wherever you are overprovisioning traditional storage for performance gains, it's time to make the leap to SSDs. And when you look at cost, endurance, and write performance - WhipTail is the only company to overcome all three of those obstacles.

    And regarding track record, WhipTail has been powering high-performance, mission-critical workloads in Fortune 500 datacenters for over three years. Our customers are more than willing to be a reference.

    The technology is not only there, it's here -- the only question is "how important is instantaneous storage performance to the competitive advantage of your business?"

    After you answer that question, the TCO, endurance concerns, and write performance concerns can be worked out quickly.

    Feel free to let me know if you have any further questions.

    Ryan Snell (rsnell@whiptailtech.com)
    rsnell
    • Response to WhipTail

      @rsnell

      Thank you for the detailed response. Right now I see SSD as a point solution to provide a performance boost where a system is bound by disk I/O.

      I do take exception to the statement "$$/GB is not the right cost metric for SSD... it's cost/IOPS".

      I'll chalk it up to your enthusiasm for your company's product. I'd find the following statement more acceptable:

      "$$/GB is not the <u>only</u> cost metric for SSD, you should also take $$/IOPS into consideration. The costs of increasing IOPS by spanning across multiple spinning disks may be offset by the higher IOPS rating of SSD media".

      My suggestion is a bit dense... but you get the idea. Your example of 5000 virtual desktops (is anyone actually doing that?) is a pretty good illustration.

      A better example would be something that companies need right now. For example, a data warehouse may reach its performance goals by striping data across large numbers of small 15K SAS or FC drives (also very expensive), which require large numbers of drive shelves to accommodate them.

      Compare this against the same volume of data stored on your storage devices. The device count may go way down because you don't have to throw large numbers of spindles at the problem, and don't have to manage the large number of files/filegroups (to use SQL Server-oriented terminology) when delivering the solution.

      Your write performance should blow away what's available when writing to spinning disk, winning you the gratitude of the DBAs, and your read performance would make you a hero to the end-users.

      It should be an easy win with immediate benefits to the customer.

      This would be a low-risk proof of concept that you could throw at your customers. There are many companies with a data warehouse that isn't fast enough to satisfy end-users. Compare that with the number of companies with 5000 Virtual Desktops.

      My concern with this scenario is the number of write cycles. Data warehouse wipe-loads become very appealing with your write performance, but I'd be concerned about degradation after a number of full wipes (one large set of writes) and reloads (another large set of writes).

      My statement "But the technology isn't there yet" was in reference to using silicon for both RAM and persistent storage; not for using SSDs for persistent storage only.

      SSDs are a fantastic technology for persistent storage; I just haven't reached a comfort level with them to recommend them as anything other than a point solution for overcoming storage I/O bottlenecks on the datacenter/server side, or for the power/durability side of portable devices.

      I look forward to the day when they are appropriate for more general use.
      Marc Jellinek
  • RE: Will 2011 bring an enterprise SSD adoption breakthrough?

    How would enterprise SSDs perform, in terms of endurance and data retention, in a typical daily tape storage server environment (for example, a customer doing daily/weekly saves to a tape drive)?
    LG2310
    • RE: Will 2011 bring an enterprise SSD adoption breakthrough?

      @LG2310

      Endurance: no one really knows. An LTO-4 tape lasts roughly 10 years or so. SSDs haven't been around long enough to compare. Theoretically, they should last a long time, because the problems identified so far turn a read/write device into a read-only device. I haven't heard anything about read degradation.

      Cost: 200 times more expensive. An LTO-4 tape will store 800 GB (uncompressed) and costs approximately $30 (about $0.04 per GB). An SSD will cost $8 per GB.

      When I do backups (filesystem and database), I write the backup files to spinning disk, then back up the resulting files to tape. This speeds up the backup process and effectively uses the disk as a cache. The limit to the speed usually isn't the spinning disk; it's the network connecting the server I'm backing up to the disk I'm using to cache the backups. Replacing the spinning disk with SSD wouldn't improve anything unless I removed the network (installed SSDs directly in the servers and did the initial backup to that disk).
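
      A quick sanity check on that bottleneck, using assumed ballpark throughputs rather than measurements:

          # The backup pipeline runs at the speed of its slowest stage, so a
          # faster cache device cannot help while the network is slower.
          # All throughput numbers are assumed ballparks, not measurements.
          network_mb_s = 110   # ~1 Gb/s Ethernet after protocol overhead
          disk_mb_s = 150      # assumed sequential write rate, spinning disk
          ssd_mb_s = 250       # assumed sequential write rate, SSD

          print(f"disk cache: {min(network_mb_s, disk_mb_s)} MB/s")
          print(f"SSD cache:  {min(network_mb_s, ssd_mb_s)} MB/s")  # identical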

      The 200x cost penalty would be prohibitive even if I used one SSD... and using hundreds would be insanely expensive.
      Marc Jellinek