
Flash memory heads into the storage mainstream

In the days when it had a low capacity and a short shelf-life, flash memory was a niche product - but now it is growing much faster than expected.
Written by Colin Barker, Contributor

Flash is often seen as a good half-way house between RAM and ROM.

While RAM forgets everything stored when the power is switched off, ROM remembers everything after the power is turned off.

Flash will remember anything you put in it, regardless of whether it is powered or not, with the useful proviso that if you do want the contents erased, you can erase them electrically rather than having to physically remove or replace the chip. That is a handy trait to have, but flash has another extremely useful property: as the name suggests, flash is fast. That comes in useful when your servers want to move information around.
Gartner's Filks: "Between 2013 and 2014 the sales of flash more than doubled."
Photo: Gartner

These days the world of large-scale systems has caught on to just how useful flash can be, and alongside the interest from major vendors - IBM, HP, EMC, NetApp and so forth - a clutch of newer suppliers has sprung up to offer innovative, flash-based solutions.

To find out just how useful flash might be, we talked to Valdis Filks, a director at analyst firm Gartner, to get his take on flash - and on when you should, and perhaps more importantly should not, look at the famous Gartner Magic Quadrant.

ZDNet: What is the outlook for flash?

When we look at technology we have two basic guides - the Critical Capabilities and the Magic Quadrant. The first rule is that if you ever look at products, don't read the Magic Quadrant.

The Magic Quadrant is about companies and vendors and the probable success of a company or a vendor. Now products are 10 percent of the Magic Quadrant so we look to see what companies have vision, execution, marketing and support.

Most IT companies look at the Magic Quadrant and say, "Look, I have this product and it is the best in the world so why don't we have a good position in the Magic Quadrant?" Well maybe we don't believe that the company will succeed, but most people just look at the Magic Quadrant and don't read anything else.

If you ever want to know about products, we always publish the companion paper, Critical Capabilities, which is 90 percent about the product and only 10 percent about the company. In other words, it is the opposite of the Magic Quadrant.

And interestingly enough, the results often seem to be transposed: the smaller companies that don't have high execution usually have the highest ratings for their products.

So if you look at flash products, companies like SolidFire, Kaminario and Pure Storage come up highest in the product rating. But not all of them come up high in the Magic Quadrant.

There you might have HP. Now for flash, HP only have de-dupe [the automatic removal of duplicated data to save space] but they lack compression. IBM has compression but lacks de-dupe. EMC is a massive sales machine so it can execute very well, which is the answer for all of the people who ask 'why is EMC so high [on the Magic Quadrant]?'

In the same way, that is why Pure Storage is so high - they have had a vision and have managed to shape the market.

So when you look at flash, whether as NAND or other memory chips, that is what we call solid-state. We do that because people are coming up with all kinds of memory chips that store data all the time, and we don't like having to change the titles every time the buzz-words change. So we use a title which is more inclusive.
Solid State Storage players in 2014 - the Gartner view.
Source: Gartner

Q: So taking the market as a whole and, for the sake of simplicity, calling it non-volatile storage, how do you see flash developing?

Well, we know that [magnetic] disks are not going to disappear but there is a problem in the industry. Tape is flat. Now if you are storing petabytes of data then you will use enterprise tape. Not DLT, not DAT but the Oracle and IBM LTO-6. You do that because you can have exabytes or petabytes in a corner that use absolutely no power.

So although people say that everything should be disk, if you are going to store things for a long time on disk there is a lot of power and cooling involved. This is long-term storage we are talking about, and long-term is five to ten years or more.

If the price of non-volatile media, which could be NAND or anything else solid-state, is priced right then the future of long term storage is very bright. So I would say that in the very long term, electro-mechanical storage devices will die. But we are talking ten or 15 years. Nothing changes in three to six years.

We believe that you have to go through two refresh cycles for anything in the industry to change significantly. To take an example, if you have a lot of disks that are lasting three to five years and you then decide to buy some flash - that is your first cycle.

Then you really like your solid-state disk, or flash or whatever you want to call it, so when you go to refresh your technology you look at flash again and think about phasing it in. So it is only after two or three cycles that most companies can bring in a complete change in technology, like a move to all-flash.

By way of comparison, if you have two cars in your garage and you want to move to electric, you are not going to change both cars immediately. You have one up for renewal and you change that, but the other still has two or three years on its warranty and you wait until that is up before changing the second one.

So people write things like, "next year flash is going to take over the world" but we know that it just can't happen like that.

Also, you can add to that the fact that people cannot make enough flash, or enough non-volatile memory. That's a small problem that nobody likes to talk about.

Q: Why is that?

The main manufacturers are Samsung, Intel, Micron, [SK] Hynix and Toshiba and there is not enough factory capacity for non-volatile/flash storage to replace the disk volumes.

And if everybody immediately wants flash, then the price of flash will go up and one of the compelling reasons for buying flash will be gone.

Now I have been very much in favour of flash because it is the coming thing, but I also point out to people that if they have very large data volumes then flash is not suitable. Still, the flash market has been developing even faster than I expected - and I am one of the positive ones.

If you look at sales between 2013 and 2014, the sales of flash more than doubled - from $50m to about $1.4bn. Nothing else in the market has increased at that rate.

Q: Why do you think that is?

Well, the performance of servers doubles every two to three years, but disk didn't move for a decade. There was a disparity between the server/CPU and storage. Solid-state drives (SSDs) have been around for decades, but there are different kinds of SSDs. NAND flash drives started coming out about five years ago, and they increase storage performance. In the worst case it improves by about ten times and in the best by about 100 times.

And then you have to understand that performance is not one thing. To understand performance you have to look system-wide. If the CPU doesn't have to wait for data, the servers can get more out of solid-state drives. That means that customers sometimes don't have to buy new servers, or spread the data across many disks and do all kinds of disk-tuning tricks.

And customers also spend less time tuning, administering and doing things like striping array groups.

So flash isn't just about performance, it is also about making storage administrators' lives easier. It makes customers happier because they get better response times, and it makes companies happier because if you are in the cloud or online - shopping, trading, whatever - that is transaction-based, and there is a huge difference. Where you used to be able to do 1,000 transactions a minute, now you can do 5,000. You can just do more. You can sell more tickets, you can get more views and so on.
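
To see why the system-wide view matters, here is a rough back-of-envelope sketch in Python. The per-transaction CPU time and I/O count are purely illustrative assumptions (they are not figures from the interview), chosen so that the disk case lands near the 1,000 transactions a minute Filks quotes above.

# Back-of-envelope model: each transaction spends some time on CPU plus a
# number of storage I/Os, processed one after another with no overlap.
# All figures are illustrative assumptions, not measurements.

CPU_MS_PER_TXN = 10.0   # assumed CPU work per transaction
IOS_PER_TXN = 10        # assumed storage operations per transaction

def txns_per_minute(io_latency_ms: float) -> float:
    txn_ms = CPU_MS_PER_TXN + IOS_PER_TXN * io_latency_ms
    return 60_000 / txn_ms

print(f"disk  at 5.0 ms per I/O: {txns_per_minute(5.0):,.0f} transactions/minute")
print(f"flash at 0.2 ms per I/O: {txns_per_minute(0.2):,.0f} transactions/minute")

Under these assumptions the disk case comes out at roughly 1,000 transactions a minute and the flash case at roughly 5,000; the residual CPU time is what stops the gain from being the full latency ratio.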

And then, at the end of the day, people want to do analytics. They want to see who bought what and what people did during the day. Those number-crunching systems used to take all night long to run but now they have gone from six to 12 hours to less than one hour.

And the big difference is that where you used to run the data overnight, now you might see that something is wrong quickly and you can run it again two or three times. And that is all in the space of one night, not three or four.
And in 2015: HP moves into a leader position and Tegile makes an appearance.
Source: Gartner

Q: Isn't that immediacy one of the big benefits of flash?

It has re-balanced everything. It has reduced the imbalance between server CPU performance and storage, but it hasn't solved it. Flash is still extremely slow compared to the CPU, but it is extremely fast compared to a disk drive.

Q: There is no easy answer to slow storage is there?

If you know that you are storage-bound - that you are constrained by IOPS [input/output operations per second] - then you are, but it is actually latency that we are solving here: it's not megabytes, it's not bandwidth. A good disk system used to [have a] five millisecond [response time]. A fast one was 2ms, most people would get 5ms, and a slow one could be 7ms. But let's say the average is 5ms: if you go from that to an average of 0.2ms with flash, that's a 25 times improvement.

So let's say that you don't really know what you are doing and you are getting a 10ms response time - if you put in a solid state array, you will get a 50 times improvement straight away.
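
The arithmetic is easy to check. Here is a minimal sketch, using just the millisecond figures quoted above; the IOPS line assumes a single outstanding request at a time.

# Latency comparison using the response times quoted above.

def speedup(disk_latency_ms: float, flash_latency_ms: float) -> float:
    # How many times faster a single I/O completes on flash than on disk.
    return disk_latency_ms / flash_latency_ms

def iops_single_stream(latency_ms: float) -> float:
    # Upper bound on IOPS with one request outstanding at a time.
    return 1000.0 / latency_ms

flash_ms = 0.2
for disk_ms in (5.0, 10.0):   # the well-tuned average, and the 10ms "badly tuned" case
    print(f"{disk_ms} ms disk vs {flash_ms} ms flash: "
          f"{speedup(disk_ms, flash_ms):.0f}x faster per I/O, "
          f"{iops_single_stream(disk_ms):.0f} -> {iops_single_stream(flash_ms):.0f} IOPS per stream")

That reproduces the 25 times and 50 times improvements Filks describes.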

And this is the source of the debate in the industry. Many people just put solid-state disk inside traditional storage arrays and they just go fast. And storage arrays today can handle about three different kinds of disk: 7.2K RPM, 10K RPM or 15K RPM. Now they can have SSDs as well.

Most storage arrays in use today, that most vendors have, were not designed for solid state. And the way you write data and distribute data across solid state is different from when you write data on disks.

So just taking solid-state disk and plonking it in an existing storage array can solve problems, but not to the extent that they can be solved with what we call a dedicated solid-state array, which has been designed from scratch to do only solid state.

Another thing is that if you have a disk array and your controller is busy already, it may well not be able to handle the throughput of the solid-state drives. So lots of vendors who don't have flash arrays say that everything is fine like that, because they don't have a flash array to offer.

Now you can add to that another interesting thing that is going on here. Flash arrays were designed to lay out data differently and often to do data reduction, which is compression and de-duplication of the data coming in. You look at Pure Storage, SolidFire, Kaminario, Tegile and so on.

So if, to take an example, a lot of people show up for a presentation and you then store the presentation material for all the attendees, you have a lot of identical data, and these software systems de-dupe it so it is only stored once, which is an immediate saving.

And then some of them only compress, some compress and then de-dupe, some de-dupe and then compress, and so on. What is interesting is that in the storage performance business there is an old saying: 'the fastest write is no write'. So if 1,000 people go to write a piece of data and your system already has that data, then you do not do a write [of the data] and your response time is extremely fast.
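
Here is a minimal sketch of that 'no write' idea, using a toy content-addressed block store. It illustrates the general technique only, not any particular vendor's implementation.

import hashlib

class DedupingBlockStore:
    # Toy in-memory store: identical blocks are physically written only once.

    def __init__(self):
        self._blocks = {}        # fingerprint -> block data (the "physical" copy)
        self.writes_avoided = 0  # counts the "fastest write is no write" cases

    def write(self, data: bytes) -> str:
        fingerprint = hashlib.sha256(data).hexdigest()
        if fingerprint in self._blocks:
            # Duplicate block: no physical write, so no extra flash wear either.
            self.writes_avoided += 1
        else:
            self._blocks[fingerprint] = data
        return fingerprint       # the caller keeps the fingerprint as its reference

    def read(self, fingerprint: str) -> bytes:
        return self._blocks[fingerprint]

store = DedupingBlockStore()
handout = b"the same presentation slides saved by every attendee"
refs = [store.write(handout) for _ in range(1000)]
print(len(store._blocks), "physical block stored;", store.writes_avoided, "writes avoided")

The same bookkeeping is why de-dupe also helps with wear, as Filks explains below: the 999 duplicate writes never touch the flash cells at all.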

Then there is another issue. Three or four years ago everyone was worried about the reliability of flash, because the cells in flash memory wear out. So the manufacturers put algorithms inside the solid-state controllers so that the software was tuned accordingly and would do the minimum number of writes.

But if you are doing de-dupe, and you write a block once and then 1,000 more writes come in for the same block, you do not have to write to the block that has already been written, so you are not actually causing any wear.

So the reliability of solid-state arrays or SSDs to date is better than expected. The performance is as expected but the sales are even higher than we expected. Now you have to bear in mind that I was one of the optimists but the sales of these products have been even better than I expected.

Also, a lot of these companies in the specialist SSD market have been around for three or four years, and in my book that is mature enough. Most of the other major vendors - the IBMs, HPs and EMCs - were all late to the market. And when you look at some of these mature players, a lot of them don't even have the features of the smaller players.

Many of the traditional vendors who said that flash arrays were not the way to go and [so] you should go for hybrids are the sort of people who just don't have anything to sell. But then slowly, every year, these companies start producing dedicated flash arrays.

And it is now the case that companies call up the vendors and say, 'I want to talk to you about flash arrays; I don't want to talk to you about hybrid arrays or disk arrays with SSDs mixed in.'

Further Reading:

Samsung announces 16TB SSD

Diablo's flash DIMMs attack DRAM

The case against SSDs
