Moshe Yanai has an impressive track record in the storage industry. Some 20 years ago, while working at EMC, he developed the Symmetrix storage array; from there he moved on to found the storage array company XIV, which was sold to IBM.
He's now come out of retirement to work on another new project, a high-end enterprise storage company called Infinidat.
The hottest innovation in storage currently is flash memory. It seems that everybody in the IT storage world is talking about flash, a technology that offers fast storage at a high cost when compared to conventional storage.
Now Infinidat says that you can have the best of both worlds. The company's storage arrays combine conventional storage with a tincture of flash to offer flash-class performance without flash-level cost: 1m+ IOPS and 99.99999 percent reliability -- seven nines, where the industry standard is five.
ZDNet talked to Yanai and his enterprise architect Chip Elmblad to find out more.
ZDNet: What tempted you out of retirement to go back into the storage business?
Yanai: I have been in the business for, I think, 40 years. I started off with CDC in 1975 on disk components and my first component was 2.5MB. Of course, today, in the same size, we have five petabytes.
What I find in this industry is this phenomenon: the cost of storage is decreasing, and companies want to take advantage of all the data available, so IT expenses have become critical. But then we find there is a shortfall -- a shortfall in what you can do with the information, in the industry's ability to utilise this vast amount of data.
I recently attended an IDC talk where they were saying that the amount of data we can now store is so vast that we are only able to use five percent of it. When I came out of retirement five years ago, they were saying that there was no real solution for this.
So I asked myself: how can it be resolved? There are a few problems here. One approach is flash, but that's expensive. Then you can do things like software-defined storage, but the issues are always the same: one is availability and the other is cost.
One solution is to design a system specifically to deal with big data, really big data, multi-petabytes of data. It has four attributes that make it unique.
The first is the ratio of disks to servers. If you look into any "white box" or other server, you will see that it might support eight drives or some other small number. We need to get to a situation where one server, with triple redundancy, can support hundreds of drives. With us, one server can support many, many drives. That is one advantage for us.
The second is that you don't need flash. We can utilise flash drives for caching but most of our system can be spinning drives, which are cheaper. Flash can be more expensive by a factor of 10.
Most companies use expensive flash for speed. We can get the speed without needing flash. We use standard drives.
Then there is efficiency. We have 72 percent efficiency of data before full compression. Some people will talk about the cost of flash going down but they do it without talking about effective capacity.
ZDNet: Why did you get into flash?
Yanai: Flash is for speed... but flash is expensive. A lot of marketing today talks about the price of flash going down but that's not correct. Now with the sort of drive we are using, we can be at least 10 to 20 times faster [than flash].
Chip Elmblad, Infinidat enterprise architect: What we are seeing is that the requirements of a typical enterprise customer -- say, a financial institution building out a next-generation datacentre -- are converging with those of a cloud provider, be it an MSP (Managed Service Provider) or an emerging cloud service. A lot of their services are being pushed down into the datacentre, and a lot are being pushed into the cloud, whether private, public, or hybrid.
A second trend is a commensurate shift in consumption models, from pure capex to opex. Then there is pressure to control costs all around: whether it is the cost of mundane things like floor tiles or power, the accent is on reducing costs. IT managers need to know metrics like price-per-gigabyte-per-month and watts-per-terabyte. We see that people are not concerned about GUIs so much as KPIs (Key Performance Indicators).
They want to programme all their provisioning into a tool that they have developed themselves. The end result is to lower the ratio of full-time employees to devices managed. Where we used to talk about megabytes, now we are talking about multiple petabytes being managed per employee, to reduce costs.
So the amount of data being managed is rising and rising, as it is driven by things like the Internet of Things. The IoT is growing exponentially. All that is outpacing the data-reduction technologies.
We think the trend in 2017 is the rise in "hyper storage". That is ultra-high density, enterprise-class, software defined, flexible storage for the cloud and enterprise data centres.
One of the main characteristics of that will be that it will run on any hardware -- it does not have to be proprietary, or hard wired into a particular model. People want to see a hardware stack that can adapt to any media type.
Now AFAs (All-Flash Arrays) are there for performance, and they have characteristics that you must program around for reliability, scale, and so on. But that's somewhat easy: you put in a media type that has sub-millisecond response time, that has performance. What Moshe's team has done is de-couple performance from the underlying media -- and it is persistent media.
Now in the past if they wanted performance, people put the whole array in flash. But now using our technology we know that we can increase your cache-hit ratio to 80 or 90 percent, regardless of the workload, and that hit rate gives us an advantage over all-flash arrays.
What we are finding is that in the field, our average cache-hit ratio is about 80 percent for our customers, regardless of workload.
Yanai: Another important feature is the ability to cut the data into smaller chunks. We cut it into 64K chunks, so the drives work together to serve any request immediately. And when we write to any disk, we do so sequentially.
The combination of this efficient use of the drives with the caching means that in most workloads, we are faster than all-flash arrays.
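The chunking-plus-sequential-write idea described above can be sketched in a few lines. This is a minimal illustration of the general technique, not Infinidat's implementation; the class and names are invented for the example:

```python
# Sketch: split incoming data into fixed 64K chunks and spread them
# round-robin across drives, appending to each drive so every drive
# only ever sees sequential writes.
CHUNK_SIZE = 64 * 1024  # the 64K chunk size mentioned in the interview

def chunks(data: bytes, size: int = CHUNK_SIZE):
    """Yield fixed-size chunks of data (the last one may be shorter)."""
    for off in range(0, len(data), size):
        yield data[off:off + size]

class DriveSet:
    """Round-robin, append-only placement across a set of drives."""
    def __init__(self, n_drives: int):
        self.logs = [bytearray() for _ in range(n_drives)]
        self.next = 0

    def write(self, data: bytes):
        for piece in chunks(data):
            self.logs[self.next] += piece           # append = sequential I/O
            self.next = (self.next + 1) % len(self.logs)

drives = DriveSet(4)
drives.write(b"x" * (256 * 1024))        # 256K -> four 64K chunks, one per drive
print([len(log) for log in drives.logs])  # [65536, 65536, 65536, 65536]
```

Because a read of any large object touches many drives at once, the drives "play together" on each request, and because each drive only appends, the slow random-seek behaviour of spinning disks is largely avoided.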
Elmblad: And with our system we need to have the capacity and size of systems like those at the Amazons of this world and the Googles -- and we do. We must, because that is what a hyper-storage system is.
Now we had a range of design goals for our Infinibox system. One is unmatched reliability: we designed for seven nines of reliability (99.99999 percent). There are multiple n+2 pieces within the system, so for any component that might have an impact on availability, there are at least three.
We can suffer a failure of two disk drives without any issues. Not only that but we can rebuild all that data -- 16 terabytes -- in 15 minutes or less. That lowers our exposure and it gives the customer a much higher confidence in the systems.
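It is worth sanity-checking what those two figures imply. The arithmetic below is a back-of-the-envelope check of the numbers quoted in the interview, using decimal units as an assumption:

```python
# Back-of-the-envelope check of the quoted figures.
SECONDS_PER_YEAR = 365.25 * 24 * 3600

# Seven nines of availability permits only about 3 seconds of downtime a year:
downtime_s = (1 - 0.9999999) * SECONDS_PER_YEAR
print(round(downtime_s, 1))   # ~3.2 seconds per year

# Rebuilding 16 terabytes in 15 minutes implies an aggregate rebuild rate of:
rate_gb_s = (16 * 1000) / (15 * 60)   # decimal TB assumed, result in GB/s
print(round(rate_gb_s, 1))    # ~17.8 GB/s across the whole system
```

That rebuild rate is far beyond any single drive, which is why it depends on the whole drive population participating in the rebuild rather than a single hot spare.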
There are some tremendous software innovations that Moshe's team has put in to achieve that. To give you one example, they developed their own RAID schema. It's not RAID 6, it's not a system to resolve a failure through algorithms, but it allows us to fail any two drives and rebuild them very quickly.
Elmblad: Now on performance, there are hyper-scale systems out there -- Amazon, Facebook, and so on -- and they have large capacity, but they also have high latency, and you just can't do that if you are going to actually deliver hyper storage. You have to have performance that is equivalent to the AFAs out there.
Now there are a number of ways that we achieve that. Take our tree structure: when data comes in, it goes into cache -- there is a terabyte of DRAM in the system, and 110TB of SSD which we use as a read cache.
So all of the negatives you hear about -- like flash cells losing their charge, and with it data, over time, or that with SSDs you can't do too many writes because writes wear out the flash chips -- don't affect us. You don't have to worry about any of it, because it's a read cache.
We use SSDs to serve up data to the applications, but because they are only a read cache, if one of those SSDs fails we just throw it away. It has no impact on our reliability.
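The property being described -- that the cache tier can fail without losing data, because the backing store always holds the authoritative copy -- is the defining behaviour of a read-through cache. A minimal sketch, with invented names rather than Infinidat's API:

```python
# Sketch of a read-through cache: reads are served from cache when possible,
# but the backing store is always authoritative, so losing the cache device
# costs performance, never data.
class ReadCache:
    def __init__(self, backing: dict):
        self.backing = backing   # authoritative data (the spinning drives)
        self.cache = {}          # expendable tier (the DRAM/SSD read cache)

    def read(self, key):
        if key in self.cache:
            return self.cache[key]      # cache hit
        value = self.backing[key]       # miss: fall through to disk
        self.cache[key] = value         # populate for next time
        return value

    def fail_cache_device(self):
        """Simulate an SSD failure: discard the whole cache."""
        self.cache.clear()

store = ReadCache({"a": 1})
assert store.read("a") == 1    # first read populates the cache
store.fail_cache_device()      # "we just throw it away"
assert store.read("a") == 1    # data still served from the backing store
```

Write endurance and retention worries about the SSDs become irrelevant under this design, because nothing in the cache is ever the only copy.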
Yanai: And that is the crux of it. The type of drive we use is irrelevant to the performance of the system. We use best-of-breed, whatever is out there. If a better drive comes along, we can use that. It is irrelevant to us, and it does not impact our performance or our reliability.
Today flash is an extra cost, so why pay extra cost if you don't need to?
ZDNet: You say that your system can operate at very high performance and very high reliability, but claims like that make IT managers sceptical. How do you get past that?
Yanai: We know the real world for IT managers. We know what the priorities are: 'If I lose performance, it is going to hurt me, but if I lose data, they are going to fire me.'
So we used two things, and as always with these things, you start at the bottom and climb the ladder. We built this system with the big customers, the Fortune 500, in mind.

So we need two elements. One is a valid proposition. Ours is that we are not going to save you 10 or 20 percent; we are going to save you 90 percent.
Now this is one thing we offer but that is not sufficient. The second thing we offer is a relationship. People know me and they know about my past, about EMC, about IBM, so I need to use that to get them to do a proof of concept, but even to get them there, they need to feel safe.
So then we can get them to try it, starting with something basic like backup. And then we have the references from the big users.
That is how we got started, but it is much easier now. Our customers include BT, Brightsolid, Raymond James Investment, Hawai'i Medical Service Association, and TriCore Solutions.