Can hyperconvergence simplify storage?

Q&A with Ron Nash, CEO of storage company Pivot3.


"In the future we are going to have a long-term view where there are multiple public clouds and just millions of private clouds and they will be hyperconverged," says Pivot3 CEO Ron Nash.

Image: NBC

The first market for storage company Pivot3 was in video surveillance, but now the company is better known for its 'hyperconverged infrastructure', or integrated compute, storage, networking, and virtualization technology. It currently claims 2,000 customers and over 18,000 deployments.

ZDNet recently spoke to company CEO Ron Nash to find out about the company's past, present, and future.

Pivot3 has been around since 2003. Did you start it?

No, I was an early investor. Most of my career was as an operating executive, rather than an investor, but I decided I wanted to go back to the operating side.

In 2013 I came on as CEO. Before that I was a non-executive investor.


Pivot3 CEO Ron Nash: "The founders went 0-for-5 with the storage guys who killed that idea, so what did they do? They founded the company."

Image: Pivot3

As a company, we took on something that was really difficult, and it took five years to get the first product out.

Now this was unusual, to have a big, technical product that took five years to become a commercial product. The founders were all technical and they had a vision of how to radically change something.

They had told me that they could get this done in two years, but it took five.

They were the storage team out of Compaq Computer and became part of HP [when HP bought Compaq in 2002] and they had done seven generations of storage products for those companies.

They had an idea of how to do this in software, and I invested in it. It was a very powerful idea, but we had a 35-man development team working on it for five years and we had no product, no customers, nothing.

But when they broke through and did it, it became a great success and is still driving us today.

What was the breakthrough?

When you think about storage products, they all have the same design, and the same components. The drives are commodity-priced -- Hitachi or Western Digital, or whatever. They use RAID software on top of those and then they build a controller, which is the pipeline in and out of the storage. That controller had a couple of hardware pieces and usually a couple of chips. How you do all that determines the performance and the price.

Now there are two important things this controller needs that decide the performance of the drive: number one is speed and number two is reliability.

It is easy to get reliability in hardware. If you want reliability for enterprise you just make two or three copies of something and you always have back-up. So what they did was, they looked at the clock speed of the Intel processors and they said, 'you know, everything that we do with controllers in hardware we can run fast enough. The whole challenge is how do we get the reliability'. That is what took five years.

They did what, in computer science terms, is called erasure coding -- a superset of RAID. Now the simple way to reliably store information and protect it is to write it multiple times in multiple places.

In enterprise-class systems, to ensure reliability you had to store it three times in three different places, so you needed three times the physical storage space.

What erasure coding does is allow, at a minimum, 65 to 70 percent of the physical storage to hold unique data, and at a maximum slightly over 90 percent.

It's a linear-type algorithm, and nobody had ever done that across appliances, across servers, in a distributed manner. People had done it centralised inside one server, but nobody had ever done distributed erasure coding -- and that's the fundamental breakthrough that they had.

So the comparison with our competitors is that they will use twice the amount of storage that we will for a similar amount of work just because they are writing things multiple times to protect it.
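The replication-versus-erasure-coding trade-off can be illustrated with the simplest possible erasure code: single XOR parity, as used in RAID-5. This is only a toy sketch, not Pivot3's distributed implementation (which tolerates more failures than one), but it shows why parity-based schemes need far less raw capacity than triple replication:

```python
def xor_parity(blocks):
    """Compute one parity block as the byte-wise XOR of equal-length data blocks."""
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            parity[i] ^= byte
    return bytes(parity)

def rebuild_missing(surviving_blocks, parity):
    """Rebuild the single lost data block: XOR the parity with all survivors."""
    return xor_parity(list(surviving_blocks) + [parity])

# Four data blocks plus one parity block: 4/5 = 80% of the raw capacity
# holds unique data, versus 33% with triple replication.
data = [b"AAAA", b"BBBB", b"CCCC", b"DDDD"]
parity = xor_parity(data)

# Simulate losing the third block, then rebuild it from the rest.
recovered = rebuild_missing([data[0], data[1], data[3]], parity)
assert recovered == b"CCCC"
```

Production erasure codes such as Reed-Solomon generalise this idea to multiple parity blocks, which is how schemes can survive several simultaneous failures while still keeping most of the raw capacity as unique data.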

As a company, you have been around for some years but you are not well known. Why do you think that is?

That's because the company's original plan was to license this software to server manufacturers.

The founders never thought they were going to build this company, they just thought that they were going to license [the technology].

The server manufacturers all liked it and they started working it up through the corporations. It went higher and higher, and everyone on the server side liked it, and then it would get up to a manager who was responsible not just for servers but for storage too. He would say, 'let's ask my storage guys and see what they think about it'. That's when they would hit a wall. The storage guys would say, 'this obviates two-thirds of my storage line. Why would I want to do that?'

So the founders went 0-for-5 with the storage guys who killed the idea. Now what did they do? They founded the company.

But they still kind of held onto the idea of licensing the technology, so they thought: 'let's find one big market that we could turn into a big success'. The market they picked was video surveillance.

Now, at that time, people were switching from analog cameras to HD cameras. More and more cameras were going up and they were having to store more and more data and that meant that the market was exploding, growing revenue at something like 62 percent a year. On a bytes basis, it was something like 400 percent a year.

That meant that video data was growing at such a rate that there was no way to store it on a replication basis. How much data needed to be stored? Our largest customer has a single system with 7PB of storage. You can't write 7PB three times.

So, the company just went to market as though they were a video surveillance company. They didn't talk about the technology and as a result built up a substantial business based on that one vertical.

Now how did they go from video surveillance to being a storage vendor? They had customers using the technology for video surveillance, but they started adding more applications on top of it.

Those customers thought that this was a pretty neat platform that just ran and needed minimal upkeep. Those guys would think, 'I could run this technology as a storage system and I wouldn't need a storage administrator'. Our storage was half the price of EMC storage, so the users started adding more applications, and that's how we got into the datacentre.

And that's why the company was under the radar screen for so long.

That was all happening about three years ago. How are things now?

Well, back in, say, 2015, most people who bought a hyperconverged system bought it for one application. Maybe they were buying a virtual desktop system, or a backup machine or, in our case, a backup video machine. But whatever they were buying it for, it was one application. But that changed.

Our 'Crossing the Chasm' moment was in 2016 when people were buying it for multiple applications from the start. People were visualising our system as a platform.

Where are your customers coming from now?

Last year, 33 percent of our revenue was from customers buying it for multiple applications. Some 25 percent were buying it for certain services such as running a database or a Microsoft app, but always a single application. And 10 percent was people buying it primarily for storage.

Last year you bought a flash storage vendor. Why was that?

We acquired a company called NexGen, which had two things that we wanted. At that time, when introducing a new storage system, the strategy was usually, 'throw away all that old stuff and use this new stuff'. In contrast, NexGen had a policy management system, a layer you could put over our products and extend into the cloud.

Now you could use this to look through all your legacy applications and decide which of them were, say, the most critical for response time. And you could then say, 'these are the ones that I have got to have, and the others that do not matter so much, I could put in the least critical places'.

With one kind you needed the best systems, but with others the cheapest would do. If you were using the data at, say, a hospital, where reliability and speed are essential, you would use the best. When you were just looking at casual apps on the web, you wouldn't worry so much. All of this was important for us because a third of our users buy our software for multiple applications.

If you looked at a Global 2000 company in, say, 1990, you could outsource a datacentre. They weren't risky systems that you ran. You had an SAP or a Siebel CRM system, and you had to watch the response times and all, but you could sort of do that manually.

But if you look at that same Global 2000 company today and all the servers and the applications they are running -- and they are probably running five to ten thousand virtual machines with applications -- then you can't do that manually.

To assure the right performance and the right cost-basis, you must have an automation layer and that is what that policy automation layer does for us.
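As a rough illustration of what such a policy layer automates, the sketch below maps each application's latency target to the cheapest storage tier that can still meet it. The tier names, thresholds, and function names here are hypothetical, invented purely for illustration; they are not NexGen's or Pivot3's actual policy engine.

```python
# Hypothetical policy-based placement: pick the cheapest storage tier
# that still meets each application's latency target.

# Tiers ordered fastest (most expensive) to slowest (cheapest), with
# the worst-case latency each one delivers, in milliseconds.
TIERS = [
    ("all-flash", 1),
    ("hybrid", 10),
    ("capacity", 100),
]

def place(latency_target_ms):
    """Walk from the cheapest tier up; take the first one fast enough."""
    for name, tier_latency_ms in reversed(TIERS):
        if tier_latency_ms <= latency_target_ms:
            return name
    return TIERS[0][0]  # nothing cheap enough meets the target: use the fastest

# A critical hospital app gets flash; a casual web archive gets cheap disks.
apps = {"hospital-records": 1, "crm": 15, "web-archive": 500}
placements = {app: place(target) for app, target in apps.items()}
# placements == {"hospital-records": "all-flash", "crm": "hybrid",
#                "web-archive": "capacity"}
```

The point is the scale argument from the interview: with thousands of virtual machines, a rule like this has to run automatically rather than being applied by an administrator per application.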

Would you say that you have almost coined the term 'hyperconvergence'?

The term hyperconvergence? It's a simplification. You could think about it forever. The functions of a datacentre -- processing, storage, networking, and security -- each have their own software, their own hardware, their own chips, and their own staff.

Now what people are saying is, 'let's get rid of all this complexity; let's use the most commoditised products' -- x86 products, which people will sell at razor-thin margins. Instead of products from an EMC -- which may have a 70 percent margin -- [companies want to] buy a server where there might be a four-percent margin.

Then you just think, 'let's take those servers and everything else will be just software on top of that'. Do that and it's much simpler to manage and you can get rid of all this specialised staff.

It's a path between legacy and cloud. Cloud can offer, in some cases, ease of operation and cheaper operation. Hyperconvergence gives you many of the pieces of cloud but you still control it. It can sit side-by-side with, and you can operate it with, your legacy systems.

I think in the future we are going to have a long-term view where there are multiple public clouds and just millions of private clouds and they will be hyperconverged. And the x86 platform is currently the foundation for that.

This is all a huge sea change in IT operations.
