Are mission critical systems the second cloud computing wave?

Complicated legacy systems are a lot harder to move to the cloud, but this isn't going to stop companies from trying.
Written by Colin Barker, Contributor


Dell's public cloud company Virtustream has 1,500 employees and more than 20 datacentres and operations across 10 countries, helping enterprise customers migrate and run their applications in the cloud. ZDNet spoke to its EMEA CTO Roberto Mircoli to find out more.

ZDNet: Tell me about the company.

Mircoli: Virtustream is the public cloud company of Dell Technologies. It was founded in 2009 and acquired by EMC in 2015. When the Dell/EMC merger closed in 2016, Virtustream became part of the Dell Technologies family.

Going back to 2009, Virtustream was started by five founders. It was nothing like a sexy Silicon Valley startup, because the people who started Virtustream were really senior industry veterans. Rodney Rogers, now CEO, and Kevin Reid, CTO, came with significant experience deploying large ERP systems in the US.

The core experience of these two individuals is in what it takes to deploy large-scale systems. The other two founders had deep expertise in infrastructure: networking, storage, compute, and virtualisation. That made for an interesting combination of expertise. The fifth founder had run the R&D centre at SAP.

These people brought a complementary skillset. This is significant because they founded the company around a very specific market opportunity: resolving the engineering issues involved in moving such systems to the cloud.

Now that meant handling a category of workloads that were not designed for standard cloud adoption -- the legacy, mission-critical applications.

Isn't it true that many companies don't have the wherewithal to do that when the key point is the infrastructure?

That's very true, especially for this particular category of workloads -- take SAP as an example. There you have an entire set of workloads that belong to this category: mission-critical workloads. What do you call mission critical? Anything that, if it breaks, is a fatal issue. That is the business definition of mission critical.

From a technical perspective, these application systems are typically monolithic. They are genuinely legacy -- what a company runs today might be the result of decades of customisation. They might also run on legacy platforms like the mainframe.

But what they have is a fundamental architectural dependency between the application layer and the infrastructure layer, because of the way those applications were constructed. They depend on the underlying infrastructure of networking, storage and compute for their performance attributes and their reliability. For this category of workload, the relationship between the application layer and the infrastructure layer is absolutely tightly coupled.


Mircoli: "We can guarantee that those systems will never go into starvation of performance."

Photo: Colin Barker/ZDNet

We believe that the core experience we have is crucial if you are addressing this category of use case for cloud adoption. You are not addressing a generic, general-purpose application that may be natively designed for cloud or easily migrated to it, and you are not dealing with a mere matter of governance -- this is a genuinely complicated, technical problem.

To put what we are doing in the perspective of what we see happening in the cloud industry: I think the first eight or nine years of the industry's development constituted the first wave of cloud adoption.

But all of that was related to non-mission-critical applications. Think about application development, general-purpose applications, email, and so on -- those are natural fits for the cloud. The first wave addressed the low-hanging fruit of cloud adoption. What's not low-hanging fruit is the other category of applications.


Then there is the other category of applications that sits in a typical enterprise environment -- the mission-critical workloads. These are the second wave of cloud adoption, where the market is more mature.

This is where enterprises are shedding legacy systems and the overhead of over-complex infrastructure. This is where Virtustream comes in.

Businesses see cloud as a way of transforming their operating model. They have moved from a position where the onus on IT was to keep the systems running to one where IT is really defining the information infrastructure of the organisation.

Helping organisations use cloud for their mission-critical applications is what we do. It is not general-purpose cloud; we do cloud in quite a special way, different from general-purpose, hyperscaler cloud models.

Can you give me examples of this?

The way our cloud nodes are designed and architected is very different from a typical cloud architecture. They are more similar to the design you would see in a typical enterprise's private datacentre.

For example, our design is unique in how it isolates workloads into logical zones, which is required to guarantee the stringent security and compliance requirements of mission-critical workloads.

There is an information blueprint that takes care of those fundamental requirements of the workloads.

So, security and compliance is one. The other is a guarantee of the performance of those applications. With mission-critical workloads there must be no performance degradation -- it is simply not an option.


In order to guarantee the performance of those applications as they move into the cloud, you have to manage the resources of the cloud -- the compute power, the storage, the memory availability -- and ensure that across different segments of applications those resources remain available. This is where our core intellectual property comes in.
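Mircoli doesn't spell out the mechanism here, but the idea of guaranteeing performance by managing cloud resources can be sketched as a simple reservation and admission-control check: a workload is placed on a node only if its full reserved compute, memory, and storage still fit within that node's capacity. The class names and figures below are illustrative assumptions, not Virtustream's actual implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Reservation:
    cpu_cores: float   # reserved compute
    memory_gb: float   # reserved memory
    storage_gb: float  # reserved storage

@dataclass
class Node:
    cpu_cores: float
    memory_gb: float
    storage_gb: float
    reservations: list = field(default_factory=list)

    def free(self):
        """Capacity remaining after all current reservations."""
        used_cpu = sum(r.cpu_cores for r in self.reservations)
        used_mem = sum(r.memory_gb for r in self.reservations)
        used_sto = sum(r.storage_gb for r in self.reservations)
        return (self.cpu_cores - used_cpu,
                self.memory_gb - used_mem,
                self.storage_gb - used_sto)

    def admit(self, r: Reservation) -> bool:
        """Place a workload only if its full reservation still fits,
        so admitted workloads can never starve each other."""
        free_cpu, free_mem, free_sto = self.free()
        if (r.cpu_cores <= free_cpu and r.memory_gb <= free_mem
                and r.storage_gb <= free_sto):
            self.reservations.append(r)
            return True
        return False

node = Node(cpu_cores=32, memory_gb=256, storage_gb=2000)
assert node.admit(Reservation(16, 128, 1000))    # fits
assert not node.admit(Reservation(24, 64, 500))  # would oversubscribe CPU
```

The point of the sketch is the refusal path: a general-purpose cloud might oversubscribe and degrade under load, whereas a guarantee-based model rejects placements that could ever exhaust a dimension.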

And that's what you've done from the earliest days?

Exactly, it's the foundation of the company. As I said, the company was founded to address the engineering problems of this use case.

One of the solutions to that problem was to develop intellectual property we call MicroVM. This is essentially a fundamental unit of measure across the four dimensions of resources a system requires: networking, storage, compute and bandwidth.

Having defined this very granular unit of measure -- the combination of those four -- we use it to slice up the resource requirement of every workload, and we do it in real time.

On one side, this allows us to monitor the consumption pattern of those workloads very closely. On top of that, we use intelligent monitoring techniques to ensure that at any specific moment -- during spikes or otherwise -- those workloads receive, in very granular increments, whatever they require in that four-dimensional space.

It sounds very complicated -- it is very sophisticated -- but the net result is that the workloads that we are running in the cloud are always fed with the resources that they need.
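As described, a MicroVM amounts to a small fixed quantum across the four dimensions, with each workload's demand expressed as a vector and its allocation grown in granular increments as consumption approaches the limit. A minimal sketch under that reading -- the unit sizes, threshold, and function names are all hypothetical, not Virtustream's actual design:

```python
import math

# One MicroVM as a small quantum of the four dimensions Mircoli names:
# networking, storage, compute and bandwidth (illustrative unit sizes).
MICRO_VM = {"network": 1.0, "storage": 10.0, "compute": 0.5, "bandwidth": 5.0}

def micro_vms_required(demand: dict) -> int:
    """Smallest number of MicroVM units covering demand in every dimension."""
    return max(math.ceil(demand[d] / MICRO_VM[d]) for d in MICRO_VM)

def rebalance(allocated_units: int, observed: dict, headroom: float = 0.8) -> int:
    """Grow the allocation in granular increments when observed consumption
    crosses the headroom threshold in any dimension; never shrink below need."""
    needed = micro_vms_required(observed)
    if needed > allocated_units * headroom:
        return needed + 1  # top up with one unit of spare headroom
    return allocated_units

demand = {"network": 3.2, "storage": 45.0, "compute": 2.1, "bandwidth": 12.0}
units = micro_vms_required(demand)  # 5 units, bound by storage and compute
```

Because the allocation is measured and adjusted per quantum rather than per whole VM, a spike in one dimension (say bandwidth) can be met with a small top-up instead of a disruptive resize.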

So, what you are saying is that the system just keeps going and when it spots problems coming it has allowed for them and fixes them on the fly?

Absolutely. It has to be the case. If your business is running mission-critical workloads in the cloud, either you spot a smart way to do that or you are out of business.

You are dealing with mission-critical applications so your users will not consider you as an option unless you can solidly guarantee that.


We've engineered the entire company around this. We only address companies that run mission-critical systems.

What we have done is take infrastructure-as-a-service (IaaS) and combine it with managed services. So it's not just a cloud that is suitable for mission-critical applications; service management is combined with it.

Also, when a company migrates services to the cloud, it has the option of making us responsible for those services. If you move to the cloud, the complexity does not just go away: somebody needs to manage it, secure it, and architect disaster recovery. With us, a customer can not only adopt a reliable cloud architecture but can also offload the complexity of keeping those systems working.

This is a new definition for cloud: a public cloud for mission-critical workloads which is also a managed public cloud.

