Commentary - When it comes to IT, organizations can't, as the saying goes, manage what they can't see and measure. Organizations have spent roughly 30 years building their existing IT infrastructure: from mainframes to internal networks and client/server applications and out to the Web. For the most part, organizations know how to measure their risk and performance – and they have the clarity necessary to optimize their systems. Unfortunately, the same isn't true for emerging and rapidly embraced technologies such as virtualization and cloud computing.
This raises the question: How do organizations guarantee the same level of maturity they enjoy within their physical infrastructures as they extend into virtualized and (eventually) cloud environments? How do they ensure uptime? How do they measure the risk associated with their workloads? In short: How do they lift the technical fog that accompanies the incremental addition of virtualization and cloud computing services?
The word extend is no accident here. Enterprises (except for perhaps the rare startup) don't move to virtualization or to cloud computing: they extend certain portions of their infrastructure in these directions. The distinction is minor – but important – as we will continue to manage a hybrid of physical, virtualized, and increasingly cloud IT consumption models for many years to come.
And this will demand that IT departments keep their identity, access, security, and regulatory compliance policy enforcement consistent across all three IT models. There are numerous management commonalities among physical, virtual, and cloud environments, but identity is perhaps the most integral. Because identity is so crucial to successful IT management, security, and regulatory compliance, we will use identity – in relation to workloads' business value and business objectives – as a central theme as we discuss what is necessary to manage workloads successfully in all environments.
Know your workloads
To succeed in our increasingly complex IT environments, IT needs to understand and manage its workloads in the context of their environment.
The same is true of athletes. Consider a runner who is accustomed to racing only on tracks in the Midwestern United States. This person is not going to perform as effectively on sand as someone who has trained regularly on the beach. Nor will either runner function as well running a marathon one mile above sea level in Denver, Colorado. In each example, it's not the runner who has changed, but the runner's environment. Now, if you were a track coach, you'd certainly consider each runner's experience with the track (or environment) where they're going to be asked to perform.
Why don't we approach our IT workloads in the same way? Before we ask workloads to perform in certain environments, we need to know a number of things about them. First: Is the workload mission critical? Does the workload need to perform to five nines of availability, or would three nines suffice? There is a big difference in the expense of running the two. That's why mission criticality is one of the first criteria for determining where a workload can, or should, run.
Another important criterion is the confidentiality of the workload. Is the workload highly confidential? Does it contain proprietary information that can't be allowed to leak to the public or into the hands of competitors? Perhaps it's highly regulated customer information, such as health or bank account data. In this case, the organization needs to be careful about the security of the infrastructure the workload runs on – as well as that of the workload itself.
Another criterion: How much would it cost the organization should the workload become unavailable? For instance, a software development company may look at workload costs differently than a retailer. How much revenue is lost by a national retailer when its e-commerce platform goes offline for an hour? That could be a much different cost than if an international team of developers can't access its workload for an hour or two. Determining this cost will help the business know how much it should spend to ensure optimal uptime and what service levels are necessary.
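The criteria above – mission criticality, confidentiality, and the cost of downtime – can be captured as a simple workload profile. The sketch below is illustrative only; the class and attribute names are assumptions, not any vendor's actual schema. It also shows how an availability target translates into a yearly downtime budget, which makes the cost difference between "three nines" and "five nines" concrete.

```python
from dataclasses import dataclass
from enum import Enum


class Confidentiality(Enum):
    PUBLIC = "public"
    PROPRIETARY = "proprietary"
    REGULATED = "regulated"  # e.g. health or bank account data


@dataclass
class WorkloadProfile:
    """Classification attributes discussed above (names are illustrative)."""
    name: str
    availability_target: float       # 0.999 = three nines, 0.99999 = five nines
    confidentiality: Confidentiality
    downtime_cost_per_hour: float    # estimated revenue/productivity loss

    def max_downtime_minutes_per_year(self) -> float:
        """Yearly downtime budget implied by the availability target."""
        return (1 - self.availability_target) * 365 * 24 * 60


# A hypothetical retailer's storefront: mission critical and regulated.
storefront = WorkloadProfile(
    name="ecommerce-frontend",
    availability_target=0.99999,
    confidentiality=Confidentiality.REGULATED,
    downtime_cost_per_hour=250_000.0,
)
print(round(storefront.max_downtime_minutes_per_year(), 1))  # prints 5.3
```

At three nines the same calculation yields roughly 526 minutes of allowable downtime per year – a gap that justifies very different spending on infrastructure.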
Intelligent workload management
All of this may sound simple. Yet the reality is that many organizations don't view their workloads in this way – despite how important it is that they do. Why? Because users often are bad IT coaches, and they're very likely to try to extend workloads into environments where they're not well suited to perform.
Users may, for instance, try to put a workload containing confidential information into a public cloud environment – but that will most likely violate security policy. With the context of the workload's identity embedded and the proper management tools in place, the workload won't allow itself to run in a public cloud; it will revert to an internal cloud, or to another location where it is permitted to run because the required security controls are trusted to be in place.
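A minimal sketch of this placement logic, assuming a hypothetical policy table and `place` function (neither reflects any real product's API): a workload whose identity attributes mark it as confidential is refused a public cloud and reverts to the first permitted environment.

```python
# Environments each confidentiality class is permitted to run in
# (illustrative policy, not a real product's configuration).
ALLOWED_ENVIRONMENTS = {
    "public": {"public-cloud", "internal-cloud", "datacenter"},
    "proprietary": {"internal-cloud", "datacenter"},
    "regulated": {"datacenter"},  # e.g. regulated data stays on-premises
}


def place(confidentiality: str, requested: str, fallbacks: list) -> str:
    """Return the first environment permitted by policy, preferring the
    user's request but reverting when policy forbids it."""
    allowed = ALLOWED_ENVIRONMENTS[confidentiality]
    for env in [requested, *fallbacks]:
        if env in allowed:
            return env
    raise RuntimeError("no permitted environment for this workload")


# A user tries to push a proprietary workload to a public cloud;
# policy reverts it to the internal cloud instead.
print(place("proprietary", "public-cloud", ["internal-cloud", "datacenter"]))
# prints "internal-cloud"
```

The point is that the decision is driven by the workload's own identity attributes, not by where the user happens to deploy it.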
Think this type of identity-based dynamic orchestration is far-fetched? It's not. A new market is emerging called intelligent workload management, in which workloads are policy-driven, secure, and compliant. We're going to see more confidential workloads extend from physical and virtualized data centers to the cloud, and we are going to see more regulations that relate to cloud computing. These regulations will determine where and when certain types of workloads can operate. Imagine a health care provider, for example, that wants to reap the benefits of cloud computing for some tasks but wants to make certain that HIPAA-related workloads remain on-premises. By encapsulating identity attributes in workloads that carry regulated information, the provider can ensure that such workloads never leave its internal data center.
The same disciplines can help save IT budget, too. Workloads that don't need superior SLAs won't be run in highly available environments. Classifying workloads based on identity will also help ensure they're managed properly through their entire life cycle, as organizations will know the value of the workload, the criticality of the data, and when the workload can be retired. We've already witnessed the consequences of not managing virtualized workloads: customers with thousands of virtual machines whose IT managers don't know who created them or what their original purpose was. Some of these companies are literally deleting all of their virtual machines and starting over.
Such challenges will only be magnified as IT infrastructures increasingly extend to the cloud, which is why it is important to tackle them now. Once an organization embraces virtualization and cloud computing, expansion is swift, and if the right processes are not in place, control can be lost quickly. By simply classifying workloads and encapsulating some basic information about their identity, value, business criticality, and confidentiality, there is no reason the full value of virtualization and cloud computing cannot be attained.
Richard Whitehead is director of new market strategies at Novell. Richard has also held senior positions in product management and marketing for Citrix, Franklin Covey and WordPerfect.