Five reasons to deliver a virtual service, not a virtual server

What began as an interesting technological experiment has now become an opportunity for IT to shape its future, says Fortisphere's Lilac Berniker

Commentary - As virtualization becomes an increasingly commonplace technology in the datacenter, the breadth of stakeholders in the virtual environment is growing.

Initially, the customer was internal to IT: everything from print servers to development machines was owned by people who carried IT business cards themselves. Now, as business applications become virtualized, these new stakeholders are less concerned with GHz, more concerned with uptime, and clamoring for assurances that the virtual environment will meet or exceed their experience with physical servers.

What began as an interesting technological experiment has now become an opportunity for IT to shape its future. It can embrace this pivot point in infrastructure architectures as a chance to reframe the relationship with the business. The new relationship is one of a service provider, delivering to the business customer an ongoing level of performance in support of the workloads. Rather than being in the server business, IT can be in the service business.

To the business customer, this changes the ordering process. Instead of specifying gigabytes and gigahertz, the business simply assesses how critical the application is and the acceptable thresholds of risk tolerance, and trusts IT to provide the necessary virtualized resources. Instead of a point purchase at the start of a project, the resources can be modified over time, growing and shrinking to meet budgetary constraints and market demands. Freed from the bounds of a fixed set of resources, the business can run the business, while IT manages the rest.

Such a model has five key benefits:

  • Increased returns on IT investment
It is well known that risk-averse customers will request far more capacity than they will ever need, to ensure they never run short at a critical moment. In the physical world, this manifested itself in the purchase of far larger servers than necessary, and those very servers are now being virtualized to support multiple workloads.

If the appropriate service levels, risk thresholds, and alerts are agreed upon between IT and the business customer, then IT can own the resource allocation decision while ensuring the business receives the agreed-upon service. By cutting the excess over-allocated resources, IT can greatly increase the density of VMs in the environment, saving hardware and software costs.
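As a hypothetical illustration of that rightsizing decision (the VM names, allocations, and headroom figure below are invented, not taken from any real environment), the calculation can be sketched as shrinking each allocation to observed peak usage plus an agreed buffer:

```python
import math

# Invented policy values: a 25% headroom buffer agreed with the business
# customer, and a one-vCPU floor so no VM is starved entirely.
HEADROOM = 0.25
MIN_VCPUS = 1

# (name, allocated vCPUs, peak observed vCPU usage) -- illustrative data only
vms = [
    ("print-server", 4, 0.6),
    ("erp-app", 8, 5.2),
    ("dev-build", 16, 3.1),
]

def rightsize(allocated, peak):
    """New allocation: peak usage plus headroom, rounded up,
    never below the floor and never above the current allocation."""
    target = max(MIN_VCPUS, math.ceil(peak * (1 + HEADROOM)))
    return min(allocated, target)

total_before = sum(alloc for _, alloc, _ in vms)
total_after = sum(rightsize(alloc, peak) for _, alloc, peak in vms)
print(f"vCPUs reclaimed: {total_before - total_after} of {total_before}")
# With the invented data above, 16 of 28 vCPUs are reclaimed.
```

The point of the sketch is that once thresholds are agreed, the reclaim decision becomes mechanical rather than a negotiation over each VM.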

  • Increased admin to VM ratios
Most firms find that as their virtualized environments grow, so too grows the number of system administrators required to support them. Because a tremendous amount of knowledge about the environment is stored in the minds of the administrators, and because the environment is not self-monitoring, many sets of highly experienced eyes must watch for errors and triage issues simply to maintain quality and availability.

With deep configuration information, knowledge of critical workloads in high service tiers, and proactive alerting to potential issues, a smaller team of administrators can dependably manage growing numbers of VMs while improving service quality.

  • Prioritized response time
While every workload is important, not every workload is equally important. Some require immediate triage and response, while others can wait for morning. Some should have significant capacity buffers while others can run hot. Without a service framework governing the priority of the workloads in the virtual environment, administrators are often left responding to each support call or request for additional resources with the same level of urgency.

By creating separate thresholds and alerts for workloads of different service levels, prioritization decisions of both administrative time and IT resources can be made according to the priorities of the business.
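A minimal sketch of such tiered thresholds might look like the following; the tier names, utilization cutoffs, and paging rules are invented for illustration and would in practice come from the service agreement with the business:

```python
# Hypothetical per-tier alert policy: gold-tier workloads page the
# on-call administrator immediately, lower tiers can wait for morning.
TIERS = {
    "gold":   {"cpu_alert": 0.70, "page_now": True},
    "silver": {"cpu_alert": 0.85, "page_now": False},
    "bronze": {"cpu_alert": 0.95, "page_now": False},
}

def triage(workload, tier, cpu_utilization):
    """Decide the response for a workload given its service tier."""
    policy = TIERS[tier]
    if cpu_utilization < policy["cpu_alert"]:
        return "ok"
    return "page on-call" if policy["page_now"] else "queue for morning"

# The same 75% utilization triggers a page for a gold workload
# but is well within tolerance for a bronze one.
print(triage("erp-app", "gold", 0.75))      # prints "page on-call"
print(triage("dev-build", "bronze", 0.75))  # prints "ok"
```

The design choice is that urgency lives in the policy table, not in the administrator's judgment at 3 a.m., which is what lets response effort follow business priority.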

  • Delighted business customers
In many firms, the business customers are leery of the virtual environment, which often comes with different metrics, different types of failures, and different levels of visibility. The inherent flexibility in the amount, location, and interconnectedness of resources, which creates the IT advantage, engenders an understandable level of distrust from the customer.

To overcome that distrust, the key is to provide the customer with enough visibility to assure them of the quality and consistency of the service delivered. Sadly, most virtualization management consoles are both overwhelmingly powerful and complex, and wholly unsuited to the needs of the business customer. Instead, a role-based web dashboard, tailored to answer specific customer questions on performance, uptime, and configuration changes, can go a long way towards allaying concerns.

Customer self-service can go a step further, enabling customers to answer their own questions about what happened, when, and what the impact of the change was. By placing these tools at their fingertips, IT can anticipate a decrease in requests for additional resources and in troubleshooting reports. With more information about the environment exposed, the customer will be far more trusting of this new infrastructure paradigm.

  • Confidence in the infrastructure
With the low-hanging workloads picked, IT is now in a position to expand its virtual infrastructure to support increasingly critical workloads. The challenge that must be overcome is one of confidence in the infrastructure, not only by the business customer but by IT management. Most of these workloads are important enough to the business that failure comes with serious consequences.

To demystify the black box that the virtual infrastructure presents, IT management too could benefit from visibility, not only into configurations and utilization levels, but also into the service being delivered to the business. Armed with statistics on uptime and resource consumption, IT management is in a better position both to defend the strength of the infrastructure and to support ongoing investment. Perhaps most importantly, consistent, demonstrable success builds confidence, which drives organizational support and growth.

As with all major transformations in the IT world, virtualization brings with it change beyond the physical hardware and the bits and bytes of software. IT is increasingly critical to the operations of most businesses, and therefore the organizational implications of this wave of change are more far-reaching than before. However, what may seem like a burden is really a tremendous opportunity for IT to embed itself more centrally as a core partner to the business.

Of course, there is the option to resist. With the growing availability of cloud computing, it is possible simply to outsource the entire virtualized infrastructure, leaving service delivery to actual third-party service providers. Some larger firms have already begun that process, in an effort to avoid this very organizational change.

However, once virtualization is outsourced to the service providers, IT organizations find themselves managing the remaining systems: the Unix boxes and the mainframes, all of which are next in line to be virtualized and outsourced. IT becomes a legacy division, actively contributing to its own irrelevance. Few CIOs are seeking this eventuality.

Faced with the choice to move forward, embrace a service model, and pursue greater partnership and integration with the business, or to move backwards to the known territory of mainframe management, most motivated IT executives and systems administrators will choose the former path. Happily, they are no longer alone in this choice; together with their colleagues and peers in other organizations, they can make the future of virtualization deliver on better ROI, lower costs, and much better service.

Lilac Berniker is senior director of business development for Fortisphere.