Perspective

Particularly in these straitened times, people tend to assume server virtualization is one investment that will reduce IT costs. Is that assumption true?
Yes and no. Historically, the case for return on server-virtualization investments has usually centered on reductions in capital expenditure, or Capex: reductions not only in data center facilities, but also in hardware spending and associated maintenance.
Virtualization tends to control the proliferation of physical servers and makes existing and new server investments more effective as shared resources. So far, so good.
But what about operating expenses, or Opex? Opex usually includes things such as facilities--power and cooling, for example--as well as human labor costs, including salaries, bonuses and insurance. Labor costs typically make up the lion's share of any IT budget.
Energy savings may become one of the primary goals of a virtual server investment and so justify the cost in terms of delivered benefits. Still, that leaves us with the impact of virtualization on labor costs, which is critical if an IT department is to derive a realistic return on investment.
It is in the category of labor, or system administration, costs that the utility of the server-virtualization investment becomes somewhat less clear. Amdahl's law tells us that the overall speedup of a system is limited by the portion that cannot be improved. By analogy, the benefit of a virtual-server investment will ultimately be governed by the cost of its largest associated component: systems administration. How should we set about measuring those costs?
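The analogy with Amdahl's law can be made concrete. In the sketch below, the improvement factor and budget split are purely hypothetical figures chosen for illustration: even if virtualization cuts hardware-related spending fivefold, overall savings are capped by the untouched labor share of the budget.

```python
def amdahl_speedup(improved_fraction, factor):
    """Amdahl's law: overall improvement when only part of a system improves."""
    return 1.0 / ((1.0 - improved_fraction) + improved_fraction / factor)

# Hypothetical figures: virtualization cuts hardware-related costs 5x,
# but those costs are only 40% of the budget; labor (60%) is untouched.
overall = amdahl_speedup(0.40, 5.0)
print(round(overall, 2))  # overall cost improvement is well under 2x
```

Under these assumed numbers the overall improvement is about 1.47x, far short of the 5x achieved on the hardware slice, which is exactly why administration costs dominate the return.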
Few models exist to assess the impact of server virtualization on system administration costs for IT organizations. That shortage is exacerbated by the failure of many organizations to measure system-administrative costs in a non-virtual environment for comparative purposes.
One answer is for organizations to develop a server virtualization model to assess total system-administrative costs. This model would include activities such as root-cause analysis to account for the often hard-to-calculate costs incurred by the added complexity of a virtualized environment.
One option is to borrow approaches typically used to measure software complexity. A complexity-based approach using both 'white-box' and 'black-box' techniques could help establish an appropriate costing framework, irrespective of the activity being analyzed.
In the white-box method, we need to understand the internal structure of the program--for example, the lines of code or number of 'if' statements. In the context of virtualization, this approach could translate to aggregating the timing of the administrative steps in a service response to an end-user request.
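The white-box idea of aggregating timed administrative steps can be sketched in a few lines. All of the step names, durations, and the labor rate below are hypothetical placeholders, not figures from the article.

```python
# White-box costing sketch: when the administrative steps in servicing an
# end-user request are well defined, total labor cost is simply the sum of
# (time per step) x (labor rate). All names and figures are hypothetical.
HOURLY_RATE = 75.0  # assumed fully loaded system-administrator rate

steps_minutes = {
    "triage ticket": 10,
    "identify host and guest VM": 15,
    "check hypervisor resource pools": 20,
    "apply fix and verify": 30,
}

total_hours = sum(steps_minutes.values()) / 60.0
cost_per_request = total_hours * HOURLY_RATE
print(f"labor cost per request: ${cost_per_request:.2f}")  # $93.75
```

Timing each step also exposes which administrative activities dominate the cost, which is where process improvement should start.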
The use of a white-box methodology works when the specific steps are well defined. Alternatively, an organization could adopt a black-box approach when the specific administrative procedures are unknown or immature. If we cannot see inside a process, we may be able to infer the complexity--and thus the cost--from an external perspective. Using this method, we would try to define cost as a function of the number of interdependencies.
We could count interdependencies such as other virtual servers on the same hardware; the virtualization hypervisor layer; server resources such as memory, CPU, network and disk adapters; and the storage area networks (SANs). We could then assume that the work needed to ascertain the source of potential problems increases with this complexity.
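A black-box estimate along these lines might count the interdependencies and scale expected diagnostic effort with them. The counts below, and the assumption that root-cause analysis must consider pairwise interactions, are illustrative only.

```python
# Black-box costing sketch: when internal procedures are opaque, estimate
# diagnostic effort from the number of interdependencies a problem could
# involve. Counts and the pairwise-scaling assumption are hypothetical.
interdependencies = {
    "co-resident virtual servers": 6,
    "hypervisor layer": 1,
    "shared resources (CPU, memory, NICs, disk adapters)": 4,
    "SAN paths": 2,
}

n = sum(interdependencies.values())
# Assume root-cause analysis may have to consider interactions between
# any two components, so effort grows roughly with n * (n - 1) / 2.
pairwise_checks = n * (n - 1) // 2
print(f"{n} interdependencies -> up to {pairwise_checks} pairwise interactions")
```

The point of the model is not the exact numbers but the shape of the curve: adding one more shared component to a consolidated host raises diagnostic effort faster than linearly.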
The difficulty in measuring costs does not necessarily weaken the argument for investment in virtualization technology, but it could reduce some of the net Opex benefits. The degree to which that reduction occurs would depend on the values assigned to the variables in the models.
However, the key point is not to assume blithely that there is only an upside to the operational impact of server virtualization. By taking a more rigorous approach to costs, you will enhance your credibility with the key decision makers in your organization. And by exposing the potential difficulties, you could be laying the groundwork for improving these potentially problematic processes.
Cameron Haight is a research vice president at Gartner Research. His research focuses on the management of server-virtualization environments such as VMware, Citrix and Microsoft, including the development of operational best practices. This article first appeared on ZDNet Asia's sister site, ZDNet UK.