It's not unusual to walk into a corporate data center and see rows and rows of rack-mounted servers - all busily churning out heat and cycles - and then hear the local IT management group promise that consolidation will soon reduce the acreage and energy used by the data center.
Thus it's not atypical for a data center operator with, say, 1,000 Wintel servers running at an average 4% CPU utilisation to promise management that he'll use virtualization to cut that to 100 physical servers within some time frame like one year.
It sounds good: on paper his average utilisation still won't hit 50%, and his energy and space costs will go down by 90% while the number of OS instances he has to look after will go up by only 10%.
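The arithmetic behind that pitch is easy to check. A minimal sketch (the server counts and 4% figure come from the example above; everything else is just multiplication):

```python
# Back-of-the-envelope check of the consolidation pitch:
# 1,000 servers at 4% average CPU, consolidated onto 100 physical hosts.
servers, avg_util = 1000, 0.04
hosts = 100

total_cpu = servers * avg_util        # aggregate demand: ~40 "CPUs" worth of work
per_host_util = total_cpu / hosts     # average utilisation on each new host

# Virtualization keeps every guest OS alive and adds one host OS
# (or hypervisor) per physical box, so OS instances grow, not shrink.
os_before = servers                   # 1,000 OS copies today
os_after = servers + hosts            # 1,000 guests + 100 hosts
growth = (os_after - os_before) / os_before

print(f"per-host utilisation: {per_host_util:.0%}")   # 40% - still under 50%
print(f"OS instance growth:   {growth:.0%}")          # 10% - the "only 10%" claim
```

So the "on paper" numbers in the pitch do hold up: 40% per-host utilisation and a 10% increase in OS instances.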
It's all very cool - and green too - except for two issues:
(1) users don't care about utilisation; what they want is fast response - and every improvement in system utilisation is accompanied by a disproportionate increase in service times. Bottom line: if you measure management on utilisation, you get better hardware utilisation - and reduced user productivity.
(2) real-world consolidation is almost never CPU or RAM limited - instead storage and networking turn out to be the performance-killing issues. Bottom line: for every dollar you save on installing and operating server hardware, you're likely to spend several dollars fighting a losing battle on storage and network throughput.
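Point (1) is standard queueing behaviour: in the simplest single-server (M/M/1) model, mean response time is S / (1 - utilisation), so response times grow non-linearly as utilisation climbs. A minimal sketch, assuming an illustrative 10 ms service time (a number not taken from the article):

```python
# M/M/1 queue: mean response time R = S / (1 - rho),
# where S is the per-request service time and rho the utilisation.
def response_time(service_time: float, utilisation: float) -> float:
    """Mean response time in an M/M/1 queue; requires 0 <= rho < 1."""
    assert 0.0 <= utilisation < 1.0
    return service_time / (1.0 - utilisation)

S = 0.010  # 10 ms of service per request (assumed for illustration)
for rho in (0.04, 0.40, 0.80, 0.95):
    r_ms = response_time(S, rho) * 1000
    print(f"utilisation {rho:>4.0%}: mean response {r_ms:.1f} ms")
```

Going from 4% to 40% utilisation costs relatively little here, but the same model shows response times quadrupling by 80% and exploding near saturation - which is why "better utilisation" and "fast response" pull in opposite directions.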
Most importantly, there are better options: simply running multiple applications on the same OS instance avoids the overheads of hypervisor switching and duplicate OS copies; adopting Lintel (Linux on x86) reduces overheads and failure risks; and switching to Sun's CMT/SMP technologies under Solaris offers radically better performance for less money.