META Trend: Through 2003/04, infrastructure consolidation will be driven by value-based portfolio management, but remain impaired by non-linear server pricing, immature tools, service-level priorities, chargeback, and organizational politics. Physical co-location and networked storage consolidation will be widespread during 2002/03. Premium high-end server pricing, coupled with immature partitioning and workload management, will hinder higher-level OS, DBMS, and application server consolidation for Unix (until 2003/04) and Windows (until 2005/06).
By 2003/04, 70%+ of ITOs will have implemented the initial phases of infrastructure unification (e.g., physical co-location, centralized operations/management, shared storage, same-workload consolidation). However, few ITOs will have implemented consolidation projects that mix significantly different workloads on Unix and Windows servers, due to the 2x-3x price premium of “consolidation” servers (i.e., >8-way servers) and, to a lesser degree, the lack of mature workload management (WLM) and partitioning tools.
Through 2004, high-end "consolidation" servers will face aggressive competition from well-managed commodity 4- and 8-way Intel servers. By 2006/07, high-end RISC/Unix servers will be all but replaced by commodity Intel offerings from HP, Dell, and IBM that leverage native and third-party consolidation features.
Because most ITOs support roughly 5x-10x more Windows servers than Unix servers, we believe Windows consolidation represents the larger and simpler opportunity for TCO reduction. Our research indicates that same-workload consolidation (e.g., Exchange, file and print) can yield a 5x reduction in the number of servers, either because the servers are underused (many Windows servers run at <20% utilization) or because older servers can be replaced by much more powerful systems (exploiting Moore’s Law, with processor power doubling roughly every 18 months). Although Windows consolidation works well for identical workloads, it is difficult for mixed workloads because of Windows’ limitations in isolation and resource scheduling.
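The 5x figure above can be sanity-checked with back-of-the-envelope arithmetic (a sketch using the numbers in the text; the 60% target utilization is an illustrative assumption, not a figure from this note):

```python
# Rough consolidation arithmetic: processor power doubling every 18
# months (Moore's Law as cited), average Windows server utilization
# under 20%. The 60% target utilization is an illustrative assumption.

def replacement_power(age_months, doubling_months=18):
    """Relative power of a new server versus one age_months old."""
    return 2 ** (age_months / doubling_months)

def utilization_headroom(current=0.20, target=0.60):
    """How many underused servers fit on one server driven to target."""
    return target / current

# A three-year-old server is about 4x weaker than a current one, and
# raising utilization from 20% to 60% packs three servers into one;
# the two factors together comfortably cover the observed ~5x reduction.
print(replacement_power(36))              # 4.0
print(round(utilization_headroom(), 2))   # 3.0
```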
Poor isolation means that, if an application misbehaves (e.g., leaks memory, writes to resources it does not own), other applications, or the operating system itself, can be disrupted and fail. Although Windows 2000 offers much-improved isolation over NT, it remains a very long way from the 100% isolation gold standard set by z/OS.
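The isolation failure described above can be sketched with a toy example (the `config` resource is hypothetical, and an ordinary OS process stands in for a VM or partition): a routine that scribbles on state it does not own damages everything sharing its address space, while the same routine run in a separate, isolated copy leaves its neighbors untouched.

```python
# Toy illustration of isolation (not from the note): the same buggy
# routine run in-process corrupts shared state; run in a separate
# process (a crude stand-in for a VM/partition), the damage is contained.
import multiprocessing

def misbehaving_app(config):
    config.clear()  # bug: destroys a resource it does not own

def run_shared():
    config = {"db_host": "localhost"}   # hypothetical shared resource
    misbehaving_app(config)             # same address space
    return "db_host" in config          # False: the neighbor is disrupted

def run_isolated():
    config = {"db_host": "localhost"}
    child = multiprocessing.Process(target=misbehaving_app, args=(config,))
    child.start()
    child.join()
    return "db_host" in config          # True: the child only got a copy

if __name__ == "__main__":
    print(run_shared(), run_isolated())  # False True
```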
Weak resource scheduling means it is impossible to guarantee that every application receives a specified minimum share of resources; without such guarantees, application service levels cannot be assured. Windows is particularly weak in this area, providing only simple processor affinity through job objects.
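The distinction matters because affinity restricts where a process may run, not how much CPU it receives. A minimal sketch, using Linux's `os.sched_setaffinity` as an analogue of the Windows job-object affinity mask discussed above (the Linux call is our stand-in, not something the note names):

```python
# Affinity is a placement constraint only: it cannot guarantee an
# application a minimum share of CPU, which is why affinity alone
# cannot assure service levels. Sketched with Linux's affinity API
# as an analogue of Windows job-object affinity.
import os

def pin_to_cpu(cpu_index):
    """Restrict the calling process to a single CPU and return its mask."""
    os.sched_setaffinity(0, {cpu_index})   # 0 = the current process
    return os.sched_getaffinity(0)

# Two processes both pinned to CPU 0 still compete freely for it;
# neither is promised a minimum share of its cycles.
print(pin_to_cpu(0))    # {0}
```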
Because of these limitations of poor isolation and weak resource scheduling, most ITOs run a separate Windows (and often Unix) server for each application, resulting in a large number of servers that are often underused. Reducing the number of servers can reduce the total hardware cost and the number of full-time equivalents needed for hardware management. By definition, these limitations are not an issue for same-workload consolidation, because the server only runs a single application.
Server Virtualization and Partitioning
One of the techniques used by mainframes to enforce isolation and resource management is virtualization. A virtual machine manager (VMM) or “hypervisor” creates a series of virtual environments that run on top of the hardware, giving the illusion that each virtual machine (VM) is a separate, unshared collection of resources.
Virtualization (or logical partitioning) should not be confused with physical partitioning (i.e., a server is physically partitioned at the hardware level into separate machines, each with its own CPU, memory, and I/O resources). In this configuration, the resources are not shared, and each partition must have whole units of resources (e.g., a whole number of CPUs). In virtualization, the virtual machine (VM, or logical partition) appears to be a separate system; however, it is mapped onto the hardware, and all resources are potentially shared. For example, a VM can be allocated 20% of a CPU, or a VM can share an I/O controller’s bandwidth with another VM. Although partitions are simpler for vendors to implement, and potentially more robust, they do not provide the highly dynamic and fine-grained control over resources that is possible with VMs. The greater issue is that partitionable servers carry a 2x-3x price premium over standard servers, which greatly reduces the opportunity for TCO reduction.
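The whole-unit versus fractional distinction can be sketched with illustrative numbers (the 0.2-CPU requests echo the 20% example above; the functions themselves are hypothetical):

```python
# Contrast sketch: physical partitions consume whole CPUs, while a VMM
# can grant fractional shares. Numbers are illustrative only.
import math

def physical_grants(requests):
    """Physical partitioning: each fractional request rounds up to a whole CPU."""
    return [math.ceil(r) for r in requests]

def virtual_grants(capacity_cpus, weights):
    """Virtualization: capacity is split in proportion to each VM's weight."""
    total = sum(weights)
    return [capacity_cpus * w / total for w in weights]

# Four partitions each wanting 20% of a CPU tie up four whole CPUs...
print(sum(physical_grants([0.2, 0.2, 0.2, 0.2])))   # 4
# ...while four equally weighted VMs share a single CPU at 25% each.
print(virtual_grants(1, [1, 1, 1, 1]))              # [0.25, 0.25, 0.25, 0.25]
```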
Although top-tier Unix/RISC vendors (HP, IBM, and Sun) have recently delivered physical partitioning, true VMs are only just appearing (HP-UX Virtual Partitions, Solaris Containers due in 1Q03, and AIX LPARs in 1Q04). Although this technology is interesting and useful, it does not address the large number of underused Wintel servers that represent the low-hanging fruit for consolidation. As Unix recedes to high-end, low-volume, niche-platform status by 2005/06, this problem will become more acute.
Intel Virtual Machines
VMware, Connectix, and SWsoft all provide some level of virtualization for Intel hardware. Each addresses the issue of isolation and resource management for Wintel and provides manageability benefits (i.e., rapid provisioning and standard OS build, independent of the BIOS and drivers required for the physical hardware). VMware ESX is the only product that currently provides a native VM implementation on Intel servers. It runs its own virtual machine monitor natively on the hardware; the others, including VMware GSX, execute the virtual machine monitor under a “host” operating system (e.g., Windows, Linux). The weakness of the latter is the dependence on the host operating system for robust resource scheduling and the potential for the host OS to become corrupted. Because the native virtual machine monitor is a much smaller and simpler kernel than Windows, it is easier to ensure robustness and efficiency.
VMware has established itself as the leader in Intel server virtualization, by being first to market with a native Intel VMM and by establishing OEM relationships with the key Intel server vendors (IBM, HP, Dell, Fujitsu Siemens). VMware has been shipping a Windows-hosted virtual machine monitor for more than two years. The native implementation (ESX) provides greater efficiency and greater control over resource scheduling (i.e., better workload management) than the hosted implementation. In both cases, each VM is currently limited to a maximum of one CPU, which limits the applications for which it can be used. This limitation is expected to be removed in the next release (1H03).
Connectix has shipped Intel emulators for some time, but its server VM is currently only in beta testing, and it does not provide a native implementation. This makes it comparable to VMware’s GSX product, which has been shipping for two years; it is likewise limited to a single CPU per VM. Without support from the leading Intel hardware vendors, Connectix will struggle to establish itself in the market and will remain a distant second to VMware. SWsoft provides a different virtualization technology it calls “virtual environments”: a single operating system is virtualized into multiple independent environments, offered as part of a hosting solution.
Business Impact: Virtualization of Intel hardware is another important step toward building an adaptive infrastructure that can respond quickly to business needs.
Bottom Line: IT organizations that have completed the first phases of unification (co-location, storage consolidation, and some workload consolidation) should investigate use of Intel virtual machines for consolidation of small, non-mission-critical workloads onto 4- and 8-way Intel servers.