Intel Server Virtualization: Part 2 - Picking the Low-Hanging Fruit

Written by Kevin McIsaac, Contributor

META Trend: With distributed n-tier (DBMS, application, Web) server architectures standardizing on Intel, proprietary Unix (Solaris, HP-UX, AIX) will recede to high-end, low-unit-volume, legacy-platform status by 2005/06, displaced by OSs designed for Intel economics: Windows and Linux. Linux will rapidly mature and gain momentum as an ISV reference platform, moving beyond high-volume Web, technical computing, and appliance server environments into mainstream application and DBMS server roles by 2004/05. Linux server growth will initially be at the expense of Unix (2003/04), but will eventually vie for dominance with Windows (2005/06).

Through 2004/05, the failure of many server consolidation projects (based on high-end hardware) will become apparent as pioneers reach the end of the four- to five-year hardware life cycle, without substantially reducing total cost of ownership (TCO). The primary source of failure is that consolidation platforms are not linearly priced compared to individual platforms (i.e., a high-end, >8 way, partitionable server can be 2x the hardware cost of a collection of small servers). As a result, through 2003/04 consolidation efforts should be refocused around commodity-priced 4- and 8-way servers.

Through 2003, ITOs will continue to investigate consolidation of same workloads (e.g., many Exchange servers into a few) to increase server utilization and reduce hardware spending. Usually this makes sense only if the existing hardware needs to be replaced (e.g., maintenance costs increase, servers cannot support the load) or the replaced hardware can be reused. Through 2004/05, as commodity Intel-based servers displace Unix/RISC servers in the data center, ITOs will look to hard partitioning and virtualization (logical partitioning) to reduce Intel server proliferation. By 2006, ITOs will virtualize their Intel servers to reduce server count but, more important, to create an agile computing platform of lower complexity.

By 2007/08, virtual infrastructure will dramatically drive down the complexity (and, hence, cost) of provisioning and managing server farms. The fabric of this virtual infrastructure will be commodity (Intel) blade servers, networked storage, and distributed software infrastructure.

Picking the Low-Hanging Fruit
To reduce server infrastructure TCO through 2003/04, ITOs should focus on the low-hanging fruit (i.e., the large number of small, underutilized Intel servers) rather than the more complex and difficult to justify (though more interesting to technical staff) consolidation of large Unix/RISC servers.

Virtualization introduces new capabilities to commodity Intel servers that can be used to drive down complexity and hence total cost of ownership. The three major benefits of virtual machines (VMs) are the following:

  Loose coupling between the operating system/application and the hardware. The VM presents to the operating system/application stack a consistent set of hardware interfaces (system bus, BIOS, NIC, I/O controller, graphics card, etc.) that is independent of the actual hardware. An OS/application stack encapsulated in a VM can be moved among Intel servers without requiring a rebuild. This greatly reduces complexity (i.e., having only one hardware class) and improves agility (i.e., a VM can be moved to and run on any available server).
  Rapid (re)provisioning of servers. A new operating system/application stack encapsulated in a VM can be booted, suspended to disk, or shut down at will. By duplicating a VM, a new, fully configured server can be rapidly provisioned (e.g., a new instance of a Web server provisioned by an Internet service provider for each new customer).
  Logical partitioning of resources. This enables the robust sharing of a server by many operating systems/application instances. This can be used to increase utilization and drive down the total hardware expenditure without compromising robustness.
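
Because the whole OS/application stack is encapsulated in an image file, rapid provisioning reduces, at its simplest, to duplicating that file. The sketch below illustrates the idea in Python; the function name, the image-store layout, and the ".img" naming are hypothetical, and a real deployment would use the virtualization product's own cloning tools:

```python
import shutil
from pathlib import Path

def provision_vm(golden_image: Path, image_store: Path, name: str) -> Path:
    """Duplicate a fully configured 'golden' VM image so a new, ready-to-boot
    server instance exists without an OS install or application rebuild.

    Hypothetical helper for illustration only -- not any vendor's API.
    """
    new_image = image_store / f"{name}.img"
    # The entire encapsulated OS/application stack travels with the file copy.
    shutil.copyfile(golden_image, new_image)
    return new_image
```

For example, an Internet service provider could call this once per new customer to stamp out another preconfigured Web server on shared hardware.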

Testing Infrastructure
Although it is commonly thought that Intel-server-to-application ratios are 1:1 (e.g., one application for one server), once unit testing, system testing, and quality assurance (QA) servers are taken into account, the actual ratio is more like 4:1. A typical development organization requires separate, dedicated unit test, system test, and QA hardware for each application. Server virtualization enables separate VM images to be created (and stored on disk) for each application environment that can be run on a shared server when required. Because each VM is a separate and isolated environment, it can provide a robust testing environment.

Because most test systems are used irregularly and usually do not require significant resources, the number of test systems can be greatly reduced if the server hardware can be shared. That is, only VMs for the test environment required for immediate use are run, and they can be run simultaneously on a shared server. This means less hardware needs to be purchased, reducing the hardware component of the TCO.

Through the use of copy-on-write mechanisms (built into the VM or an external I/O subsystem), changes made to the environment during testing can be discarded once testing is complete, leaving the VM image in its original state. This further reduces TCO: because the same VM image can be reused over and over, the time taken to provision and manage test servers drops.
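
One way to picture the copy-on-write mechanism is a thin overlay on a pristine base image: reads fall through to the base, writes land in a per-session delta, and discarding the delta restores the original state. The Python sketch below models a VM image as a block-number-to-data mapping; the class and method names are illustrative, not any product's API:

```python
class CowImage:
    """Copy-on-write view over a base VM image (illustrative model only).

    Reads fall through to the untouched base image; writes are captured in a
    session-local delta; discarding the delta returns the image to its
    original state without re-provisioning.
    """

    def __init__(self, base: dict):
        self.base = base    # pristine golden image: block number -> data
        self.delta = {}     # this session's changes only

    def read(self, block):
        # A written block is served from the delta; otherwise from the base.
        return self.delta.get(block, self.base.get(block))

    def write(self, block, data):
        self.delta[block] = data    # the base image is never modified

    def discard(self):
        self.delta.clear()          # one call reverts all test changes
```

After a test run, `discard()` leaves the same image ready for the next run, which is why the provisioning effort is paid only once per golden image.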

Although a VM can be used for QA testing, many organizations will not accept this for applications that are not themselves deployed into production on a VM. That is, best practice requires that acceptance testing be performed on the same hardware configuration as the production systems.

Production Infrastructure
Due to the current limitations of Intel VMs (i.e., one CPU per VM, Microsoft support, performance of I/O-intensive workloads, and licensing costs [see Figure 3]), it is either impossible or inadvisable to deploy some application classes (e.g., mission critical, large database) in a VM. ITOs must set strict guidelines about where VMs should and should not be used. Workloads that are suitable include the following:

  Consolidation of several small non-mission-critical workloads onto a shared server, such as commodity infrastructure services (e.g., DNS, DHCP) and workgroup applications (e.g., an Access database).
  Legacy Windows applications that are stranded on old hardware platforms because of hardware dependencies. VMware enables the OS/application software stack to be moved to a VM on a new server by substituting drivers.
  Scale-up-limited applications (e.g., Citrix) that saturate a single OS’s limited resources (Windows GDI) can be distributed across many partitions to make better use of newer, more powerful servers.
  Highly replicated, centralized applications (e.g., small Web servers at an Internet service provider) can be rapidly provisioned on shared hardware using a VM.

Business Impact: A low-cost, agile server infrastructure is an enabler of competitive business advantage.

Bottom Line: IT organizations should focus consolidation efforts on the lowest-hanging fruit and start with the large number of underutilized Intel servers.
