
Utility computing: What killed HP's UDC?

Dan Farber: Once, HP's Utility Data Center was proof of the company's leadership in the emerging on-demand, adaptive computing, pay-as-you-go world. Today, the hardware and software combo is history.
Written by Dan Farber
COMMENTARY -- Not long ago, Hewlett-Packard's Utility Data Center (UDC) was proof of the company's leadership in the emerging on-demand, adaptive computing, pay-as-you-go world. Today, the hardware and software combo -- which allowed large-scale, heterogeneous IT infrastructure (servers, storage, networks, security devices and so on) to be aggregated into a dynamically delivered resource pool for applications -- is a footnote in computing history.

According to HP's press materials, enterprises could achieve remarkable cost savings: a 30 to 80 percent reduction in deployment costs, 20 to 30 percent in security costs, 80 to 100 percent in management costs, 20 to 40 percent in upgrade and migration costs, and 5 to 40 percent in capacity planning costs.

So why kill such a highly touted offering? I put that question to Nick van der Zweep, director of virtualization and utility computing at HP. "We brought out the UDC in November of 2001 and have gotten a lot of feedback since then. It's a big chunk to bite off -- it takes over CPUs, storage, networks. We found that people wanted the piece parts, such as server virtualization or automated provisioning, which can be snapped together on modular platforms."

In other words, cost and complexity -- which the UDC was supposed to remedy -- led to a rethinking of the UDC platform. The mainframe-style big chunk was at least a million-dollar investment, including a rack full of HP-UX and Windows multiprocessor systems, a VLAN-compatible switch, an Oracle database and other components. HP managed to sign up only a handful of customers -- such as Amadeus, Ericsson, Philips Semiconductors, DreamWorks SKG and Procter & Gamble -- over the last three years. Those customers mostly used the UDC as a managed hosting service, run by HP with HP equipment, avoiding the challenge of implementing the UDC across legacy systems. For example, DreamWorks SKG tapped into excess capacity in an HP-managed data center to render frames for Shrek 2 and other animated features on a cost-per-frame basis. Amadeus, whose software is used to book 95 percent of the world's scheduled airline seats, taps into HP's UDC service when its own systems are maxed out.

Basically, the UDC promise of greatly reducing the cost and complexity of heterogeneous IT infrastructure within corporate data centers turned out to be an illusion. The reality is that the UDC, or any other utility architecture, isn't going to turn a mishmash of legacy systems into friction-free, lower-cost IT infrastructure without open-ended consulting fees and baling wire. "Every customer we talk to about utility computing ultimately wants to apply it to a heterogeneous environment, but the UDC is too big -- they want to do it in modular fashion, on a project-by-project basis with more homogeneous components," van der Zweep said. "Getting all the data centers communicating with each other and standard billing systems worldwide is a ways off in the commercial space."

Sun CTO Greg Papadopoulos doesn't buy into the heterogeneous utility computing vision. "You can improve legacy systems with consolidation, for example, but it's unreasonable to expect legacy systems to support on-demand computing or automated provisioning. It will take more work than building a new architecture; if you have new applications with high demands, then new systems would be a better investment."

Now, HP is adapting the UDC software components to modular computing, such as blade servers. It's a pay-for-what-you-want model instead of a pay-for-what-you-use one. Instead of $1,000,000, the entry point is $500 to $1,000 to get started with a scaled-down UDC. The HP BladeSystem, for example, includes the HP Virtual Server Environment (which can manage VMware and Microsoft Virtual Server 2005 virtual machines) and HP OpenView Change and Configuration Management.

Today, utility computing is mostly managed services using homogeneous systems, not the taming and synchronization of heterogeneous systems through service-oriented architectures, Web services and grids. IBM has been selling compute cycles to customers from IBM-centric server farms. This month, Sun unveiled a pay-as-you-go service, N1 Grid Service, that allows customers to run some computing jobs on Sun equipment for a cost of $1 per processor/storage/memory per hour. The service is powered by Sun Fire servers based on AMD's Opteron processor, and SPARC-based utility pricing is in development.

Jonathan Schwartz, Sun's president and COO, describes his company's utility pricing scheme as the equivalent of the first phone "calling plan" and wants to incite a price war for compute resources. "We're engaged with a number of CIOs who've asked their teams to benchmark their internal compute grids against $1/cpu/hr. All in, all up, at least there's now a benchmark. If they buy from us, they can simply turn the bill over to their internal clients," Schwartz wrote in his blog.

The war talk is more Sun rhetoric than an imminent combat zone, but increasingly enterprises will want to tap into excess capacity from service providers. Without standards for pricing, comparisons among vendor offerings will depend more on metrics such as HP's cost per frame rendered than on how many processors you need to fire up for a task.

You can write to me at dan.farber@cnet.com. If you're looking for my commentaries on other IT topics, check out my blog Between the Lines or my column archives.
