Why HP might be your next utility company

Every big IT vendor is pushing a pay-as-you-go service plan. Although HP's version is currently limited to its SuperDome servers, the company sees a time when all computer resources (CPU, storage, networking) are connected to a grid, with customers billed on a usage basis.
Written by David Berlind

In the world of utility computing, where compute power is made available on demand much the same way we get our electricity, the hyperbole among solutions providers vying for the spotlight has reached critical mass. HP, IBM, Sun, and others are offering pay-as-you-go service plans, charging for compute cycles as though they were electricity.

HP is taking an approach that grows out of its wholly owned financial services subsidiary, charging by MIPS (millions of instructions per second, the basic unit of raw processing power) the way a power utility charges by the kilowatt-hour. Leading the way is Irv Rothman, president and CEO of HP Financial Services.

Traditionally, subsidiaries like HP Financial Services have provided buyers with financing options when it comes to acquiring or leasing expensive information technology assets. It's not unlike what CAT Financial does for buyers of Caterpillar construction equipment, or what GMAC Financial Services does for buyers of General Motors vehicles. So, it's only fitting that any new financial framework that helps technology buyers manage the total cost of technology ownership should fall within Rothman's jurisdiction.

The latest such framework to come out of HP Financial Services is what Rothman refers to as a pay-per-use model. Although not exactly an electricity model, HP's pay-per-use plan replaces a typical lease and spares IT departments from having to buy as much "system" as their peak loads would otherwise demand.

According to Rothman, the fundamentals of the pay-per-use program are straightforward. First, it's available only for HP's Unix-based SuperDome servers, and the charges are based on the percentage of CPU utilization. The equipment is installed behind the customer's firewall, and the customer must guarantee a minimum payment based on 25 percent CPU utilization on a 24/7 basis. The price structure differs for every contract because it depends on how much money HP Financial Services must borrow to configure the system, and on the interest rate at which that money is acquired.
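To make the arithmetic of that guarantee concrete, here's a minimal sketch of how a bill with a 25 percent utilization floor could be computed. The function name, rate, and per-percentage-point billing structure are my own illustrative assumptions, not HP's actual pricing formula.

```python
# Illustrative sketch of pay-per-use billing with a guaranteed minimum.
# The rate and billing structure are hypothetical, not HP's actual terms.

def monthly_charge(avg_utilization: float, rate_per_point: float) -> float:
    """Bill for the greater of actual average CPU utilization or the
    guaranteed 25 percent minimum, at a per-percentage-point rate."""
    MINIMUM_UTILIZATION = 25.0  # percent; the contractual floor
    billable = max(avg_utilization, MINIMUM_UTILIZATION)
    return billable * rate_per_point

# A quiet month still pays the floor; a busy month pays for what it used.
print(monthly_charge(12.0, 100.0))  # 2500.0 (floor applies)
print(monthly_charge(60.0, 100.0))  # 6000.0 (actual usage applies)
```

The floor is what makes this closer to a lease with an upside than to a true metered utility: HP Financial Services recovers its borrowing costs even in slow months.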

More compelling to me, however, is HP's vision for utility computing, and whether the company has plans to extend the idea to other services, other operating systems, and other business models. The answer is yes, yes and yes, and here's where it starts to get interesting.

Rothman noted that HP was already in the pay-per-use storage business. A natural next step for utility-based pricing, according to Rothman, would be into other areas of business where HP has a significant market footprint. "Right now," said Rothman, "we're looking at offering [pay-per-use] on commercial imaging and printing."

I asked Rothman if utility pricing would be available for other operating systems like Windows or Linux, and if he envisioned a time when CPU capacity would be available on a pay-per-use model over the Internet. Rothman referred me to HP's director of utility computing, Nick van der Zweep, who introduced me to a new acronym: ICOD, for Instant Capacity on Demand.

"Our vision is that some day, all computer resources - the CPU, storage, networking --- will be connected to a fabric or grid and people will be billed for it on a usage basis," said van der Zweep. "You might have your own data center, but if you don't have enough resources, you could get them from next door or somewhere on the other side of the world. The more you use, the more you pay."

In a world like that, van der Zweep said, the units of measurement might be transactions. For example, if your SAP system runs out of gas during the holiday season, you can make up the shortfall by buying processing power from an SAP-empowered grid that bills you by the number of transactions it handles for you. Or perhaps the grid bills you by the number of e-mails sent and received.

According to van der Zweep, you wouldn't even need your own data center. "Get a box, put it in your data center, and plug it in," he said. "Or, use a box in our data center. Take all the resources normally found in a datacenter and turn it into a pool of resources that others can share. This will become the new outsourcing model."

Almost a year ago, in "MIPS becoming the next commodity", I envisioned a world where an abstraction layer of APIs would be processor-agnostic in the same way that the Java Virtual Machine is agnostic to the operating system. Any processor, or pool of processors, could service an on-demand request, and some processors would be able to deliver more capacity at lower prices than others.

In the HP scheme of things, IA-64 is the unifying architecture on which all HP-supported operating systems will one day run. This includes OpenVMS, HP-UX, NSK (Non-Stop), Windows, and Linux.

I asked van der Zweep if application-specific grids were the be-all and end-all, or if perhaps, through something like Web services, the processor could be abstracted into a layer of network-based processor-on-demand APIs. That's when an interesting piece of HP's grander scheme rose to the surface of our discussion.

Van der Zweep pointed out that people still care about what operating system their applications run on because, with the exception of some applications that run on a Java Virtual Machine, most software is compiled to run on a specific operating system.

But if all of these operating systems are running on IA-64, then something like a single SuperDome server could be partitioned, physically or virtually, into systems running separate instances of any of those operating systems. In such a scenario, said van der Zweep, the utility concept can dive below the application layer and start requesting capacity at the OS level. For example, if your Oracle database is running on Linux and needs more juice, it can reach out to another Linux capacity provider and get it.
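The brokering logic behind that OS-level request might look something like the sketch below: match the requesting partition's operating system against a pool of providers with spare capacity, and take the cheapest. Every name here (the dataclass, the provider pool, the cycle units) is a hypothetical illustration; HP disclosed no such API.

```python
# Hypothetical sketch of OS-level capacity brokering: an overloaded
# Linux partition shops for spare cycles among matching providers.
# All names, units, and the provider pool are illustrative assumptions.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Provider:
    name: str
    os: str            # operating system the partition runs
    free_cycles: int   # spare capacity, in arbitrary cycle units
    price: float       # cost per cycle unit

def request_capacity(os: str, cycles: int,
                     providers: list) -> Optional[Provider]:
    """Pick the cheapest provider running the same OS with enough headroom."""
    candidates = [p for p in providers
                  if p.os == os and p.free_cycles >= cycles]
    return min(candidates, key=lambda p: p.price, default=None)

pool = [
    Provider("hpux-sd1",  "HP-UX", 50, 0.9),
    Provider("linux-sd2", "Linux", 40, 1.2),
    Provider("linux-sd3", "Linux", 80, 0.7),
]
best = request_capacity("Linux", 30, pool)
print(best.name)  # linux-sd3: the cheapest Linux partition with enough spare cycles
```

The key point the sketch captures is van der Zweep's: because every partition sits on the same IA-64 hardware, the only matching constraint left is the operating system, not the processor architecture.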

If this sounds like pie-in-the-sky stuff, HP certainly isn't seeing it that way. "Already," said van der Zweep, "we've demonstrated HP-UX, Linux, and Windows running in separate hardware partitions on a single SuperDome server. But soon, we'll have software partitions and we'll be able to split one cycle to this partition, and 10 cycles to that one."

Van der Zweep envisions a day when people get all their compute cycles from one super data center and get their storage from another. "Our vision includes intelligent provisioning, where an entire copy of a database can be migrated to the location that provides the cheapest batch processing at whatever time of day," said van der Zweep. "The database could be running in San Francisco while the storage is in London, and economics dictate where you outsource to at any given time." It sounds similar to the way voice services get billed at peak and non-peak hours.
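The peak/off-peak analogy suggests a simple decision rule: given each site's local time and rates, run the batch wherever the rate is currently lowest. The sites, offsets, and rates below are invented for illustration; only the idea of time-of-day arbitrage comes from van der Zweep.

```python
# Illustrative sketch (not HP's) of intelligent provisioning: pick the
# site with the cheapest rate for a batch job at a given UTC moment.

from datetime import datetime, timedelta, timezone

# Hypothetical sites: UTC offset in hours, plus peak and off-peak rates.
SITES = {
    "San Francisco": {"utc_offset": -8, "peak": 1.5, "off_peak": 0.6},
    "London":        {"utc_offset": 0,  "peak": 1.4, "off_peak": 0.5},
}

def rate_at(site: str, when_utc: datetime) -> float:
    """Peak rate applies during local business hours (9:00 to 17:00)."""
    cfg = SITES[site]
    local = when_utc + timedelta(hours=cfg["utc_offset"])
    return cfg["peak"] if 9 <= local.hour < 17 else cfg["off_peak"]

def cheapest_site(when_utc: datetime) -> str:
    return min(SITES, key=lambda s: rate_at(s, when_utc))

# At 12:00 UTC, London is mid-day (peak) while San Francisco is 4 a.m. (off-peak).
noon_utc = datetime(2003, 6, 1, 12, 0, tzinfo=timezone.utc)
print(cheapest_site(noon_utc))  # San Francisco
```

The hard part, of course, is not this comparison but moving the database copy fast enough for the arbitrage to pay off, which is where the latency problem below comes in.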

There are hurdles, van der Zweep admits. "Network latency is an issue. But we're working on that too."

I've known for quite some time that IA-64 was the strategic platform to which all of HP was migrating. But it's only now that I'm starting to get a clearer picture of the company-wide roadmap. Whether HP gets there remains to be seen. But the vision seems sound.

Does the vision seem sound to you? Use TalkBack to let your fellow ZDNet readers know what you think. Or write to me at david.berlind@cnet.com. If you're looking for my commentaries on other IT topics, check the archives.
