Making sense of multicore pricing

Having multiple cores on one processor has its benefits, but simpler software licensing is not one of them
Written by Sally Whittle, Contributor

Multicore processors have been around since 2005, when Intel shipped its first dual-core processor.

Since then the company, and its competitors, have standardised on quad-core processors and will soon move to six-core processors. Intel has even demonstrated 80-core processors, although it is likely to be many years before these are commercially available.

Intel's figures show that increasing a processor's clock speed by 20 percent provides a 13 percent performance boost but a 73 percent jump in power consumption. If clock speed is dropped by 20 percent, there will be a 49 percent reduction in power consumption but only a 15 percent drop in performance. This illustrates the potential of dual- and quad-core processors, according to Willie Crowe, an enterprise solutions architect with Intel.
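
To see why those figures favour adding cores rather than raising clock speed, it helps to plug Intel's quoted numbers into a back-of-the-envelope comparison. The sketch below does exactly that; the baseline of 1.0 for performance and power is simply a normalisation, not an Intel figure, and the assumption that a workload splits cleanly across both cores is the ideal case.

    # Intel's quoted figures, applied to a baseline core normalised to
    # 1.0 performance and 1.0 power (the baseline itself is illustrative).
    baseline_perf, baseline_power = 1.0, 1.0

    # Raising clock speed by 20 percent: +13% performance, +73% power.
    faster_core_perf = baseline_perf * 1.13
    faster_core_power = baseline_power * 1.73

    # Dropping clock speed by 20 percent: -15% performance, -49% power,
    # then doubling up the cores (assuming the workload splits across both).
    dual_core_perf = 2 * (baseline_perf * 0.85)
    dual_core_power = 2 * (baseline_power * 0.51)

    print(f"Faster single core: {faster_core_perf:.2f}x performance, {faster_core_power:.2f}x power")
    print(f"Two slower cores:   {dual_core_perf:.2f}x performance, {dual_core_power:.2f}x power")

On those numbers, two slower cores deliver roughly 1.7 times the baseline performance for about the same total power, while the single faster core manages 1.13 times the performance for nearly three-quarters more power.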

"The great advantage of multicore is that you get a large boost in performance in return for a relatively small boost in power consumption and clock speed," says Crowe. "With a six-core processor, you're getting six times the performance for less power and capacity."

Software vendors taken by surprise
However, although the new technology has its benefits, not least improved performance, the impact on the software industry and its customers has been disruptive in the extreme, argue analysts.

"Pricing for multicore processors is incredibly confusing, highly complex, and it's virtually impossible to compare like with like," says Clive Longbottom, a research director with Quocirca. "It's getting a little better but we still have a long way to go."

Longbottom claims software makers were simply unprepared for the changes that multicore processors would have on their business models. "When this stuff came out, the vendors were completely caught with their pants around their ankles," he says. "Nobody knew what they should be charging for customers who were running software on these new systems."

At the heart of the issue is how vendors should charge for software running on multicore processors. Traditionally, software users have paid a licence fee based on how many processors were used to run an application. So, when dual-core processors were announced, some software suppliers simply charged double for software running on dual-core systems.

This model was unpopular for a number of reasons. Firstly, vendors did not always agree on what constituted a processor. While Intel, AMD and Sun argued that a chip socket counted as a processor, IBM took a different approach, but only for certain platforms.

Customers, meanwhile, complained about paying for two processor licences when a dual-core processor did not perform twice as fast as a single processor. The result, explains Longbottom, was the emergence of staggered pricing systems, which charged for processors on a sliding scale — for example, charging 100 percent for one processor, 180 percent for two processors and 210 percent for three. "The problem was that it made no sense to people, and the figures didn't tally with what users saw."
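
A minimal sketch of such a sliding scale, using the multipliers from the example above (illustrative figures, not any vendor's published price list, and a hypothetical $10,000 base licence), shows how the arithmetic works:

    # Staggered multipliers from the example above: 100 percent for one
    # processor, 180 percent for two, 210 percent for three (illustrative only).
    MULTIPLIERS = {1: 1.00, 2: 1.80, 3: 2.10}

    def licence_cost(base_price, processors):
        # Look up the multiplier for the requested processor count.
        if processors not in MULTIPLIERS:
            raise ValueError(f"no multiplier defined for {processors} processors")
        return base_price * MULTIPLIERS[processors]

    # A $10,000 single-processor licence as the hypothetical base price.
    for n in (1, 2, 3):
        print(f"{n} processor(s): ${licence_cost(10_000, n):,.0f}")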

Charging by CPU also makes it difficult for IT managers to predict how IT budgets will change over time, adds Julie Giera, a vice president with Forrester Research. "You might buy an application to run on four cores and it makes financial sense, but, if you want to take advantage of new hardware, your application might become unaffordable running on six or eight cores," she says.

During 2006 and 2007, there was a lot of "thrashing" in the market, says Crowe, as vendors tried to work out pricing models that would make sense to customers. "I think we're in a situation where things are getting clearer and we now have a spectrum of pricing options," he says. "At one end, there are companies pricing per socket and, at the other end, companies pricing by some multiplication or combination of factors, such as millions of instructions per second (MIPS) or transactions."

That spectrum includes a range of pricing systems and models, with some vendors providing different pricing structures for different platforms, products and customers. Some vendors have built their pricing models around MIPS and how much power is available to the application. "The problem here, of course, is that it's often impossible to say what power is being used by what application, particularly in a grid or virtualised environment," Longbottom says. "Managers are paying over the odds because they don't know how many cores or what percentage of those cores are being used to run an application, and the figures could be dynamic."

With partitioning, a fraction of a processor might be responsible for running an application – but how should this be costed? Moreover, virtualisation is designed to be dynamic and scale up and down as needed. Do customers pay for maximum capacity, average capacity over time or something else entirely?
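
The gap between those options can be substantial. The sketch below compares a bill based on peak allocation with one based on average use for a hypothetical virtualised application; the hourly core counts and per-core price are invented purely for illustration.

    # Hypothetical hourly core allocation for a virtualised application over one day.
    cores_per_hour = [2, 2, 2, 2, 2, 2, 4, 8, 12, 12, 12, 10,
                      10, 10, 12, 12, 10, 8, 6, 4, 2, 2, 2, 2]

    PRICE_PER_CORE = 500  # illustrative annual licence price per core

    peak_cost = max(cores_per_hour) * PRICE_PER_CORE
    average_cost = (sum(cores_per_hour) / len(cores_per_hour)) * PRICE_PER_CORE

    print(f"Billed on peak capacity:    {peak_cost:,.0f}")
    print(f"Billed on average capacity: {average_cost:,.0f}")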

Choosing the best pricing model
As an IT manager, how can you begin to compare the various pricing models available and judge which will provide the best possible value? "It won't be easy and you'll need to do a lot of work during the specification of the architecture," warns Roy Illsley, a research director with Butler Group. "But it can be done."

The first step is to ask what the business needs from a software application. How resilient does the application need to be? How many users will access it, at what times, and how quickly does the system need to scale up or down?

This information should form the basis of a benchmarking exercise that will calculate the projected costs of running the application at the required level under a variety of architectures and with different application pricing models. "The key things to think about are how many processors you need, how they will be used, and what they will cost to run," says Crowe. "Different vendors may have different pricing for Itanium versus x86, so factor that into your calculations."
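
One way to structure that exercise is to tabulate each candidate architecture against each pricing model and compute the projected licence bill. The skeleton below does this for two made-up configurations under simple per-socket and per-core schemes; the configurations and prices are placeholders to be replaced with real vendor quotes.

    # Candidate server configurations (placeholders, not real products or quotes).
    architectures = {
        "2-socket quad-core x86": {"sockets": 2, "cores": 8},
        "4-socket dual-core Itanium": {"sockets": 4, "cores": 8},
    }

    # Two simplified pricing rules, with illustrative prices.
    def per_socket_licence(config, price_per_socket=15_000):
        return config["sockets"] * price_per_socket

    def per_core_licence(config, price_per_core=5_000):
        return config["cores"] * price_per_core

    for name, config in architectures.items():
        print(name)
        print(f"  per-socket licence: {per_socket_licence(config):,}")
        print(f"  per-core licence:   {per_core_licence(config):,}")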

A major factor to take into account is whether you are likely to be consolidating servers and virtualising applications in future. In this environment, multicore processing can become even more complex.

With partitioning, a fraction of a processor can be dedicated to running a specific piece of software and, under some pricing schemes, customers may end up paying for a full processor. If you have a 16-way server and run 'X' on four single-core processors, will you pay for four or 16 processors? Virtualisation is the key issue being faced by the industry, says Crowe: "If you're running 20 applications on a single server, you need to ensure you're not running anything in an unlicensed model. That means deploying management tools that report back utilisation so that IT managers can ensure appropriate licensing governance."

There are an increasing number of reporting tools available to monitor exactly what resources are dedicated to specific applications in a multicore processor — some supplied by software vendors, others by third parties. Crowe recommends investigating the options and using reporting tools to see how applications run in a given environment before making major new investments.
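
The sketch below shows the kind of per-application data such tools collect, using the third-party psutil library (installed with pip install psutil) purely as an example; it is not one of the vendor or third-party products Crowe refers to, and the one-second sampling window is arbitrary.

    # Sample per-process CPU use and core affinity, the raw data a licensing
    # report would be built from. Requires the third-party psutil package.
    import time
    import psutil

    procs = list(psutil.process_iter(["pid", "name"]))

    # Prime the per-process CPU counters, wait one sampling interval, then read them.
    for proc in procs:
        try:
            proc.cpu_percent(None)
        except (psutil.AccessDenied, psutil.NoSuchProcess):
            pass
    time.sleep(1.0)

    for proc in procs:
        try:
            cpu = proc.cpu_percent(None)        # percent of one core over the sample
            affinity = proc.cpu_affinity()      # cores the process may run on (Linux/Windows)
        except (psutil.AccessDenied, psutil.NoSuchProcess, AttributeError):
            continue
        if cpu > 0:
            print(proc.info["pid"], proc.info["name"], f"{cpu:.1f}%", affinity)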

Next, consider whether the applications you want to run are designed for a multicore environment. The reality is that many enterprise applications cannot support the multi-threading needed to perform at speed in a multicore environment, says Illsley. "If it doesn't support multi-threading, you won't see any great improvement on a multicore processor, and you might end up increasing your licensing costs for no reason at all," he says. "In fact, if you're running the wrong application on a multicore, virtualised machine, you may even see performance get slower. So always be sure to ask whether applications have been written for this environment, and ask to see benchmarking data."
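
A toy benchmark makes the point. The sketch below runs the same CPU-bound work once serially and once split across four worker processes (using Python's multiprocessing module, since Python threads themselves do not run CPU-bound code in parallel); only the parallel version can use the extra cores, and the workload sizes are arbitrary.

    import time
    from multiprocessing import Pool

    def cpu_bound_chunk(n):
        # Deterministic CPU-bound work: sum of squares up to n.
        return sum(i * i for i in range(n))

    if __name__ == "__main__":
        chunks = [2_000_000] * 8

        start = time.perf_counter()
        serial = [cpu_bound_chunk(n) for n in chunks]     # one chunk at a time
        serial_time = time.perf_counter() - start

        start = time.perf_counter()
        with Pool(processes=4) as pool:                   # spread across four cores
            parallel = pool.map(cpu_bound_chunk, chunks)
        parallel_time = time.perf_counter() - start

        assert serial == parallel
        print(f"Serial: {serial_time:.2f}s  Four workers: {parallel_time:.2f}s")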

Getting enterprise software to run in a multicore environment is a major undertaking, argues Simon Garland, chief strategist of Kx Systems, a financial-software supplier. "It's not like you can just sprinkle some magic multicore fairy dust over a database-management application and suddenly it runs twice as fast," he says. "You're talking about hundreds of man-hours of rewriting, and that's not something vendors can do overnight."

Kx prices its software by core, but Garland admits this isn't always easy. "They might be running on a machine with 16 cores, but want to license four — that's not easy. With Linux, for example, the OS [operating system] is forced onto core zero, so we might end up putting you on a core that's already being used. There are cores and cores. Do we want four cores on one CPU, or one core on each of four different CPUs? How will this affect utilisation and load across the hardware?"
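
On Linux, restricting a process to a subset of cores, the situation Garland describes, is done through the scheduler's affinity calls. The sketch below is Linux-only; the choice of cores four to seven is arbitrary, and core zero is avoided simply to echo the point about the operating system already using it.

    # Linux-only: pin the current process to four of the machine's cores,
    # roughly the 'license four cores out of 16' situation described above.
    import os

    print("Allowed cores before:", sorted(os.sched_getaffinity(0)))
    os.sched_setaffinity(0, {4, 5, 6, 7})   # restrict this process to cores 4-7
    print("Allowed cores after: ", sorted(os.sched_getaffinity(0)))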

Once IT managers have a good idea of which application is required, what performance is demanded (and possible), and the likely costs of buying the software on different processor architectures, Illsley advises them to do some number-crunching. "Think about what this application will cost in terms of transactions per watt," he says. "Given the fact that power and cooling are the major factors in today's datacentre, enterprises should be making this the basis of purchasing decisions."
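
As a simple illustration of that metric, the figures below are invented: divide sustained transactions per second by the power drawn to get a comparable transactions-per-watt number for each candidate configuration.

    # Hypothetical throughput and power figures for two candidate configurations.
    configs = {
        "2-socket quad-core": {"transactions_per_sec": 12_000, "watts": 450},
        "4-socket dual-core": {"transactions_per_sec": 13_500, "watts": 700},
    }

    for name, c in configs.items():
        ratio = c["transactions_per_sec"] / c["watts"]
        print(f"{name}: {ratio:.1f} transactions per second per watt")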

Opportunity amid the confusion
The current confusion around pricing for multicore systems presents large IT departments with something of an opportunity, adds Longbottom. Enterprises are increasingly ignoring the list price and simply approaching vendors with a figure of what they want to spend, he says. "If it's reasonable expenditure, then vendors will negotiate and come to some agreement; they're not using the list price, because there's no reality there."

Smaller businesses can't expect the same degree of flexibility, so Longbottom advises them to simply bypass the issue and opt for subscription-based software licences, where users pay per seat. "I think it's likely to become increasingly popular, particularly where software is hosted internally but paid for by the seat," he says.

The market is still evolving and vendors are learning as they go along, Longbottom adds. "When Oracle brought out 10g and priced it per core, the market basically said that was stupid, and Oracle changed the pricing model," he says. "Software vendors are ultimately pragmatists, and they'll do what works for customers in the end."

However, there's no getting around the fact that pricing issues are messy and will remain so for a couple more years at least, adds Giera. "Until customers have experience of buying systems under these models, it's very difficult to judge how it will affect their spending. I think vendors with the simplest pricing models will definitely gain an advantage."
