Virtualization, old and new

Tools and ideas, like virtualization and capacity management, that evolved to fit that earlier world simply don't apply in ours
Written by Paul Murphy, Contributor

There are two very different kinds of virtualization making headlines these days. The second one, resource virtualization for management purposes, seems wholly laudable. Whether used to manage storage, processing, or networking, a virtual system constructed as a kind of unified console for two or more pieces of real hardware can reduce administrative errors, provide unified logging for accountability, and make it easier to shift resources to where they're needed as user application needs change.

Breaking up a single box to manage the resources available to individual processes is generally a costly solution to a problem set we don't have anymore.

The earlier, and more common, kind of virtualization does the opposite: splitting one piece of real hardware into numerous virtual ones, each of which is then managed separately and each of which is more resource-constrained than the original machine.

This technology is really about workload consolidation as a means of driving machine utilization up, and it has roots that go back a very long way - at least as far as the origins of data processing in the 1920s. In those days electro-mechanical tabulators - really card sorting, cutting, and labelling machines - were replacing the previous, purely mechanical, generation and adding fancy new functions such as the ability to type human-readable labels onto punch cards or, more to the point, to automatically re-zero counters to switch between multiple card decks during processing.

The IBM Series 30X, introduced in 1927/8, was an expensive link in a chain of electro-mechanical processors that together got the job done. At 100 cards per minute, however, it was significantly faster than most of the gear already in place and therefore likely to be underutilized unless shared between two or more lines. The enabling technology for that was an IBM feature allowing multiple batches to be loaded at once but run separately - thereby reducing downtime and letting the machine be shared between two or more lines just as if the customer had two real, but slower, machines.

Almost eighty years later that same logic still drives the same decisions in the same processes - but now implemented using technologies like LPARs (logical partitions), VM ghosts (guest operating systems), VMware, and Xen in place of card shoes.

In the 1920s people were inventing this stuff, and one overriding idea that has lasted was simply that machine utilization percentages are an important measure of data processing management's competence. As a result, two specialties developed within the profession: job scheduling and capacity planning. Taken together, these two specialties ensured that the capacity on hand would be just sufficient to get all jobs done - but only if the people involved maintained extremely high equipment utilization rates.

All of this made perfect sense in the early data processing environment because the gear was expensive and users were disconnected from processing by the report printing and distribution processes - in other words, delaying some jobs to run at night allowed higher machine utilization without impacting users because the users relied on printed reports and didn't care when those were run off.

In the mainframe world you still see a lot of this: a dual z9 sysplex still costs around $0.20 per CPU minute to operate; the original job scheduling has been largely automated but still works just as it did then; error management in capacity planning has become largely a matter of dynamically adjusted hardware and software licensing but hasn't changed in purpose or principle; and other bits and pieces have been automated or otherwise morphed. But the fundamental methods and concerns still apply.

They don't apply in computing. The cost of a CPU minute in a Unix or PC environment is now well below a penny, and users want their results on demand - effectively restricting processing time to the 25% of the week during which they're at work and thus leaving systems functionally idle 75% of the time.

As a result, both the financial and user issues are completely different: utilization is therefore a fundamentally inappropriate measure of success, and methods used to increase utilization are therefore often counter-productive.

Suppose, for example, that you have a dedicated e-mail server with 500 users, each of whom costs the company an average of $25 per hour and each of whom waits an average of 30 seconds, twice a day, to load email from the server to the client. If that server is a dual Xeon or small SPARC or AMD machine it will look about 95% underutilized and be a tempting target for consolidation. In fact, however, your company is likely to be better off upgrading it to a machine that will look perhaps 99% underutilized, if doing so can reduce user wait times to an average of ten seconds.

Notice that everything about this argument is wrong from a data processing perspective; but here's the arithmetic: having 500 users waiting one minute per day wastes 8.3 user hours per day. At $25 per hour this comes to $208 per day, or about $56,000 per 270-working-day year. Reducing wait times to ten seconds on each load will therefore be worth about $37,000 - and the upgrade may cost you no more than $6,000, for a clear $31,000 in claimable productivity gains.
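For readers who want to check the arithmetic, here's a short sketch using the figures given above (500 users, $25/hour, two loads per day, a 270-working-day year):

```python
# Worked arithmetic for the e-mail server example above.
# Figures from the text: 500 users, $25/hour, 270 working days per year,
# two 30-second loads per day before the upgrade, two 10-second loads after.

USERS = 500
RATE_PER_HOUR = 25.0
WORKING_DAYS = 270

def yearly_wait_cost(seconds_per_load, loads_per_day=2):
    """Dollar value of the time users spend waiting on the server per year."""
    hours_per_day = USERS * seconds_per_load * loads_per_day / 3600
    return hours_per_day * RATE_PER_HOUR * WORKING_DAYS

before = yearly_wait_cost(30)   # two 30-second waits per day
after = yearly_wait_cost(10)    # two 10-second waits per day
saving = before - after

print(f"before:  ${before:,.0f}/year")          # ~$56,250
print(f"after:   ${after:,.0f}/year")           # ~$18,750
print(f"saving:  ${saving:,.0f}/year")          # ~$37,500
print(f"net of $6,000 upgrade: ${saving - 6000:,.0f}")
```

The exact totals ($56,250 saved down to $18,750) round to the $56,000 and $37,000 figures quoted in the text.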

You won't get the cash in your IT budget, and whether your company sees a real benefit will depend on user behavior, but the general logic is simple: the shift to interactive user services puts the premium on getting the service done - and means that during the 1% of the time the machine's actually in use, it's too slow.

So what's the bottom line here? Originally machine time was expensive and disconnected from user productivity. Now machine time is cheap, and closely tied to user productivity. Thus tools and ideas, like virtualization and capacity management, that evolved to fit that earlier world simply don't apply in ours.

But there's another side to this story too - and it's the better-solution side. Having a computer, no matter how cheap, burning power while merely on standby seems wasteful, and this apparent waste powers the argument for consolidation - and therefore for the older kind of virtualization.

The underlying question for consolidation, however, shouldn't be whether the target machine is under utilized, but whether users can be better served through consolidation. In general the answer to that depends on the service request arrival and response patterns - the more random the request arrival rate and the more CPUs that can be usefully applied to response generation, the better consolidation looks.
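The pooling intuition behind this can be sketched with the standard Erlang C formula from queueing theory (my illustration, not from the article): compare the average queueing delay of one shared eight-way system against the same hardware split into eight isolated single-server compartments, at an identical 70% per-server load with random (Poisson) arrivals.

```python
# A minimal M/M/c queueing sketch (an illustration, not from the article)
# of why pooling helps when request arrivals are random: one shared 8-way
# system vs. the same hardware split into 8 isolated 1-server compartments.
from math import factorial

def erlang_c(servers, offered_load):
    """Probability that an arriving request has to wait (Erlang C)."""
    a, c = offered_load, servers
    rho = a / c                                   # per-server utilization
    top = a**c / factorial(c) / (1 - rho)
    bottom = sum(a**k / factorial(k) for k in range(c)) + top
    return top / bottom

def mean_wait(servers, arrival_rate, service_rate):
    """Average queueing delay Wq for an M/M/c system."""
    a = arrival_rate / service_rate
    return erlang_c(servers, a) / (servers * service_rate - arrival_rate)

# Same 70% per-server load either way; service times average one second.
split = mean_wait(1, 0.7, 1.0)       # one isolated compartment: ~2.33 s
pooled = mean_wait(8, 8 * 0.7, 1.0)  # one shared scheduler:     ~0.11 s
print(f"split: {split:.2f}s  pooled: {pooled:.2f}s")
```

At identical utilization the pooled system's average wait is roughly twenty times lower - which is exactly why randomness in arrivals favors consolidation, and why carving the pool back into fixed compartments gives the advantage away again.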

I'm a firm believer, for example, in the use of smart display technology to get PCs off desktops and centralize processing - and not because this increases system-wide utilization, but because doing it raises service levels for users. To reconsider the email example above: a Sun 890 can load 1000 emails for each of 500 users in an average user time of perhaps two to three seconds - because the Solaris scheduler can easily handle all 500 requests even if they all arrive in the same 90 second interval.

Indeed you can think of the basic Unix round robin scheduler, in any of its Linux, BSD, or Solaris incarnations, as the ultimate virtualization machine: infinitely flexible, low in overhead, and free of any management complexities.

You can virtualize, containerize, or zone services like this on BSD or Solaris - but notice that anything you do to constrain resource availability within such a compartment reduces the probability that a user request will get immediate service, imposes more overheads than simply letting the scheduler do its job, and increases average user response time.

Imagine, for example, that you have a choice between meeting user needs with a maxed out z9 mainframe partitioned into 60 logical machines, each of which runs 100 zVM ghosts (i.e. 6,000 virtual computers) or using four racks with a total of 64 real p550Q machines and the modern kind of virtualization software: something that makes all 64 machines manageable from a single console.

With the mainframe your 6,000 virtual machines will each get an average of about 15MHz (= 54 x 1.65E9/6E3) - for about $22 million. With the rack mount p550s you have 64 independent machines each running 93 processes at an average of about 163MHz - for under $2.5 million.
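The per-VM cycle figures follow directly from the hardware counts. A quick check, where the z9 numbers come straight from the formula in the text and the 1.9 GHz per-core clock for the p550Q is an assumption inferred from the "about 15Ghz" quoted below for an eight core box:

```python
# Back-of-the-envelope cycle arithmetic for the z9 vs. p550Q comparison.
# The z9 figures (54 processors at 1.65 GHz) come from the text's formula;
# the 1.9 GHz p550Q core clock is an assumption inferred from the ~15 GHz
# figure the text quotes for one eight core box.

Z9_PROCESSORS, Z9_HZ = 54, 1.65e9
VIRTUAL_MACHINES = 60 * 100              # 60 LPARs x 100 ghosts each

per_ghost_hz = Z9_PROCESSORS * Z9_HZ / VIRTUAL_MACHINES
print(f"per ghost:   {per_ghost_hz / 1e6:.1f} MHz")     # ~14.9 MHz

CORES, CORE_HZ = 8, 1.9e9                # one p550Q box (assumed clock)
PROCESSES_PER_BOX = 93                   # ~6,000 VMs spread over 64 boxes

per_process_hz = CORES * CORE_HZ / PROCESSES_PER_BOX
print(f"per process: {per_process_hz / 1e6:.0f} MHz")   # ~163 MHz
```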

In the mainframe case each Linux application has access to all the resources within its LPAR - meaning that if only 1% of the applications are running, each will get an average of about 1.4GHz. In the rack mount case each application has access to the resources of one p550 - meaning up to about 15GHz. In the mainframe case the resources available to the application are limited by management edict and set in stone (at least during the batch run) at the LPAR level; in the rack mount case the local scheduler's limit is set by the eight core box, but grid-style virtualization software can bring more resources to bear if needed.

So what's the ultimate bottom line? Virtualization in the old sense of breaking up a single box to manage the resources available to individual processes is generally a costly solution to a problem set we don't have anymore. In contrast, virtualization in the modern sense of machine abstraction and management has the opposite effect: handing detailed resource decisions to the default scheduler and allowing management to make additional resources available on demand.
