Personally, I love apple pie - but it's a bit like talking about global temperature averages: everybody knows what these terms mean, but it's highly unlikely that two people talking share a common understanding of "apple pie", "global average temperatures" or "virtualization."
Some time ago frequent contributor John Cawley and I got into a discussion of virtualization in which I said that the technology is generally worse than valueless and he disagreed.
As it turned out, we were mostly disagreeing on definitions - here's the exchange so far:
Debate: Big Iron And Virtualization
A guest blog by John Cawley (first draft)
Some technologies are debated endlessly. Virtualization and partitioning, for example, will never lack for arguments pro or con. For every advocate, a dire skeptic. So it is now; my name is John Cawley, and I'm a guest writer debating Paul Murphy.
Here's the argument: is there a case for big iron and virtualization? Or, as Paul claims, none really; everything that needs to be done with virtualization can be accomplished with smaller, cheaper systems.
I'll make my argument; he'll make his. You decide who's right.
IBM, as a case in point, practically makes partitioning the raison d'être of the p595 and zSeries. It justifies the economics of more expensive hardware with partitioning, and it's actually an IBM example that got this debate going.
Along comes Murphy, who (amongst others) points to some marketing sleight of hand by IBM on the economics of mixing System z10 and virtualization/partitioning technologies.
It goes something like this: IBM stretches forth its hand and declares, Drink Our Kool-Aid: "The z10 also is the equivalent of nearly 1,500 x86 servers." Very impressive. When finally asked how the numbers were derived, however, IBM's answer leaves you incredulous. The math is cooked, and that's putting it charitably. Again, Paul:
The 2008 Midas Memorial Award for chutzpah in marketing goes to IBM for its claim that a single z10 server (at twenty-six million dollars before storage and licensing) can replace 1,500 Sun Fire X2100 M2 (dual core Opteron) servers ($1.8 million including storage and licensing) if the 1,500 Opterons are configured as:
- 125 Production machines running 70 percent busy
- 125 Backup machines running idle ready for active failover in case a production machine fails
- 1250 machines for test, development and quality assurance, running at 5 percent average utilization.
Twelve hundred and fifty machines at 5% utilization? From impressive to a steaming pile of marketing FUD in no seconds flat. Most of us will agree on this, and most will also agree that IBM, as a market leader, should have higher standards and recognize fantasy benchmarking for what it is. It has the whiff of desperation to it.
To be fair, let's also give IBM their due: they don't need to resort to such measures, they have some fine products that stand on their own. Their market presence is excellent, their hardware in the midrange and high-end generally first rate. AIX is ok: stable, if a bit dull and a Solaris wannabe. The cluster solution, HACMP, is...well, let's not talk about HACMP.
I'll grant this one to Paul, no contest--IBM's claim is nonsense. That said, it shouldn't detract from the reality that high-end computing has a play with virtualization. Solutions like the zSeries or a Sun E25K can be effective and appropriate combinations, and broad-brush statements to the contrary are off the mark.
I can guess Paul's argument and paraphrase it for him: ridicule IBM's mendacity, and show that the price/performance of a competing Sun solution is X times better, based on factors Y and Z. And my answer to that is: fine, whatever.
Am I ceding the argument to Paul? No. Are there conditions under which mixing high-end iron and virtualization makes sense? Yes, there are.
- The economics of a system are frequently larger than the hardware cost.
Take the database/ERP segment as an example, as it's one that I'm familiar with. This is a database-driven, application server environment--a place where big iron is common. In these places, it is not uncommon for there to be a system or set of systems that no one wants to touch. Many Unix admins know it well: it's the "critical" system, where critical means the cost of downtime exceeds your salary measured in decades. It's the Siebel/SAP/Peoplesoft system that stops Latin American production when it goes down. Or transport trucks stop all over Europe because they can't print their border papers, or your global financial exchange quits (hello Toronto), and so forth.
The beauty of ERP is that it runs the company; the problem with ERP is that it runs the company.
When those kinds of systems go down, they go with a vengeance--in banking and finance, oil and gas exploration, manufacturing, insurance, government, air traffic control, and global communication networks, to name a few.
The point I'm making is that when the cost of downtime is measured in the millions or tens of millions of dollars per hour, the purchase price of hardware is often a small factor. It has to stay up.
- It's called Enterprise hardware for a reason. If you're familiar with Sun's server lineup, you learn there is a qualitative difference between their workgroup and their mid-range (and up) computing solutions. That V490 may have the same CPU speed as the more expensive Enterprise 6900, but you can't replace DIMMs or CPUs without downtime. There are subtler things you learn over time, too: root cause analysis is commonly better on the bigger systems, as is how well the systems react to component failure. Factors that in sum add up to a noticeable difference over time.
- Virtualization and big iron solve application problems. Some issues are neatly resolved with isolation in a Solaris zone, for instance. Migrating/moving SAP instances with a dual ABAP/Java stack is a case in point: it's problematic. But put the instance in a zone, and you can just move the zone.
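The "just move the zone" step above can be sketched with the standard Solaris zone detach/attach procedure. This is a minimal sketch, not SAP-specific guidance: the zone name "sap01" and the zonepath are hypothetical, and a real migration would also need the zonepath copied to the target host (and identical or compatible patch levels on both machines).

```shell
# On the source host: stop and detach the zone.
zoneadm -z sap01 halt
zoneadm -z sap01 detach

# Copy the zonepath to the target host by whatever means fits
# (shared storage, cpio over ssh, etc.) -- elided here.

# On the target host: recreate the config from the detached
# zonepath, then attach and boot it.
zonecfg -z sap01 create -a /zones/sap01
zoneadm -z sap01 attach
zoneadm -z sap01 boot
```

The point is that the application inside the zone never knows it moved; all the host-specific plumbing stays with the zone configuration.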
SAP's memory requirements, moreover, are infamous, and therefore better served by higher-end systems. When you're predicting capacity and trying to host instances that will shortly require 64 GB or 128 GB of memory each, high-end systems are where that memory lives. And when you want to cap the CPU and memory use of such instances, stipulating that in the zone configuration is a simple process. Virtualization can wall off applications at the software layer, which is commonly cheaper than doing it all in hardware.
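For what "stipulating that in the zone config" looks like, here is a minimal sketch using the capped-memory and capped-cpu resource types (available in Solaris 10 8/07 and later). The zone name "sap01" and the specific limits are illustrative assumptions, not a recommendation:

```shell
# Hypothetical zone "sap01": cap it at 64 GB of physical memory
# and the equivalent of 8 CPUs.
zonecfg -z sap01 <<'EOF'
add capped-memory
set physical=64g
end
add capped-cpu
set ncpus=8
end
commit
EOF
```

Because the caps live in the zone configuration, they travel with the zone and can be changed without repartitioning hardware.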
- Consolidation --> Centralization of Management --> More Uptime. Show me a p5 510 or a T1000 that sports 596 GB of memory. They don't exist--yet. Generally speaking, big iron facilitates consolidation. Any junior sys admin knows that centralization of data center management is a key concept; it's why technologies like NIS were invented. Consolidation is not only a management aim, but it has a ripple effect in the data center: if you can standardize and consolidate servers, it makes centralizing complementary things like storage, licensing, performance reporting, monitoring and disaster recovery practices easier.
The argument against big iron is frequently cost. Without discounting it, cost is a relative term--only one factor amongst many that determine the economics and viability of a computing solution. It may not be the most common play, but it has a role in the market.
And here's my initial response:
umm..? I just about agree with everything here, because you don't make any kind of case for partitioning/ghosting. In fact you make the contrary case: for using containers (zones + common resource scheduling) to isolate key applications from idiots (sorry, I mean failures; no, I mean bosses; uh, no, wait, hardware; no, I mean ..you know - that guy.)
I'm strongly in favor of using containers - but these require neither partitioning nor ghosting (VM: using one OS to run others as applications) ..so where's the argument here?
And the applicable part of his response:
Ok, some points taken, but remember, the context came from your assertion (or so I read it) that using big iron with virtualization / partitioning technologies made no sense. That's a broad statement.
If I may, we're both making assumptions here. I'm lumping WPARs and Zones together; you're distinguishing by technological implementation ("VM:using one OS to run others as applications"). I thought your contention was that anything you can do with big iron and virtualization (zones,partitioning,whatever), you can do better and more cheaply with small systems and virtualization.
And most of my response:
About what I say: it's that anything you can do with ghosting on "big iron" can be done as well, and for less, with smaller machines - i.e. breaking up the memory available on a 25K to make ten 880s makes no sense at all.
However, using containers (an extension of the everyone/group/user model originally from the trusted communities setup in Solaris 8, and not virtualization in the PC/IBM sense at all) makes perfect sense, because you use the standard scheduler to handle resource allocation, thus making the entire machine available to users when and as they need it. (see http://blogs.zdnet.com/Murphy?p=295 and 307)
Oh, and using scripts to move a containerized app? That came from the N1 purchase, was built on top of the Solaris 8 trusted communities extension, and is just now being sold under the label that sells: "virtualization."
My summary (John's is below) is that we're not disagreeing here, just working out our definitions of "virtualization." - John?
There's a saying: "If writers wrote as carelessly as some people talk, then adhasdh asdglaseuyt[bn[pasdlgkhasdfasdf." I'd argue that blogging is closer to talking than writing for most people. Here you had two people in the same field - and look how easy it still was to miscommunicate, or perhaps "miscommunicate precisely." I like the solution, however: change the definitions (rules) and agree in the middle. So ends the debate.