AIX beats Solaris!

Summary: readers should understand that the larger the margin this report awards IBM on any one question, the greater the margin by which Sun actually won it.

Today's headlie (see my Dec 19th blog for the rules on these) is a three-word summary of a widely reported Unix user survey and analysis by the Gabriel Consulting Group.

Here's a typical bit:

HP and IBM basically tie on three of the above criteria - Operating System Quality, System Management Suite and Partitioning/Virtualization Features. IBM notches a clear victory in Operating System Features. While IBM and HP battle it out, Sun isn't much of a factor.

And here's the real summary:

Sun had the worst showing in the survey. They were third in almost every category, often a distant third. These results are important as they signal that Sun's immense installed base may be vulnerable to predations from their better-funded competitors. (Emphasis added)

So how did results like these come about? Well, the study gives a bit of information about its respondents, goes into detail on the weighting scheme used, and generally gives the impression of having been fairly done. Here's a sample:

We asked 197 enterprise UNIX customers about their experiences and their views of the three major UNIX vendors. Our survey was aimed at actual data center personnel, data center managers, system architects, etc., rather than those at the CIO and CFO level. We have found that people on the data center floor have a much better idea of what works well (and not so well) in their infrastructure and are generally not shy about expressing their views - both positive and negative


80% of surveyed data centers have servers from two or more vendors, with almost half of the survey population running servers from all three major manufacturers.

Unfortunately the report does not describe the survey's response in terms of either real numbers or percentages. Instead almost everything is couched in terms of a metric the authors call a Vendor Preference Index or VPI. Here's the rationale:

Since there are only three major UNIX vendors (Sun, HP, IBM), and Sun has by far the largest installed base, we normalize the data so that no vendor is advantaged (or disadvantaged) by the sheer size of their installed base. (Emphasis added)

A VPI is calculated as the number of "votes" a vendor gets on a question divided by the number of respondents who had claimed a primary corporate commitment to that vendor's Unix.

Imagine, therefore, that we have this situation:

            Number of respondents       "Votes" for this vendor
            mainly using this vendor    on the question             VPI
    Sun             140                         120                 0.85
    HP               40                          44                 1.10
    IBM              17                          33                 1.94

According to the report these results mean that IBM clobbered the competition with Sun placing a distant last.

In reality, however, 61% of the respondents preferred Sun, 22% preferred HP, and only 17% actually voted for IBM - meaning that Sun clobbered the competition with IBM placing a distant third.

What's going on here is that the VPI calculation produces a ratio that looks like a percentage but isn't one, because the two quantities - the number expressing a preference for the vendor on some question, and the number of respondents who cite that vendor as their primary supplier - are only indirectly related.

The 0.85 VPI score calculated above, for example, does not mean that 15% of Sun's users voted for someone else; in fact, it's not possible to ascribe a precise meaning to it at all.
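To make the arithmetic concrete, here is a quick sketch (Python is used purely for illustration; the numbers are the hypothetical ones from the table above, not survey data) contrasting the VPI ratio with the actual share of votes cast:

```python
# Hypothetical figures from the example table above (not survey data):
# vendor -> (respondents naming it as primary, "votes" for it on one question)
results = {"Sun": (140, 120), "HP": (40, 44), "IBM": (17, 33)}

total_votes = sum(votes for _, votes in results.values())  # 197 votes cast

for vendor, (installed, votes) in results.items():
    vpi = votes / installed       # the report's Vendor Preference Index
    share = votes / total_votes   # the share of all votes actually cast
    print(f"{vendor}: VPI {vpi:.2f}, actual vote share {share:.0%}")
```

Run it and the ranking flips: IBM tops the VPI column at 1.94 while Sun tops the vote-share column at 61%.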

What should be obvious, however, is that the VPI measure can be used to cook the books on a user survey simply by selecting a plurality of respondents from among those describing themselves as predominantly, but not solely, in the camp you want to damn and consequently as few as possible from the camp you want to praise.

What happens when you do that is driven by the fact that people generally want to see themselves as fair: meaning that someone running Sun and IBM servers in the same data center isn't usually going to answer "Sun" to twenty or more questions in a row. So if 70% of the respondents classify themselves as primarily Sun users and 15% of them mutter "IBM" in response to at least one question, the VPI calculation will make Sun look like a loser and IBM look like a winner.

Oddly, the less real information there is in those "IBM" responses, the more nearly random the distribution of their "IBM" answers to the questions will be: meaning that the less representative the responses actually are, the broader IBM's victory will appear to be.
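That effect is easy to check with a toy simulation. In the sketch below, the respondent counts, the 15% "fairness" defection rate, and the twenty-question survey are all made-up assumptions for illustration; each respondent votes for their primary vendor except for an occasional random vote tossed to someone else:

```python
import random

random.seed(1)  # reproducible illustration

# Assumed respondent pool: mostly Sun shops, few IBM shops (not survey data).
primaries = {"Sun": 70, "HP": 20, "IBM": 10}
vendors = list(primaries)
DEFECT_RATE = 0.15  # chance of a "fair-minded" vote for another vendor

def question_vpis():
    """VPI per vendor for one question: votes received / primary base size."""
    votes = dict.fromkeys(vendors, 0)
    for own, count in primaries.items():
        for _ in range(count):
            if random.random() < DEFECT_RATE:
                votes[random.choice([v for v in vendors if v != own])] += 1
            else:
                votes[own] += 1
    return {v: votes[v] / primaries[v] for v in vendors}

# Average the per-question VPI over a 20-question survey.
avg = dict.fromkeys(vendors, 0.0)
for _ in range(20):
    for v, score in question_vpis().items():
        avg[v] += score / 20

print(avg)  # IBM's average VPI comes out highest, from noise alone
```

Although nobody in this pool prefers IBM at an above-chance rate, IBM's small primary base means each stray vote moves its VPI the most, so it "wins".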

In other words, this is actually a very cool piece of work - but readers should understand that the larger the margin this report awards IBM on any one question, the greater the margin by which Sun actually won it.

Topic: IBM



  • we have a line for this.....

    ... Epistula non erubescit

    (warrants are patient; sorry for the translation, but the online dictionaries couldn't find the proper one).

    As always it's a point of view thing, but if you're doing this kind of thing with business economics, it's called fraud.
    Arnout Groen
  • Thanks!

    Thanks, Murph.

    You should send your observations to The and the other publications that didn't look as closely.

  • AIX / HP-UX beat Solaris?

    We've been using AIX 5L for some time now and although it is well rounded in terms of tools and other utilities, it has some VERY big defects more often than it should. Case in point: JFS2 file system defects that can crash the kernel without warning or bring the system to a complete halt. At least three times we had this issue after installing the latest patches / maintenance levels. Another asinine issue: JFS2 has NO quota on AIX 5.1 and 5.2! JFS2 is their high-end FS designed for throughput, scalability, heavy-duty usage, etc. Why NO quota?

    As for HP, I stopped using it at around the 11.00 release. It seemed to me clunky, awkward and lacking scheduling features, especially on Pthreads. I am sure all these issues must have been addressed by now.

    I've used Solaris 2.5.1, 2.6, 2.7, 2.8 and now 10. It has the standard quality control issues found in any large OS, but I never had a system crash from ANY application or system component malfunctioning.

    I would think that Solaris 10 beats its competitors in terms of manageability, stability and robustness. I can see, though, that if HP and IBM funded their OS teams better, they could offer more sys. management utilities.

    Too bad that the OS has become an "unimportant" part in the whole game...

    • umm.. did you miss the point?

      The headline is a "headlie": true in form (because that's the implication suggested by the GCG report) but actually a lie, because that report relies on a misleading indicator.

      Having used both, I have to agree with you - there's no contest here. Now AIX vs. NT 3.51... that might be a contest.. -:)
    • It's your perspective ... not mine

      I'm a UNIX admin w/ HP-UX, AIX, and Solaris in house. All in all I like them all. In our Datacenter, we have had better luck w/ AIX and HP-UX than Sun.

      This year alone we have had 4 Sun boxes go down, to one on AIX and none on HP-UX (shouldn't really count the HP-UX boxes though; we have been replacing our HP-UX boxes with HP Intel boxes running Linux).

      Our ratio consists of 50% Sun, 45% AIX, and 5% HP-UX. (Not counting Linux, since this is comparing the big three UNIX vendors.) Twice a Sun box went down because of hardware (E4500 and V440) and the other two times it was a kernel crash (4800 running Sol 9 and a 280R running Sol 8). The AIX box that went down was because of hardware (7013-J50 running AIX 4.2.1; I know, it's old and it will be replaced next year with a p550).
      Whatever UNIX you have, everyone is going to have a different perspective. But be happy, you are running UNIX no matter who that vendor is and that is better than some alternatives.
  • You're missing the point...

    I think you are mistaken in your analysis of our survey project. The objective of this project is to examine the degree of loyalty customers have to their major UNIX vendor and to see how the committed customers of a particular brand rate all of the major vendors on particular criteria. In other words, we are trying to gauge the health of each vendor's installed base - are they happy with their main UNIX supplier? Are they getting the best technology? Do they believe another vendor has better products? Etc., etc.

    You specifically attack our use of what we call a VPI score (Vendor Preference Index score) to show how survey respondents voted on particular criteria. You seem to believe that the raw numbers would be much more representative of customers' perceptions and beliefs. In your example, you put together a table of how you believe the scoring puts Sun Microsystems at a disadvantage. Your example, however, proves my point about how this survey is aimed at quantifying the loyalty of the installed bases - not the sheer numbers of votes for a particular vendor. In your example it was clear to see that Sun's installed base voted for Sun at a considerably lower rate than the installed bases of HP and IBM - this is the whole point of the survey and was clearly stated throughout our survey report.

    While you say that (in your example) a score of .85 for Sun doesn't mean that the 15% of Sun voters voted for someone else (IBM or HP), you are completely wrong; the Sun "voters" defected to HP and IBM on that particular issue. You explain away this obvious fact by saying:

    "The 0.85 VPI score calculated above, for example, does not mean that 15% of Sun's users voted for someone else; in fact, it's not possible to ascribe a precise meaning to it at all."

    Your assertion above is dead wrong; the .85 VPI means precisely that 15% of the Sun installed base survey participants voted for someone other than Sun. They may have voted for IBM, or HP, or even selected a "not sure" option (depending on the actual question). But it does mean that those 15% did not vote for Sun.

    You seem to believe that simply the largest number of votes should "win" a given category. Given that Sun is so dominant in the UNIX market and that most Sun voters will agree that they have made the best choice in UNIX servers - Sun would obviously win every category. Your premise that survey participants will spread their votes around "to be fair" is true only in the sense that it applies to all survey participants - not just those who have standardized on Sun. The pattern of responses shows that the vast majority of survey respondents gave credit where it was due - regardless of the prevailing UNIX in their own organization. To assume, as you do, that only Sun proponents will throw a vote towards IBM or HP "just to be fair" is just silly.

    To show the dominance of Sun in the data center, on this survey, 50% of respondents said they have standardized on Sun, 29% on IBM, and 21% on Hewlett Packard. Given that there were 197 respondents, then 99 respondents are from Sun shops, 57 from IBM shops, and 41 from HP shops. If I just used raw "votes", as you are suggesting, then the results would probably be skewed in Sun's favor just by virtue of the sheer numbers of Sun "standardizer" respondents. For example, using your own table...

            Col 1   Col 2   VPI
    Sun     140     120     0.85
    HP       40      44     1.10
    IBM      17      33     1.94

    Col 1 = number of respondents selecting the vendor as their primary UNIX. Col 2 = number of respondents voting for that vendor as the leader in this category. VPI score = number of votes divided by size of respondent pool. In the above example (which is skewed much more than my actual survey results), if as many as half (or more!) of Sun's installed base voters decided that HP and/or IBM were leading Sun in a particular criteria - Sun would still win the category! They win even though half of their installed base believes Sun is trailing their competitors and thus voted for HP/IBM.

    Since the point of this survey is to discover how the installed bases of UNIX vendors perceive their own vendors vs. competitors - the VPI measure is a solid indicator of what they think. Under your "winner takes all" methodology, the vendor with the largest number of survey respondents is almost assured of winning on all criteria.

    Finally, you write:

    "In other words, this is actually a very cool piece of work - but readers should understand that the larger the margin this report awards IBM on any one question, the greater the margin by which Sun actually won it."

    I appreciate your comment (compliment?) that this is a pretty cool piece of work. However the rest of your sentence is completely wrong and misleading. You have to keep in mind that the VPI scores are simply a measure of how the installed base of a particular vendor voted. If the installed base of Sun voted for IBM or HP, it will be reflected in the VPI score. Just as it is when IBM or HP voters select Sun as #1.

    In closing, I hope the above email helps you understand the goals of the survey. I obviously believe my methodology is solid and provides useful information. I have been forthright and clear about the survey methodology and scoring. It is your right to disagree with the methodology, results, or both, but your feelings and beliefs do not change the data the survey collected or the results.

    Dan Olds
    Gabriel Consulting Group, Inc
    • Lumping together

      Hi Dan

      I think that a skew arises because the VPI figure lumps together "loyalty to primary vendor" with "opinion of other major vendors". VPI biases towards the smaller vendors because a) there is a larger sample size from which "defection" is possible and b) each defection counts for comparatively more.

      You could avoid any chance of skew by using real percentages.

      For example, an imaginary question:

      1. Who makes the best operating system today?

      X% of "primarily Sun" customers said Sun.
      Y% of "primarily HP" customers said HP.
      Z% of "primarily IBM" customers said IBM.

      Now you can directly compare X, Y, and Z to get an indication of brand loyalty on the OS question.

      Tom Shaw
      (Sun employee; I speak for myself, not my employer)
  • AIX beats

    In the OS world, you can get anything you want (no Alice's Restaurant). Solaris is like a giant erector set. You get all sorts of (well-built) pieces and parts - MORE than anyone else! HP-UX gives you fewer pieces, but they all fit together easily and work together better. AIX has new and shiny parts that LOOK like they fit together BEFORE you buy them - but when you look at them, you have trouble, so you have to have someone else come over and show you how to do it (and the first person to come over may not know how to do it themselves). After lots of time and effort, your pieces fit together and look a lot like the Solaris and HP-UX stuff (just for more money).
    Roger Ramjet
  • Wrong Conclusion...

    These survey results look vaguely familiar - I believe I voted!

    In my shop, we have been standardizing on SUN... and I have to
    tell you... I was EXTREMELY hard on them in recent surveys.

    The reason I was extremely hard on them is because our
    applications require 100% uptime... not a single glitch.

    At the same time, I voted "my perception" of the other vendors
    (from previous experience and features observed in trade
    journals.) I was very gracious to them, as the writer indicated - I
    was trying to be fair.

    Don't get me wrong - I would never switch from SUN unless
    someone else produced equivalent rocket science. The future is:
    1) An Open OS (not just Open Source, although this is a good
    2) Virtualization without the overhead of running multiple
    operating systems instances.
    3) The 8 cores with 32 hardware threads (and more in the
    future) on a socket.
    4) A real roadmap 3 years out while delivering [early] (like SUN,
    recently with T1/Niagara)
    5) Produce/Release tools (Office, Middleware, Java, Developer
    Kits, etc.) for all OS's, to offer future migration & integration
    options (unlike Microsoft, HP, IBM.)
    6) Best in class 3rd party application debugging & tuning tools

    SUN is so strong on the rocket science, they can afford to be
    average (as good as their peers) in the market. I need for them
    to be PERFECT, however.

    All of my applications run under Solaris, AIX, and HP-UX - but
    those other platforms are risky (from an application perspective)
    because the installed base is so low.

    When my application vendors indicate 45%/45%/10% Windows/
    Solaris/Other - I would be a fool to move from Solaris when
    looking for 100% availability. More bugs are found (and fixed) in
    the higher volume Solaris, more bugs are noticed with reboots in
    Windows, and while the other OS's are maintained because of
    government contracts.

    The headline is completely uncalled for - I would have
    terminated the story headline with a "?" to raise interest instead
    of a "!" to indicate a clear cut result.

    Questions concerning whether a migration from one vendor to
    another had occurred, and whether the surveyed site agreed with
    it or not, should have weighted the VPI results.

    If results are going to be highlighted as a popularity contest, I
    will no longer vote my conscience in future surveys and will just
    stick 10's and 1's out there. I will give the analysts what they
    want to see - clear-cut decisions - and just leave them out of the
    reasoning process.
  • AIX vs Solaris, pSeries vs Sun, BMW vs Mercedes

    It all comes down to personal choice. What is important is finding the proper hardware to run the applications. Each OS has its pros and cons. Finding hardware that won't break your IT budget is the trick. Pre-owned and even new end-of-line hardware is the ticket. Savings of up to 95% off list can be realized with second-generation or older equipment. The latest and greatest hardware can be found on the used market, usually at 50-70% off list. From desktop units for home (a great way to brush up on your OS skills) to full data centers. Companies like ELARASYS INC. can save you thousands to millions over time. Systems - Memory - CPUs - Tape - Connectivity
    All available for next day delivery all over the world. Fortune 500 references along with warranties and guarantees make it a no-brainer to go used for some of your IT needs. It's like potato chips. Once you try used you will go back for more.
    Trading in old equipment for newer equipment also helps IT departments get around some red tape. can help you brainstorm many ideas on how to make your IT dollars go beyond the ghz IBM pSeries RS6000 xSeries FastT SUN CISCO HP