Last week I got into a talkback discussion with frequent contributor "bportlock" - first over a throwaway comment about the PC having been substandard at its introduction and remaining substandard now, and then, more interestingly, about ways of ranking operating systems.
The PC thing I'm not concerned about: basically, Intel is so far behind both PPC and SPARC right now that comparisons are difficult, and back in 1981, of course, IBM crippled the thing by choosing Intel's 8088 - a cut-down, 8-bit-bus rendition of the 16-bit i8086 that wasn't selling against faster, more integrated products like those from Motorola and Zilog.
The OS challenge, however, is far more interesting: how do we know that one OS is better than another?
I've thought about this for several days now, and my answer is that "better" in an OS context is like pornography: I know it when I see it, but I can't justify an objective standard.
There's some obvious stuff: "better", for example, has to be defined with respect to some set of purposes at some specific time. Thus, right now I think it's safe to suggest that Solaris is a pretty good OS but makes less sense than JavaOS for a cell phone - and those values may switch next year. After all, what makes the iPhone a pocket Mac is its ability to run pretty much the whole of Mac OS X - and apparently on PPC again too (or, at least, certainly not on Intel)!
But none of this leads to clarity. Suppose, for example, that we tried to address this by setting up a range of generic usage contexts, including temporal, hardware, and management expectations, and then ranked OS candidates according to their suitability for use within each such context?
"Suitability for use" would then be measured in terms of reliability, functionality, and total cost over time.
Nebulous terms like "reliability" would have to be given an operational meaning by a measure like this: one point per minute for each minute that each OS "feature" considered relevant to the application is available, without compromise, measured from boot-up to first failure and averaged across at least a few hundred physically different machines in actual data center use.
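A points-per-minute measure like that is easy to sketch. The fleet data below is entirely hypothetical, and the cap at time-of-first-failure is my reading of "between boot-up and first failure" - a minimal illustration, not a real benchmark:

```python
def reliability_score(machines):
    """Average points across machines: one point per minute each
    relevant feature was available without compromise, counted
    from boot-up until first failure."""
    totals = []
    for m in machines:
        cutoff = m["minutes_to_first_failure"]
        # Feature-minutes after the first failure don't count.
        points = sum(min(f, cutoff) for f in m["feature_uptime_minutes"])
        totals.append(points)
    return sum(totals) / len(totals)

# Two hypothetical machines, each tracking two relevant features.
fleet = [
    {"minutes_to_first_failure": 1000, "feature_uptime_minutes": [1000, 900]},
    {"minutes_to_first_failure": 2000, "feature_uptime_minutes": [2000, 2000]},
]
print(reliability_score(fleet))  # per-machine totals 1900 and 4000, averaged
```

In practice the hard part is the data collection, not the arithmetic - which is exactly the next problem.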
Supposing we did that, and were actually able to collect the data, we'd have one more big problem: weighting. Parameters like system reliability are not absolutes: you can't say "the system has to be reliable before I'll consider it" without making an assumption that you'll know "sufficiently reliable" when you see it - and the same thing goes for "good enough" and "cheap enough" too.
In other words, what we've really got is a two-hoop gate for each OS on each major parameter: it has to be reliable (effective, cheap) enough to be considered at all, and improvements beyond those minimums then influence the ranking according to the weighting scheme used.
Personally, for example, I value reliability much more highly than cheapness - and would weight our three factors as something like 0.6 for reliability, 0.35 for functionality, and only 0.05 for cost.
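The two-hoop gate plus my weights can be sketched as follows. The thresholds and candidate scores are invented for illustration; only the 0.6/0.35/0.05 weights come from the text:

```python
# Author's stated weights; thresholds and scores below are hypothetical.
WEIGHTS = {"reliability": 0.6, "functionality": 0.35, "cost": 0.05}
THRESHOLDS = {"reliability": 0.5, "functionality": 0.5, "cost": 0.5}

def rank_score(scores):
    """Hoop one: every parameter must clear its minimum threshold,
    or the OS isn't considered at all (None). Hoop two: surplus
    performance is combined as a weighted sum."""
    if any(scores[k] < THRESHOLDS[k] for k in WEIGHTS):
        return None
    return sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)

# Hypothetical normalized scores, higher is better (cost inverted):
candidates = {
    "OS A": {"reliability": 0.9, "functionality": 0.8, "cost": 0.6},
    "OS B": {"reliability": 0.4, "functionality": 0.95, "cost": 0.9},
}
for name, s in candidates.items():
    print(name, rank_score(s))  # OS B fails the reliability hoop
```

Notice that no amount of functionality or cheapness rescues OS B: that's the point of treating the minimums as a gate rather than just more weighted terms.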
One of the glaring issues here is that all of this has to be assessed relative to what's available at the time - and that's where I differed most with bportlock. His defence of SuSE 10 as better than SuSE 7.1 is, I think, perfectly reasonable: 10 costs more than 7.1 did, but it would rank higher on any reasonable measure of both reliability and functionality.
Add temporal relativity, however, and things get more complicated: SuSE 7.1 broke new ground, establishing new reliability, cost, and functionality thresholds for Linux, and it was generally competitive for x86 use with both the BSDs and Solaris.
But that was then, and this is now - and SuSE 10 isn't in the game with Solaris. Yes, it meets the threshold conditions for consideration, and yes, it's better than 7.1 - but it costs more than Solaris, doesn't have the track record, and lacks functionality like CoolThreads support, DTrace, ZFS, and the fault management framework.
By my criteria, and using my weighting, preferring Solaris over SuSE 10 makes sense today - but there's nothing necessarily right or universal about any of this. All I'm really sure of is that SuSE has lost ground relative to Solaris and the BSDs since 7.1 - and that thought brings us to the elephant in the room: Microsoft's Windows OSes.
The problem is simply this: evaluate any Windows OS against Linux, BSD (including Mac OS X), or Solaris on any combination of cost, functionality, and reliability, and you end up predicting a zero Microsoft market share.
So clearly there's a missing factor - and the simplistic label for it is probably something like "acceptance" or "market momentum." When SuSE 7.1 came out, Linux had media puff and market momentum going for it - this was to be the first of a new generation that would take over the world's desktops, and lots of people, including me, were excited about it. Today SuSE is showing the Novell effect: no Novell acquisition has lasted ten years, and the company's current dalliances with Microsoft and IBM suggest that SuSE won't be an exception.
Today the market momentum is all with Solaris: the BSDs have continued to improve, largely unaffected by the brouhaha around them, but Solaris 10 established the foundation for a new OS generation, and the current GPL3 mess underlines the value of Sun's CDDL.
So what's the bottom line? There are two of them: first, any ranking of multiple OSes is going to have significant subjective components; and second, I think it's fair to say that SuSE 7.1 was at the top of its class in its day, but Solaris 10 holds that position today.