Solaris was always meant for high-end servers, that is to be expected. However, Windows 2003 server often sits on top of the TPC benchmarks well above Solaris. Go figure.
The Transaction Processing Performance Council maintains a rich variety of benchmarks. Taken literally, the comment is not true of any of them, because there are no recent Sun entries for the only test in which Microsoft's results are competitive: the TPC/C order processing benchmark.
In the more complex TPC/H, for example, the SPARC/Solaris combination holds the top spot by price/performance in the 100GB and 300GB ranges, comes second to an IBM machine running SuSe in the 1000GB benchmark, and comes third to IBM (SuSe) and HP (Red Hat) on the 3000GB test.
There are no current Sun results shown for TPC/C because Sun refuses to participate, claiming that the simplified transaction structure in the benchmark is easily gamed by vendors and does not occur in real life. Here's how they put it in a 2003 reality check:
With any stagnant simple workload that exercises only a portion of database and operating system capabilities, it can be expected that many software optimisations will evolve to take advantage. The quarter-mile drag strip is certainly a test of automobile performance, but vehicles optimised to excel on this very limited course are not at all capable of meeting the widely varied needs that are fulfilled by a general purpose car or truck. Just as top fuel dragsters are not the right choice for a family car, customers often find that vendor configurations of record-breaking TPC-C systems are not appropriate for their data center.
IBM does play in the TPC/C game and just announced a score of 3,210,540 on a 64-processor IBM eServer p5 595 running AIX with DB2. This was a $29 million installation rated at $5.19 per transaction courtesy of a 47% discount.
In comparison, the system with the lowest recorded cost per transaction is a Dell PowerEdge 2800/1/3.4GHz/2M running Microsoft SQL Server 2000 Workgroup Edition under Microsoft Windows Server 2003. That machine needed only 2.5GB of RAM and a single 3.4GHz Xeon, at a system cost of $39,340, to achieve a score of 28,122 and thus a unit cost of $1.40.
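For readers who want to check the arithmetic, the unit cost is just total system cost divided by the tpmC score. A minimal sketch using the figures quoted above (the function name is mine, not TPC's):

```python
def price_per_tpmc(system_cost, tpmc):
    """Cost per transaction-per-minute-C: total system cost / tpmC score."""
    return system_cost / tpmc

# Dell PowerEdge 2800 figures as quoted in the text.
dell = price_per_tpmc(39_340, 28_122)
print(f"Dell unit cost: ${dell:.2f}")  # prints "Dell unit cost: $1.40"
```

The same division applied to the IBM result works from the post-discount price, which is why the $29 million sticker figure alone won't reproduce the $5.19 number.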
(There's an untold story here, by the way: all the other hardware vendors are reporting their results using Linux. Put Debian with MySQL on this machine, and you'd probably break $1.20 -- and an Apple Xserve using lots of RAM could probably break one dollar because its Xserve RAID array is so cheap.)
Nevertheless, these numbers look convincing -- IBM is faster, and Wintel is cheaper, just like the man said.
Except that he's begging the question: proving that Sun can't compete on TPC/C by assuming they're lying about their reasons for not participating in order to cover up their inability to compete.
So where's the evidence they're lying? Sun does participate in a lot of industry benchmarking activity -- in fact, they've just posted a long list of wins for Solaris 10 on both SPARC and AMD hardware. So what do these benchmarks -- things like enterprise ERP/SCM-based applications processing, business intelligence processing, or integrated e-commerce work -- have in common? They're at the other end of the complexity scale from TPC/C.
Basically, Sun's actions on benchmarking are consistent with their comments about TPC/C and supported by the results they get on tests like the Informatica data transformation application. The way I figure it, that means they're not lying -- and therefore that the fellow who sent me that email will be falling on his sword if he ever discovers the courage to look beyond his own assumptions and beliefs.