Selling Unix in 2009 - much easier than in 2008?

Will selling Unix be easier in 2009 than it was in 2008? The opportunities are there - depressions provide the financial pressures that move people to see better, cheaper, and faster solutions as lower risk than complex and expensive ones.
Written by Paul Murphy, Contributor

If you're selling Unix-related products to mid-range businesses you usually face enormous resistance from those wedded to the status quo - which today typically means a predominantly Windows infrastructure and staff, with an older AS/400 or HP-UX box in the background doing much of the "heavy lifting" on the manufacturing and inventory management sides of the business.

Now, in business school they would have taught you that the decision makers you deal with should ignore sunk costs, including those embedded in their own infrastructure, to consider only the future cost of current decisions; but that's not reality. In reality, customer IT management will usually do just about anything to avoid change - including bankrupting their employers if that's what it takes to avoid admitting that their previous decisions might have been wrong.

There are, however, three good reasons to believe that a much larger percentage of these guys are going to be amenable to change in 2009 than in 2008:

  • a significant number of them have become sufficiently unhappy about the Microsoft upgrade cycle to have avoided upgrading to Vista. These people are therefore not only predisposed to doubt Microsoft, but in many cases rapidly approaching (or already past) the economic end of life for the gear they have - and are therefore facing a choice between going to senior management for the money to play catch-up ball around Windows 7, or claiming that their earlier decisions were based on the expectation that technical change would yield alternatives (like yours) they're now free to pursue.

  • the world has entered a deep recession that seems certain, absent an improbable outbreak of good sense in Washington, to outlast much of the infrastructure the guys who thought they could skip a Wintel generation have in place - and which they suddenly can't afford to rehabilitate on the Wintel model.

  • on a personal basis, a key driver for getting along by going along was that even if your complaisance bankrupted your employer, other people placing the same bets would see you as a safe hire - but during a recession those alternative jobs may not be there, and many of the IT decision makers involved will therefore find their own interests much more closely aligned with those of their employers than previously.

The market for cost effective solutions is, in other words, as close to wide open as it has ever been.

The question, of course, is how cost effective is cost effective enough to matter to enough decision makers? That's going to depend on the people and the pressures they're under - but two examples, one each from the Cell and SPARC worlds, will suggest the range of server-side savings available.

First, here's a bit from Wired Magazine's coverage of Dr. Gaurav Khanna's eight PS3 "super-computer":

"The interest in the PS3 really was for two main reasons," explains Khanna, an assistant professor at the University of Massachusetts, Dartmouth who specializes in computational astrophysics. "One of those is that Sony did this remarkable thing of making the PS3 an open platform, so you can in fact run Linux on it and it doesn't control what you do."

He also says that the console's Cell processor, co-developed by Sony, IBM and Toshiba, can deliver massive amounts of power, comparable even to that of a supercomputer -- if you know how to optimize code and have a few extra consoles lying around that you can string together.

"The PS3/Linux combination offers a very attractive cost-performance solution whether the PS3s are distributed (like Sony and Stanford's Folding@home initiative) or clustered together (like Khanna's)," says Sony's senior development manager of research and development, Noam Rimon.

According to Rimon, the Cell processor was designed as a parallel processing device, so he's not all that surprised the research community has embraced it. "It has a general purpose processor, as well as eight additional processing cores, each of which has two processing pipelines and can process multiple numbers, all at the same time," Rimon says.

This is precisely what Khanna needed. Prior to obtaining his PS3s, Khanna relied on grants from the National Science Foundation (NSF) to use various supercomputing sites spread across the United States. "Typically I'd use a couple hundred processors -- going up to 500 -- to do these same types of things."

However, each of those supercomputer runs cost Khanna as much as $5,000 in grant money. Eight 60 GB PS3s would cost just $3,200, by contrast, but Khanna figured he would have a hard time convincing the NSF to give him a grant to buy game consoles, even if the overall price tag was lower. So after tweaking his code this past summer so that it could take advantage of the Cell's unique architecture, Khanna set about petitioning Sony for some help in the form of free PS3s.

"Once I was able to get to the point that I had this kind of performance from a single PS3, I think that's when Sony started paying attention," Khanna says of his optimized code.

Khanna says that his gravity grid has been up and running for a little over a month now and that, crudely speaking, his eight consoles are equal to about 200 of the supercomputing nodes he used to rely on.

Those 200 supercomputing nodes were, of course, mostly dual-core Opterons and Xeons - meaning that a single, first generation Cell processor outperforms its x86 competitors in super-computing by about 25 times. And bear in mind, too, that it costs less, inclusive of 512MB of RAM and the 60GB drive, than a single mid-range x86 CPU.

Since that's an across-the-board advantage on both cost and power - and because most of the software available for the engineering and simulation work characteristic of non-academic supercomputer use is available for Linux - any HPC manager facing cost limitations over the next year would be utterly insane not to closely investigate both IBM's Cell blade offerings and the roll-your-own option Khanna et al. illustrate.
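The arithmetic behind the "about 25 times" claim above is worth making explicit. A quick sketch using only the figures quoted from the Wired piece (no new measurements of mine):

```python
# Back-of-the-envelope check on the PS3 cluster numbers quoted above.
# All figures come from the article; nothing here is independently measured.

nodes_replaced = 200       # conventional dual-core supercomputer nodes
ps3_count = 8              # consoles in Khanna's "gravity grid"
ps3_price = 3200 / 8       # $3,200 for eight 60GB PS3s -> $400 each

nodes_per_ps3 = nodes_replaced / ps3_count
print(f"One PS3 ~ {nodes_per_ps3:.0f} conventional nodes")  # -> ~25

# A single supercomputer run cost Khanna up to $5,000 in grant money;
# the whole cluster costs less than one such run.
cluster_cost = ps3_count * ps3_price
print(f"Cluster: ${cluster_cost:,.0f} vs up to $5,000 per run")
```

In other words, the entire eight-console cluster costs less than one of the grant-funded supercomputer runs it replaces - which is the real punchline of the example.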

My second example is also from an outdated report - in this case from Sun's bmseer blog: (Note: I've rotated his table 90 degrees to fit here.)

This following test was run as a customer request to do a fair and complete comparison to judge the effect of a possible upgrade. They upgraded!

Only 24 Sun SPARC Enterprise T5220s are required to consolidate the same workload that required 251 Dell 2950 servers running Linux. Sun's CMT solution required 10.5x fewer servers.

CMT servers can easily consolidate many MySQL instances into a single server running the Solaris Operating System. No additional virtualization software was needed for this consolidation.

The Dell 2950 solution requires 10.5x more rack units than the Sun SPARC Enterprise T5220.

The Sun SPARC Enterprise T5220 also uses 8.4x less power than the Dell 2950 solution, which amounts to a yearly savings of $115,000 in electrical costs (assume $0.13/kWh).

Both configuration solutions produced the same level of performance and response time.

Customers are interested in consolidating workloads that were originally created on X64 platforms. This customer workload which consisted of a heavy MySQL database & light-weight Java application was used to compare a Dell X64-based solution to Sun's CMT-based servers.

Performance Landscape as of 10/22/2008.

Systems required to reach same performance level with same response time characteristics.

                                   Dell 2950    Sun T5220    Sun Advantage
  MySQL Instances                  700          700          -
  # Servers                        251          24           10.5 times
  Total RUs                        502          48           10.5 times
  Total Watts                      114,707      13,680       8.4 times
  Sqft Needed (200W/sqft)          574          68           8.4 times
  Annual Power Cost @ $0.13/kWh    $130,638     $15,579      8.4 times

This benchmark is based on actual customer workload. Each server configuration was driven to meet identical use, throughput, and response time characteristics.

Benchmark Description

The test simulated real-world requirements of a large organization's use of hundreds of MySQL instances. For this workload the customer's solution is architected to handle query distribution at the application layer. Databases of up to 4GB per instance are used across the 700 MySQL instances.
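One mechanical point worth spelling out: "no additional virtualization software" here generally just means each MySQL instance on a host gets its own port, socket, and data directory. A hypothetical sketch of what the per-instance configuration might look like - the ports and paths below are mine for illustration, not taken from the report:

```python
# Hypothetical sketch: consolidating many MySQL instances on one host
# without virtualization just means distinct ports, sockets, and datadirs.
# Base port and paths are illustrative assumptions, not from bmseer's report.

def instance_config(n: int, base_port: int = 3306) -> str:
    """Emit a mysqld_multi-style config group for instance n."""
    return (
        f"[mysqld{n}]\n"
        f"port    = {base_port + n}\n"
        f"socket  = /tmp/mysql{n}.sock\n"
        f"datadir = /var/mysql/instance{n}\n"
    )

# 700 instances over 24 T5220s works out to roughly 29 instances per server.
configs = [instance_config(i) for i in range(29)]
print(configs[0])
```

At roughly 29 instances per T5220 versus fewer than three per Dell 2950, the density difference in the table follows directly.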

Two notes about this:

  1. I think this report has a minor mistake: the software wasn't developed on x64, it came from the 32+4 bit pre x64 world - that's why they have 700 MySQL instances and why the test was run using an obsolete Java run-time.

  2. Those 251 Dell 2950s have dual-core 2.33GHz "Woodcrest" CPUs - the same ones, running almost the same workload, about which one Bryan Richard assured the Linux Magazine faithful that:

    [Intel's] Sudip does a excellent job of responding when someone asks him to compare Intel's quad-cores with SUN's UltraSPARC processors with CoolThreads technology, 8 cores, and 32 accessible threads on a single chip.

    b. Additionally, if the application is not very multi-threaded e.g., some batch jobs or optimizer solvers, then the Sun solution is simply not competitive as its cores are very simple and much lower performance on an individual core basis as compared with the Intel Clovertown Core 2 micro-architecture based cores.

    That, it turns out, is pretty easy to verify. AnandTech put Intel Duo Woodcrest Xeons up against SUN's 8-core UltraSPARCs back in June and Intel whipped SUN handily in Apache/PHP/MySQL processing, Java webserving, and every other category where they compared the two processors.

    Now notice, please, that my point here isn't that he's absurdly wrong, it's that there's an entire x86 sales industry devoted to making you believe stuff like this. What's important about it, in other words, is that if you think bmseer's numbers are wildly overblown it's not because you're an idiot, it's because a lot of people have put a lot of thought and effort into lying to you on the subject.

What the examples show, I think, is two relative extremes: Cell and SPARC/CMT doing the jobs they're designed for, pitted against general-purpose Xeons, to obtain better-than-order-of-magnitude savings - savings that cannot, I think, be ignored in a time of serious financial stress and uncertainty.
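The power numbers in bmseer's table can be sanity-checked from its own figures. A short sketch using only the wattages and the stated $0.13/kWh rate (my Dell result lands within about $10 of the table's $130,638, presumably a rounding difference on their end):

```python
# Sanity-check bmseer's power-cost figures from the table's own numbers.
HOURS_PER_YEAR = 24 * 365   # 8,760 hours
RATE = 0.13                 # $/kWh, as stated in the report

dell_watts, sun_watts = 114_707, 13_680
dell_cost = dell_watts / 1000 * HOURS_PER_YEAR * RATE
sun_cost  = sun_watts  / 1000 * HOURS_PER_YEAR * RATE

print(f"Dell: ${dell_cost:,.0f}/yr   Sun: ${sun_cost:,.0f}/yr")
print(f"Power ratio: {dell_watts / sun_watts:.1f}x")        # -> ~8.4x
print(f"Annual savings: ${dell_cost - sun_cost:,.0f}")      # -> ~$115,000
```

Both the 8.4x ratio and the roughly $115,000 annual saving claimed earlier check out, so whatever you think of the benchmark itself, its internal arithmetic is consistent.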

Thus the bottom line on these two examples in terms of Unix sales for 2009 may simply be that customers who can be shown that these extremes meet in the relative middle defined by their own application mix will have little choice but to buy the product.
