Mainframe Linux

Summary: The rationale for mainframe is entirely based on managing to a metric whose operation produces exactly the opposite of what users want.

Here's a bit from the introduction to a recent article by Ken Milberg under the title "Mainframe Linux vs. Unix".
Today's new breed of smaller, cheaper mainframes, paired with the Linux operating system, look like an attractive alternative to Unix on RISC or SPARC servers. Linux on the mainframe seems to give us the best of all worlds: the dependability and resilience of over 40 years of hardware innovation and a flexible, reliable open source operating system. The big question: When should companies choose Linux mainframes over Unix? This article compares the features and performance of Linux on the mainframe -- in this case, the IBM System z Server -- with Unix, in terms of availability, features and performance.

From a performance standpoint, the mainframe has a number of characteristics that are not as prevalent for its mid-range (Unix) brethren. They include:

- Dependable single-thread performance. This is essential for optimum performance and operations against a database.
- Maximum I/O connectivity. Mainframes excel at providing for huge disk farms.
- Maximum I/O bandwidth. Essentially, connections between drives and processors have few choke-points.
- Reliability. Mainframes allow for "graceful degradation" and service while the system is actually running.

We've discussed some of the benefits of the mainframe, but why Linux?

Standardization: Many companies are already running Linux on distributed platforms. For those that already do, in addition to having IBM mainframes running centralized applications, using Linux on the mainframe becomes a natural evolutionary step for their business' mission-critical applications. Virtually any application that runs Linux on Wintel computers will run on System z, with only a simple recompile. This solution provides the organization with a corporate-wide Linux adoption policy.

Consolidation: Many distributed Unix and/or Linux servers can be consolidated onto one System z machine, which leads to substantial cost savings.
For example, if a company has a server farm of 200 distributed servers, it can easily be consolidated into either one or two System z boxes, hosting 60-70 Linux servers in a high-availability environment that can scale.
Notice that "distributed" means "not on the mainframe"; that the comment about dependable single-thread performance falsely implies that Unix can't deliver this; that the I/O capability actually consists of the ability to connect many ESCON devices (small, SCSI-1 era drives that are individually slow and limited); and that the reliability claim applies to zVM running on the hardware, not to the Linux kernel running in the VM instance. Let's leave that, however, to consider these extracts from an IBM puff piece headed "CALCULO boosts DB2 performance with Linux on System z", celebrating the successful conversion of a DB2 application from zOS to zVM/Linux:
CALCULO S.A. is an IT services company based in Madrid, Spain. As a leading provider of solutions and outsourcing services to the insurance sector, CALCULO relies on continual investment in research and development to keep itself at the forefront of the industry.

Business need: Constraints on database size with its existing DB2 for z/VM platform meant that CALCULO was running out of data capacity, and would soon be unable to deal with business growth. CALCULO wanted a way to improve database performance and increase capacity without moving away from the highly secure and reliable IBM mainframe platform.

Solution: CALCULO challenged IBM to provide a proof-of-concept for running the DB2 database under Linux on the System z platform. Following a successful project, CALCULO implemented an IBM z890 mainframe with an Integrated Facility for Linux engine, and set up two z/VM virtual machines: one to run the company's core business application under VM, the other to run DB2 under Linux.

Benefits: Maximum database capacity is considerably increased, eliminating the restrictions on business growth; 90 per cent improvement in database loading times; 80 per cent speed increase in restoring from backups; speed of extraction, indexing and calculation increased by 75 per cent; performance improvements should significantly reduce offline time, increasing productivity.
Since both sources argue that mainframe Linux offers tremendous operational savings by combining open source with legendary mainframe reliability and performance, the obvious question is to what extent, if any, we can apply these conclusions to our own decision making. On that, let's start with another bit from the Calculo piece, this one revealing something about the configurations used:
IBM gave CALCULO two options: moving to DB2 for z/OS, or running DB2 under Linux on an Integrated Facility for Linux (IFL) engine in a new IBM z890 mainframe, replacing two older Multiprise 3000 H30 servers. "It used to take between eight and ten hours to reorganise the huge tables in our DB2 database, during which time the system was effectively offline and nobody could do any work," says Raul Barón. "With the z890, it only takes about one hour, which is by comparison a negligible interruption. Similarly, although backups only took a couple of hours, restoring the data took ten hours, which was a drain on productivity. The new system should be able to cut this to just over two hours."
These stories are intended to make Linux on zVM sound impressive - but both rely on a critical assumption: specifically, that the reader knows little or nothing about relative system costs and performance. In the Calculo case, for example, the gain from eight to ten hours on the Multiprise to only about an hour on the z890 works well with Barón's reference to "huge tables" to give the impression that the conversion demonstrated impressive gains on a big job - but that only works because most of us don't know anything about the machines involved.

The Multiprise H30 was a mini-390 maxing out at 60 IBM "MIPS" on ESCON (roughly comparable to SCSI) controllers and 1GB of RAM - i.e. it offered roughly the performance of a 200MHz Dell Pentium Pro. The end point is five years better: a 2004 z890 would max out at four processors offering up to about 300 IBM "MIPS" each (Power4 generation at 600 or 750MHz), 8GB of memory, and FICON connectors - offering about the same throughput as a four-way 1.4GHz Dell Xeon with a first-generation fiber channel controller.

This may seem exaggerated - equating a mainframe to an older Dell? - but check out a 2003 IBM Redbook on performance tuning Linux for the zSeries, in which the authors show that it's actually possible to get your Linux page swap rate on a dedicated 600MHz zSeries "engine" all the way up to 40MB/s - to a RAM disk! - and then provide a lot of guidance on getting your page read rate - on eight striped disks hooked up via four fiber channels! - up to a whopping 120MB/s. That's barely Pentium II/SCSI performance and pathetic enough, but then consider the dollars.

According to the tech news site the Multiprise started at about $135,000 U.S. plus about $1,080 per month in maintenance; the z890 cost ranges from about $240,000 U.S. (plus $1,500/mth for maintenance) for the base model to $1.6 million (plus $20,000/mth) for the top end version.
Costs for Red Hat Linux for the mainframe IFL were, I believe, around $40,000 per license per year when the conversion started - SuSE has consistently been cheaper and now runs around $12,000 per IFL per year.

So what did they get for the money? Not performance - a few thousand bucks for a dual 3.2GHz Xeon with 16GB and a 4Gbit FC controller would more than double the z890 on throughput. And not reliability either: the z890's RAS features aren't accessible from Linux - meaning that failures cause a Linux reboot even if the underlying hardware continues to operate.

But if the example IBM uses on its brag site for zVM/Linux is somewhat questionable, what about the two arguments Milberg makes in his article? First, he says that the mainframe is fast, reliable and easy to use, and then that Linux offers standardization and consolidation opportunities on the mainframe. Fast it's not - and while the mainframe is reliable, Linux instances on the mainframe tend to require a lot of restarts, largely because of the unique hardware environment, the high-end/low-end swap during re-compile, and the fact that the mainframer's every administrative instinct militates against managing Linux effectively. "Ease of use" is, of course, in the eye of the beholder - but I personally doubt that you can find an actual zVM user with less than ten years of "progressively more senior" commitment to it who agrees that it's easy to use.

Worse, his standardization claim is expressed in this statement: "virtually any application that runs Linux on Wintel computers will run on System z, with only a simple recompile" - and that's simply not true. Spend some time reviewing the thousands of user questions on the Marist zVM/Linux mutual support site and you'll be struck both by the appalling lack of knowledge demonstrated by many of the questioners and by the obvious complexity of the problems they face making Linux work within the zSeries environment.
Consider, for example, this (more or less randomly selected) exchange between an apparent beginner and an accepted expert:
> So, if I'm understanding this correctly, taking a backup of a running Linux system from another LPAR gives you, at best, an unreliable backup.
That's certainly how I read it.
> That means that there are only two viable alternatives:
> Shut down Linux and do the backup from another LPAR or,
Yes. The plus is that you can then restore your Linux environment the same way that you restore the z/OS or z/VM environment. Also, you can manage your tapes using your standard tape management software (which doesn't exist at all on Linux, as I understand it). The minus is unavailability of the Linux system during this time (which is shortened by some sort of "snapshot", if you have that capability) as well as it being an "all or nothing" DASD-level backup/restore, which is not useful for restoring individual files.
> Use a backup client that runs within Linux and therefore participates in its file system processing, getting all the current and correct data for the backup.
Correct. But, again, Linux does not interface to the "normal" tape management systems used by other System z operating systems.
> Is that about it?
> The problem, as I see it, with backing up from another LPAR is that there is no incremental or differential backup capability. Nor is there any selective restore capability. It's an all-or-nothing backup/restore.
His second argument is that the mainframe lets you consolidate easily. Here's a repeat of what he says:
Many distributed Unix and/or Linux servers can be consolidated onto one System z machine, which leads to substantial cost savings. For example, if a company has a server farm of 200 distributed servers, it can easily be consolidated into either one or two System z boxes, hosting 60-70 Linux servers in a high-availability environment that can scale.
Look past the internal contradictions (200 distributed servers in one place can be consolidated to two system boxes emulating 60 to 70 servers each?) and he entirely misses the practical problem: the cumulative network bandwidth on those 200 servers, even if they're a few years old, exceeds the maximum configurable bandwidth on IBM's biggest mainframe, the 2094-754, by at least an order of magnitude. In other words, even if average utilization is sufficiently low to allow the zSeries machine to handle the load (i.e. under 1% per x86 server replaced), users will be queued up for network access nearly all of the time.

What's really going on with mainframe Linux illustrates a big part of the problem with data processing. When data processing started in the 1920s its machines were expensive, people were cheap, and all processing was done well after the transactions recorded were completed - and all of that continued when data processing made the jump to digital card imagery and COBOL processing in the 1960s. As a result the key metric that evolved in the 1920s and '30s, utilization, has continued to drive management decision making - and the argument today is just what it was then: if you're going to spend $22 million on an IBM 2094 "tabulator", you'd better run the thing flat out 24 x 7.

Interactive users didn't exist when this evolved, but we have them today - and what they want is immediate response: meaning lots of resources available on demand. To give users the fastest possible response when they want it, you have to configure your gear so that there is a very high probability that the resources needed are available when the user needs them - meaning that the smaller the machine, the lower its average utilization has to be for you to meet your user mandate.

And that's the obvious bottom line on mainframe Linux: the rationale for it is entirely based on managing to a metric whose operation produces exactly the opposite of what users want.
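The utilization/response-time trade-off can be sketched with a textbook single-server (M/M/1) queue - an illustrative assumption, since real interactive workloads are burstier, which only makes the effect worse. Mean response time is S/(1 - ρ) for service time S and utilization ρ, so pushing a shared machine toward the "flat out" ideal makes response times explode:

```python
# Mean response time versus utilization for an M/M/1 queue.
# R = S / (1 - rho): S = service time per request, rho = utilization.
# Illustrative numbers only - not measurements of any particular machine.

def response_time(service_time_s, utilization):
    """Mean response time for an M/M/1 queue at the given utilization."""
    if not 0 <= utilization < 1:
        raise ValueError("utilization must be in [0, 1)")
    return service_time_s / (1.0 - utilization)

if __name__ == "__main__":
    service = 0.05  # assume 50ms of actual work per request
    for rho in (0.10, 0.50, 0.90, 0.99):
        r_ms = response_time(service, rho) * 1000
        print(f"utilization {rho:4.0%}: mean response {r_ms:7.0f} ms")
```

At 50 per cent utilization the 50ms request takes 100ms; at 99 per cent it takes five seconds - which is why a machine sized to run flat out and a machine sized for interactive response are fundamentally different machines.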

  • The perfect environment for Outstanding Linux.

    Excellent article Murph - sometimes we need to uncover idiotic comments from big corps clueless about computing.

    But unfortunately your scenario is exactly the one where a Linux Environment is absolutely outstanding and unbeatable.
    That is the right place to displace mainframes and replace old gear with x86_64 Linux.

    The point is Murph that the path of systems evolution is clearly favoring Linux.
    Not mainframes.

    Today's unbeatable performance, stability, safety and reliability systems are built around Linux.
    Dirt cheap, fast, reliable.
    How? Simple:

    Fortunately you made a point about this data processing application scenario.
    In practice today a single rack with 84 servers (yes, each 1U application server is a dual quad-core system with two identical motherboards) can have up to:

    84 servers = 168 cores (one hundred and sixty-eight cores), dual quad-core 2GHz, 1066 FSB, 8MB cache.
    16GB/node = 672GB ECC 667MHz RAM total (max 32GB/node, for a total of 1344GB if needed ...)
    36.7GB/node = 1.577TB of 10,000RPM disk total in the rack (SATA2 drives ... I know they are not for DBs, and you can actually go up to 1TB RAID 1 per node for the same price! 42TB/rack ... all RAID 1, 4 HDDs per node)
    Each node costs about US$5,000 ... US$2,500 per server!!! And what a server!
    ... the whole rack costs about $210,000.
    A complete supercomputer for $210,000 ...
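    For what it's worth, the totals above only add up if "node" is read as one 1U chassis (42 per rack, two server motherboards each) - my assumption, not the poster's wording, but the arithmetic then checks out:

```python
# Back-of-envelope check of the rack figures quoted above. Assumption
# (mine): a "node" is one 1U chassis, 42 per rack, holding two server
# motherboards each - the only reading that makes the totals add up.
nodes = 42
servers = nodes * 2          # two motherboards per chassis
ram_gb = nodes * 16          # 16GB per node
ram_max_gb = nodes * 32      # 32GB per node ceiling
cost_usd = nodes * 5000      # about $5,000 per node

print(servers, ram_gb, ram_max_gb, cost_usd)   # 84 672 1344 210000
```

    42 x 16GB gives the 672GB RAM total and 42 x $5,000 the $210,000 rack price; the 32GB/node ceiling works out to 1344GB.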

    The trick is to use Software to make a system act like a distributed system: A Mega Mainframe for this particular scenario.
    Use a couple of servers to run only Apache 2 with mod_jk in a redundant configuration, acting as a mod_jk load distributor for the user requests.
    That way all of our servers receive a fair share of requests.
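    The front end described here is a stock Apache/mod_jk arrangement; a minimal sketch might look like this (the worker names, hosts and mount point are hypothetical):

```apache
# httpd.conf fragment (module path assumed):
#   LoadModule jk_module modules/mod_jk.so
#   JkWorkersFile conf/workers.properties
#   JkMount /app/* balancer

# workers.properties: two AJP backends (hypothetical hosts) behind one
# load-balancer worker that spreads requests across them.
worker.list=balancer

worker.node1.type=ajp13
worker.node1.host=app1.example.com
worker.node1.port=8009

worker.node2.type=ajp13
worker.node2.host=app2.example.com
worker.node2.port=8009

worker.balancer.type=lb
worker.balancer.balance_workers=node1,node2
```

    Adding more backends is just a matter of appending workers to balance_workers, which is what makes this style of front end scale across a rack.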

    For a database use a huge multiGigabit input with fiber channels disks ...
    Make the application speed oriented, and that means the database acts as a database _only_: it just handles data - not a single calculation or complex stored procedure.
    Let that small Supercomputer Datacenter in a single Rack do what it does best, take care of all the application.
    Yes I know ... DB folks will not like this scenario ... no sir!
    They do not trust programmers (I also must have an eye on the code all the time ... but also know stories about db's ... tss tss tss tss mistrust is mutual folks :) :) )

    With this type of solution, Murph ... take whatever mainframe you can get ... it will slow to a crawl in comparison to these systems ...
    As always the trick is on the software ...

    A huge piece of software in that system with a decent database solution can sustain millions of requests per minute, no problem!
    (As long as the database sustains the hit ... :) :) )

    In an actual system, even a big corporate ERP application with integrated logistics, CRM, financials and so on, one does not even need that many servers - only ten would be more than sufficient for 10,000 to 20,000 simultaneous sessions, and the users would have an instantaneous reply to every single request.

    Now, I wonder how much a mainframe system would cost to have this type of responsiveness and interactivity ... I wonder ... and also what the price would be for maintenance, licensing, operation ... and last but not least, programming costs ...

    • Please stay tuned

      There's more coming on costs. Meanwhile...

      1 - you say "But unfortunately your scenario is exactly the one where a Linux Environment is absolutely outstanding and unbeatable.
      That is the right place to displace mainframes and replace old gear with x86_64 Linux."

      What's unfortunate about this? It's obvious these guys should switch - perhaps what's unfortunate is that they don't get it at all?

      2 - the 2094-754 is a 54 "engine" (64 CPU - 54 of them normally accessible) PPC machine. It costs $22.5 million or so at list, before non-OS licensing, external storage, and tape automation. 54 dual-CPU (4 core) AMDs in a rack with a couple of backend thumpers would outperform it by a considerable margin - for well under $500,000 inclusive, or around 2.5% of its cost.
      • Oh ... Ok.

        I meant unfortunately for the mainframes, not for Linux ... that is the ideal place to profit ...

        And the cost difference is really huge.
        No wonder that Red Hat/Novell can charge heavily for maintenance and support for each server...
        We wait for more on costs ... I am sure they will always be rising. And that (speechless!!!) $22 million price!!!
        Do they actually sell that stuff ???
        Wow! I am speaking to the wrong customers :)

  • You're like our local bikers.

    They're really nice people. Of course showing respect for others and their property is a good idea period, but I find myself thinking about some lectures I've gotten on the mechanics of bikes when I made a factual error in a casual conversation.

    Your mainframe bias has always been pretty clear: you're comfortable with them, you expect that any discussion of them will cover their advantages. And you expect articles to be accurate.

    This sort of article appears all the time. And the assumption you're making -- lack of understanding of relative costs -- makes perfect sense for a media outlet because people are being trained, hired and promoted all the time. It would certainly be nicer if there were more media discussion of relative costs, and I know there is some, but this approach is cheaper and safe in terms of selling to the audience most likely to read these stories regularly.

    Yes, I'm a cynic. But that's also why I feel -- knowing other readers will disagree -- that Paul Murphy on Sun and Solaris is usually Paul Murphy at his best. I kind of wish you had talked a little more about how you would have implemented this using any Unix and what you would have expected to pay, without tying it to Linux.

    Silly, I know, and you do do this sort of thing occasionally, usually with Sun. I'm in a position where Linux meets my needs better than anything else, and I get most help from you with Linux when you are talking about computer use, not about Operating Systems. You will usually phrase these in terms of Solaris or some other such system.

    I recently was invited to one night of an educational conference on Linux. While there someone told me that Ubuntu wasn't Linux, and several attendees reminded me of trekkies more than most Linux people do. All of them with Ubuntu regalia. I don't like Ubuntu, but I don't feel threatened by it. I do like Linux, but I feel threatened by the increasing amount of good marketing it's been getting which I think encourages uses for which it might not be well suited while discouraging users from exploring its strengths.

    The article sounds to me like the fruits of Canonical's campaign I saw. They are a business and entitled to make money. Any media outlet faces challenges regarding profitability, and I'm starting to read and hear essays arguing that the widespread tendency to take them public in the seventies was probably a mistake (some by conservative economists).

    You're challenging marketspeak. The author is clearly trying to get information out in a manner where it will be useful to the most profitable demographic for his employer with the least amount of work. If you want to get even, which is better than getting mad, you might spend a few more moments talking about what is good about the alternatives, rather than about what's bad about Linux.

    The thing you miss about Linux is this: it's part of a set of tools which at this point can be adapted to run commercial and mission-sensitive applications on mainframes, AND can be adapted to run tools for media activists by an Italian Rastafarian living in Amsterdam (dyne:bolic). Both are equally legitimate uses for it and neither can be "better" than the other. From that perspective this story is hilarious. But you're a mainframe guy and would simply prefer to discuss how this author is not doing his job, without, say, inviting anyone with specific knowledge of the subject to contribute good Unix solutions. Which is a good use for a blog.
    • huh? what'd I miss here?

      If you think this was an attack on Linux, you should read it again. (please).

      As for Ubuntu - that is Linux - a distribution focusing on the desktop.
      • That's not how I read it....

        ... I took his meaning as "Why are you wasting time on Linux? Why wasn't this called 'Mainframe Unix' or 'Mainframe Solaris'"?

        Just my 2p worth.
        • Close enough for a cigar.

          I'm not at my best today and cringed when I looked my comment over after pressing submit. My point is not that Murph is anti-Linux but that he gets mainframes and Solaris better than he gets Linux. Reading him about Linux is not a total waste of time (though when I finally read his comments on the SCO decision I recycled one of my more obscure Groklaw troll jokes and said get a clue), but if he understood Linux better he would see that how good it is relative to the alternatives is not really important to what, having looked it over, I will call Milberg's FUD, and I wish he'd given us a more positive answer to it.
          • Whoops!

            [i]"'m not at my best today and cringed when I looked my comment over after pressing submit."[/i]


            We all do those! I am absolutely superb at proof-reading my posts, finding no errors, posting it and then spotting all those bl**dy obvious errors!!!!!

            Oh for an "edit" button!!

          • Same here ...

            I find myself horrified by the mistakes I made ... I usually do not go that wrong typing ...
            I wish there was a way to edit the messages after submitting ...
            I think this has to do with the "adrenaline" pumped over the subjects discussed in this blog ... people get enthusiastic and writing takes a hit ...


          • Me too

            Even when I do these from a real keyboard I make lots of errors - and often fail to see them before hitting "submit".

            Have I mentioned what happens when windows people run Linux servers? :)
        • Because IBM offers Linux under zVM, not Solaris

          - and there is a mainframe Unix - UTS - which is still used in a
          few places. (The MVS Unix facility is not... :-)
  • Is it possible?

    IBM has been known to win contracts based on low Linux costs, then increase the price when Linux proves inadequate.

    Could this be bait and switch in action?

    This is IBM; it's about the money.
    Anton Philidor
    • I wanted to be sympathetic

      but instead I'll be nativistic. You're like one of those foreign officials who are commented on in an American or Western Newspaper or Station with no government affiliation (but who may occasionally hire former government employees to contribute) and immediately hop on the line to one of our governments while touting to the world this is a national conspiracy with the puppets of market forces...

      Of course IBM does bait and switch. And I have problems with Milberg's article for a website whose editor once told me they had a copyreading position open in Massachusetts. And it's about the money. Search Enterprise Linux is not IBM. If Search Enterprise Linux is IBM then not only is PJ IBM, Bill Gates is IBM, Paul Murphy is IBM You are IBM -- and listen to the end of a Car Talk podcast before we say that I am IBM, please. They might heartily agree with any of those creative qualifications if applied to me.

      Milberg's article is not negative about Unix. I did call it FUD because I understand what Paul said about TCO and think it constitutes dishonesty. I don't see IBM behind the dishonesty though. The people I know who read these sites most regularly are the people most interested in getting the information they offer -- in this case newly-trained, newly-hired or newly promoted to a responsible position. By, for example, omitting a few trivial shared capabilities between UNIX and Linux (which I feel he does to the extent that it affects his assessments, though he does a few other things to earn the dishonest tag), he's reinforcing the possible prejudices of people who would go to a site called Search Enterprise Linux (though to be fair they have run better and fairer articles in the past) and cuts down on the research he has to do to ensure that every statement he makes is accurate. IBM has nothing to do with this hack job: Search Enterprise Linux is part of a group of websites devoted, essentially, to monetizing the spreading of information about several OSes. They are reasonably responsible, but in today's world, the bar for that is pretty low.
    • Get a grip...

      [i]"This is IBM; it's about the money."[/i]

      ... and M$ is not about money? They never bait and switch? If you want to see M$ in "low cost" mode, go look at Windows Starter Edition.

      C'mon Anton, you can do better than that!
      • Only one villain?

        Why should identifying IBM's bait and switch tactic exonerate any other company from the charge of using the same scheme?

        Microsoft probably has used bait and switch, though only in a moment of weakness, long regretted and publicly apologized. Must have ruined careers when discovered.

        But Starter Edition is not an example. First, the limitations were announced. And second, they are unlikely to crimp many of the people likely to obtain the software. Would a Starter Edition purchaser be likely to have hardware which can support having more than four applications open and working simultaneously?

        No, Microsoft has expressed deep regret for the company's past flawed behavior, shortly before celebrating new outrages, but Starter Edition needs no apology.
        Anton Philidor
        • No

          Why should identifying Microsoft's bait and switch tactic exonerate any other company from the charge of using the same scheme?

          IBM probably has used bait and switch, though only in a moment of weakness, long regretted and publicly apologized. Must have ruined careers when discovered.

          No, IBM has expressed deep regret for the company's past flawed behavior, shortly before celebrating new outrages.


          Try something more substantive next time. IBM is no more guilty than MS.
          • Excessive praise is exaggeration.

            We know that Bill Gates used some... uncreditable tactics in building Microsoft. (Another good basis for comparing him to John D. Rockefeller.) So writing as if he were humbly regretful can be amusing. Or not.

            But my main point was that criticizing IBM does not exonerate Microsoft, and that appears to have been agreed.
            Anton Philidor
        • Time to bash Microsoft

          In the last thing I commented on you responded to my remark by pointing to the money M$ has spent on R&D. I know we argued about Disney once and I recently had to list my favorite animation films, which resulted, even in my comments on Fleischer's Gulliver's Travels, in my saying several times that Disney knew how to spend money. Bill Gates does not. Your comment was closed to new responses so I shall say here that when I say I think we need more market research on operating systems I mean from researchers willing to not care whether the desktop is the best paradigm -- as Gates has said when criticizing OLPC -- and to not set ANY technical preconditions on satisfying user demands. In response to developer complaints, Nintendo created the less-powerful GameCube. While the era of the PlayStation dawned, Nintendo spent the time focusing on its new strategy to use the simplest technology to deliver a satisfying user experience, and now Wii provides more of a challenge to Sony and M$ without compromising those core principles.

          I don't know if the guy who told one ruined competitor who asked him what his mistake was "We're evil" is still working there. It doesn't matter. To hear all this public piety about the place is precisely disgusting because they can be ruthless competitors and they make no secret of the fact that when they feel threatened by something they are going to go full out on it.

          To hear "Linux is evil" from them is precisely disturbing because in the sense they meant it there are so many Linuxes. Vs. Starter edition you have Puppy, you have PCLinuxOS, you have Ubuntu, Kubuntu and Xubuntu... In that context I have used a Starter Edition machine and would never compare it to a Linux. On its own terms it stinks. It is barely functional and driven by Gates's vision that the only people who matter can afford the latest hardware. Some multigeneration removed copy of Win95 or 98 is FAR better for its intended market, because that software is more functional.

          I wouldn't put it past them to bait and switch. I don't think it's necessary to accuse them of that. They were convicted of monopolistic practices, the judge ordered a breakup and they had him thrown off the case. Then someone was elected who has been acting like Gates's lap dog as he shares his largesse with the Republican Party. So yes it's credible that Starter Edition is bait and switch and not that anyone lost their jobs if bait and switch was discovered -- not anyone involved.

          The bottom line though is that M$ is where it is because they are driven by marketing rather than by excellent coding, and do pose a threat to us precisely because they are working to make every other model for software development than Open Source unviable. As far as I was concerned, before the "Get the Facts" campaign began, they had already hurt Linux and proof of this was forcing me to turn to it to meet my needs. It serves too many gaps in M$ products and more are opening up as all their software gets less functional (without the latest hardware).

          In attacking Linux they are attacking a whole market segment. Get that through your head. If you are not exactly part of the market segment, this does not mean that schools, small engineering firms and other organizations who need efficient service but may not be able to afford the latest hardware don't make major contributions to our global economy. Their terminology is off, their thinking nearly non-existent, and the exact target they describe does not exist, but the fact is that their efforts to tie down control of the OS market and control of development for the OSes that exist is an attempt to hamstring ALL independent software development -- already they have got Mono inserted into Gnome, which runs on many unix X servers.

          Gates spends money on influence and market control. He does not spend money on his product, except in ways to force us to run our machines his way. Whether Starter Edition is bait-and-switch, it's Ubuntu's evil twin. The Linux distro is so much a Linux you can customize it into whatever you want with a few simple commands. The Windows version is so much a Windows they MAY offer improvements if someone convinces Redmond.
    • Well ...

      In this one Anton we agree.
      IBM is _the_ money-extraction machine of the corporate world.
      They are not alone in this, and they are also known to make very good solutions.
      So one should not confuse the overpriced, under-performing hardware with their consulting services ... or more correctly their ability to do the right sub-contracting ... (let's keep it at that)

      Of course, Anton ... usually with Linux or mainframes their systems do not crash when the customer most needs them ...
      They also have huge clout over Big Customers. And it is likely to remain like that for a long time.
      I once thought about why that is. Why is it that once you mention the magic letters IBM ... the doors open?
      I think that big companies have the most to profit from IT systems, so the return on capital is something they do not care about.
      Or, to be more precise ... they are long-term businesses, and the bigger the business the bigger the IT returns on efficiency to the corporation.
      So the point is they tend to look, from long ago, to companies that can provide them with an assured long-term solution.
      Still today IBM supports AS400 systems from 10 years ago with performance that is much slower than a common $1000 laptop ...
      But for some companies this is exactly what they want ...

      • Don't confuse ...

        ... the delivery van with the sports car.

        [i]"Still today IBM supports AS400 systems from 10 years ago with a performance that is much slower then a common 1000$ laptop ..."[/i]

        If I'm moving house, a slow lorry is more effective than a Ferrari. Fast is not necessarily better.

        The AS/400s may be slower than a PC but the internal bandwidth is huge and data can be moved very efficiently. Because it does not waste its time messing about with GUIs, even a slower processor can give very good performance.