Unix vs Wintel/DP: the cost/benefit issue

Summary: An imaginary wall-to-wall Wintel/DP to Unix conversion produces what? Operating cost savings and a dramatic turnaround in IT organizational posture: from blocking force to business enabler.

I've talked to middle managers who sincerely believe that a PC costs about $300 and a 2,400 user system should therefore cost about $720K - before volume discounts.

Similarly, there are DP people who'll tell you that a single "free" Linux license - at $40K per year - will support thousands of instances on one IFL, serving 2,400 desktops entirely on freeware.

And, of course, everyone knows that a Unix server costs a couple of million bucks to put in - and then fails all the time despite the hundreds of thousands it eats in annual consulting and support fees.

Such views are delusory - in reality, fixed infrastructure costs for Unix and Windows favor Unix (no surprise given that several free Unix systems run on Wintel hardware), while getting vaguely comparable performance on zOS/IFL costs at least an order of magnitude more.

The most surprising thing about this, however, isn't the nonsense many managers believe but that the cost differences we consider important at the personal level and within IT really don't matter much for larger organizations: what counts at the enterprise level are the operating costs, risks, and limitations imposed by whatever infrastructure is used - not the start-up cost of that infrastructure.

Thus the mistake that matters when people believe obvious nonsense about Wintel/DP costs has little to do with the specific errors they're making and nearly everything to do with the hidden assumption that systems architecture is an IT issue with no significant consequences for the organization. Basically the problem is the belief that a computer is a computer is a computer, and therefore that the choices between them may entertain the IT troops but are of little strategic relevance in the boardroom.

Imagine, for example, that we have a 2,400 user government organization in which:

  1. about 30 people in the financial review unit think they absolutely must have the very latest Microsoft Office desktops;

  2. about 20 people regularly use Adobe publishing and pre-print tools;

  3. about 240 need regular access to highly customized PeopleSoft financial and HR tools;

  4. about 1,400 have some need for basic WP/Spreadsheet and presentation software;

  5. about 2,200 need daily access to a custom tracking and scheduling application;

  6. about 2,400 claim to need browser access and email;

  7. there are about 70 other applications, many of them locally grown spreadsheet, BASIC, or SQL-Server applications with the usual lack of documentation, backup, or auditability. On average each of these is thought to have about eight users, one of whom has largely rebuilt his or her job around servicing it;

  8. there are an estimated 15 to 25 data transfer and/or remote identification "applications" (many are scripts) that are uniformly said to be mission critical but are documented only with respect to their original forms - not as they are today; and,

  9. there are an unknown number of third party applications running on various servers really known only to those directly involved with them - and many of those people detest IT and will neither willingly report what they're doing nor participate in a change management program.

Right now no user manager, however senior, can make more than truly minor procedural or program changes without working through a formal IT assessment, technology change management, and (often) budget management process first.

The actual cost of the infrastructure used to meet these needs with Wintel/DP is largely unknown - the book values are pure fiction. The outsourcer managing the desktop evergreen program values each desktop at about $1,400 inclusive of core desktop licensing and first level support, but exclusive of server infrastructure and custom applications, including PeopleSoft. There are about 160 physical servers in data center racks, with recently purchased units averaging nearly $9,400 - much of it in licensing. There's an external shared data pool with its own servers - and nobody in the data center seems to know how many network controller racks and/or devices exist because these are administered by yet another third party. Bottom line? The senior IT people give $3,000 per user as a working estimate (about $7 million in total) and, while that seems a little high for current replacement cost given government-wide licensing for some Microsoft and other products, it's probably in the ballpark.
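
For anyone who wants to check the arithmetic, here's a throwaway sketch; the dollar figures are the ones quoted above, and applying the $9,400 average to all 160 servers is my own simplifying assumption:

```python
# Back-of-envelope reconstruction of the current Wintel/DP figures quoted above.
desktops = 2_400 * 1_400       # outsourcer's per-desktop valuation
servers  = 160 * 9_400         # assumes all 160 match the recent-purchase average
known    = desktops + servers  # excludes PeopleSoft, shared data pool, network gear
estimate = 2_400 * 3_000       # senior IT's $3,000-per-user working figure

print(f"known components ≈ ${known:,}")     # ≈ $4,864,000
print(f"working estimate ≈ ${estimate:,}")  # ≈ $7,200,000 - "about $7 million"
```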

If I were to imagine replacing all this with a Solaris/SPARC system I'd probably think in terms of four identical units - largely because the organization is about evenly spread across contiguous floors in two nearly adjacent buildings and the idea of putting systems into opposing top and bottom corners in both buildings appeals to me. Each would have four T5440 servers with a 7410 (dual controllers, Flash, 40TB, 4 x x86, 128GB) data store in a rack with a UPS, a high-speed router, and a 64GB, 4-processor x86 Wintel server - and each would list at a bit under $600K for a net 20TB of fully mirrored data storage, 144 network channels, and 1024 virtual 1.4GHz processors accessing 512GB of RAM.
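
Those per-rack totals check out if you assume fully configured T5440s - four UltraSPARC T2 Plus sockets, eight cores per socket, eight threads per core, and 128GB of RAM per server (my reading of the configuration, not spelled out in the text):

```python
# Per-rack capacity check, assuming fully configured T5440s.
servers_per_rack   = 4
threads_per_server = 4 * 8 * 8   # sockets x cores x threads/core = 256
ram_per_server_gb  = 128
raw_storage_tb     = 40          # 7410 data store, fully mirrored

print(servers_per_rack * threads_per_server, "virtual processors")  # 1024
print(servers_per_rack * ram_per_server_gb, "GB of RAM")            # 512
print(raw_storage_tb // 2, "TB net after mirroring")                # 20
```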

Users would get 22" Sun Ray 3 displays - at about $2.7 million in total including smart cards and software.

All the existing networking gear, including cabling, would go - to be replaced by an all-optical system with no more than three devices in the path to the user: about $300K.

About 30 users would get their own little PC network - about $70K inclusive (about half in licensing). Another 20 would get Macs - about $80K inclusive (about two thirds in licensing). In both cases the "outside" backup processor would be in the nearest data center rack and double as the primary processor for other users needing occasional access to PC or Mac software.

The primary application was first developed in COBOL/IDMS in the 70s, converted to C/Forms on a VAX in 1991 and then almost immediately to Adabas/Natural at the government's outsourced data center, currently runs as a Windows client application accessing SQL Server, and has been the subject of numerous successful redesign and redevelopment projects - none of which reached production. Today it looks almost trivial to do as a web application: about thirty data entry screens, perhaps 200 embedded functions, a dozen or so reports, and no known live interfaces to other applications.

In moving this to Solaris/Sun Ray I'd probably get two independent teams to convert it to PHP with MySQL - say $60K each - and then quietly start a re-examination of what the thing is really supposed to do and how it does it, preparatory to looking for a commercial or open source replacement. (Although, of course, if I were doing this for real, I'd make porting it, with guarantees and at no nominal cost, a condition of the deal with the Solaris/Sun Ray vendor.)
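
The conversion itself would be PHP with MySQL as described above; purely to illustrate the shape of one of those thirty data entry screens and one report, here is a minimal hypothetical sketch - written in Python/Flask with SQLite rather than PHP/MySQL, with invented table and field names:

```python
# Hypothetical sketch of one data-entry screen plus one report, done in
# Python/Flask with SQLite purely for illustration - the text proposes PHP/MySQL,
# and the table and field names here are invented, not taken from the application.
import sqlite3
from flask import Flask, request

app = Flask(__name__)
DB = "tracking.db"

def db():
    conn = sqlite3.connect(DB)
    conn.execute("""CREATE TABLE IF NOT EXISTS work_item
                    (id INTEGER PRIMARY KEY, title TEXT, due_date TEXT, owner TEXT)""")
    return conn

@app.route("/item", methods=["POST"])
def add_item():
    # One of the ~30 data entry screens: validate a form post and store it.
    title = request.form.get("title", "").strip()
    if not title:
        return "title is required", 400
    with db() as conn:
        conn.execute("INSERT INTO work_item (title, due_date, owner) VALUES (?, ?, ?)",
                     (title, request.form.get("due_date"), request.form.get("owner")))
    return "saved", 201

@app.route("/report/overdue")
def overdue_report():
    # One of the dozen or so reports: items past their due date.
    rows = db().execute(
        "SELECT title, owner FROM work_item WHERE due_date < date('now')").fetchall()
    return {"overdue": [{"title": t, "owner": o} for t, o in rows]}
```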

Each of the hidden applications would have to be found, documented, and evaluated during the conversion. In practice most are trivial and conversion, while possibly traumatic for those most closely dependent on control of them, is fairly easy. Accommodations can, however, be made if warranted - remember, each rack will contain an x86 server able to provide Wintel support if and where needed.

Most of the other software is free and the PeopleSoft stuff is expected to transfer essentially unchanged despite the extensive customization effort ten years ago. As a result, and allowing about another quarter million for "surprise" software licensing and adaptation requirements, the whole mess should come in at a bit under six million - very roughly in the same ballpark as the current Wintel estimate.
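
Rolling up the replacement numbers itemized above gives that "bit under six million" directly - a quick sketch using only the figures already quoted:

```python
# Roll-up of the replacement costs itemized above, in thousands of dollars.
costs_k = {
    "four racks (T5440s, 7410 store, UPS, router, x86 server)": 4 * 600,
    "Sun Ray 3 displays, smart cards, software":                2_700,
    "all-optical network replacement":                            300,
    "PC island for 30 users":                                      70,
    "Mac island for 20 users":                                     80,
    "two independent PHP/MySQL conversions":                   2 * 60,
    "'surprise' licensing and adaptation":                        250,
}
print(f"total ≈ ${sum(costs_k.values()):,}K")   # ≈ $5,920K - "a bit under six million"
```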

So where are the differences if they're not in capital cost? They're all in how using these two systems drives organizational behavior.

With Solaris/SPARC and Sun Ray:

  • users get bigger, clearer, faster displays - that make nearly no noise, produce nearly no heat, very rarely fail, and only get replaced about every ten years.

  • users get access to more software - it's actually possible to run Unix, Mac, and PC applications on the same screen at the same time (although I believe cut and paste only works between two at a time - not sure if that's changed recently).

  • users get much faster and more predictable system response; more storage; almost total reliability, near total freedom from data theft, loss, or leakage risks; and the ability to access their personal desktop from just about anywhere at just about any time.

    Two notes on this:

    1. In the real organization this is very loosely based on, roaming access is considered critically important by many users, but is not allowed - nominally for security reasons. With the imagined Unix architecture, providing secure access to key data via iPads and/or iPhones is trivial; and,

    2. if you time only the PC, the PC is usually a bit faster than the Sun Ray simply because 3GHz is faster than 1.4GHz - but if you time the system, the Sun Ray user wins because the system as a whole is far simpler and unbottlenecked. The classic example here is email: with Sun Ray, 2,400 users hitting their email at 8:16am would see no significant degradation, whereas today the 30 or so Exchange servers combine with network limitations to produce long delays during the first twenty minutes or so of every working day.

  • users are freed from operational responsibility for their desktop computers - including complete freedom to ignore the entire panoply of PC style "security" threats.

    This is a much more important issue than it may seem. At present PC users everywhere are subjected to daily doses of paranoia on "security" issues - and some in this imaginary organization we're dealing with here are concerned that data leaks embarrassing to the government will be traced to their PCs while others worry that perceived enemies could do things like install porn on "their" desktops.

    The Sun Ray is not a client - there is no desktop OS to interact with the application or on which variant applications can run; and the user card plus password, not the user's machine, identifies the user. As a result most of the disasters people imagine befalling them through desktop abuse simply can't happen.

  • users get complete clarity on software issues - because, with no local OS and/or PC networking to muddy the waters, any and all failures encountered running software are unambiguously due to that software.

    The key consequence of this is that a particularly invidious PC usage effect is avoided: specifically, companies installing enterprise class client-server applications generally find that each new generation of users is trained by its predecessors in both workarounds and magical thinking - with the result that each new generation uses fewer of the available features, and is less willing to experiment with the software, than their predecessors.

    With Sun Rays, however, you get the opposite effect: because people learn that they can customize their own environments and that experimenting with key software carries no penalty, organizations can expect their people to get better and better, rather than more and more constrained, with respect to the effectiveness with which they use the application.

  • change hassles essentially disappear as user or senior management issues. Users can make minor changes to their own environments as they wish and user management can change most processes and procedures with little or no consideration of IT issues - the sysadmin assigned to them can generally either make, or co-ordinate making, any required changes more or less on the fly.

    Similarly, IT management can test and then roll out global change (usually updates) with no risk and no complexity, while additional or alternative strategic software can be added without worrying much about destroying existing data, infrastructure, or relationship values - and without significant impact on continuing usage of existing software.

Although these kinds of differences drive organizational value, they're hard to measure. Luckily there's a general rule that applies here: the simpler and more effective an engineering solution is, the lower its long term cost - and so when you go from the complexities of Wintel/DP's variation on the Rube Goldberg machine to the simple elegance of Unix with smart displays, you get measurable savings in IT operating expense.

  • the help desk disappears. In this particular organization that's a thirty FTE outsourced contract gone.

    (The biggest effect of disappearing the help desk, however, has nothing to do with the help desk: it's a behavioral artifact that arises because desktop users cannot easily tell application software failures from personal, desktop, or network delivery failures and so come to rely on the help desk to sort out a lot of application how-tos. Since the applications tend to support professional or quasi-professional activities and help desk staff rarely know the ins and outs of those activities, this tends to force regression to the simplest, and least effective, use of the applications.

    With Sun Rays, however, there's never any ambiguity about this, so lead users, generally people who are knowledgeable and enthusiastic about the application, provide application usage assistance to newbies - with the result that the value of the organization's IT investment rises even as its costs go down.)

  • Most of the day to day operational activities in the Wintel data center disappear - along with the data center floor space, about 50 IT staff cubicles or offices, and roughly 200 watts of power draw per desktop in user spaces (a rough dollar value for that saving is sketched at the end of this item).

    Instead the organization will need a total of about 1200 square feet of cooled (and sound isolated) rack space for the four linked centers and about five people, with offices, to run the whole thing, including over 2,300 Sun Rays.

    In addition they will need a CIO and some sysadmin/DBA-skilled user-interface people posted in user groups - 16 (plus a Wintel person in the PC center) in our imaginary case here.

    The user-posted sysadmin/DBAs have both the most difficult and the most rewarding IT jobs here because their job is to understand user needs and meet those needs using the infrastructure in place.

    Thus the hardest part of the CIO job in this environment is to recognize the obvious: these people need to be treated as craftsmen, not as technicians - and so must be empowered to make, and act on, systems decisions affecting their users. (Remember: the goal is to provide centralized processing as a service, while decentralizing control of that processing to user groups - effectively creating many local IT departments, all of which share both the infrastructure and a kind of team standard.) Next week's blog entry, Job descriptions in the Unix Enterprise, expands on this.

    Given cross-training, fill-in, and special-project requirements the bottom line is that staffing drops from around 80 FTEs plus the evergreen contract to no more than 30, inclusive - and for that your users get what they don't have now: someone who works with them every day, understands their concerns, evangelizes the system, and is empowered to make changes on a day-to-day basis.
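
    As a rough illustration of what the desktop power saving mentioned above is worth, here's a quick sketch; the 8-hour day, 250 working days, and $0.10/kWh are my own illustrative assumptions, not figures from the text:

    ```python
    # Rough dollar value of the ~200 W per-desktop saving noted above.
    # Assumptions (mine, not the text's): 8-hour day, 250 working days, $0.10/kWh.
    desktops     = 2_300
    watts_saved  = 200
    kwh_per_year = desktops * watts_saved * 8 * 250 / 1000
    print(f"{kwh_per_year:,.0f} kWh/year ≈ ${kwh_per_year * 0.10:,.0f}/year at $0.10/kWh")
    # 920,000 kWh/year ≈ $92,000/year - before any reduction in cooling load
    ```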

  • You're well advised to pay for full hardware support (including software upgrades) but total costs for that are about the same as those for the Wintel infrastructure - except that the cost of the desktop evergreen program disappears, and that's 1,100 new PCs a year you don't pay for.

    The big difference here isn't in the dollars saved: it's in the simplicity of a change process affecting a few $80K machines per year instead of two or three $3K machines each week - and five or so $1000 machines every day.
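
    Those per-day and per-year figures hang together; a quick consistency check (the 225 working days per year is my assumption):

    ```python
    # Hardware change events per year implied by the sentence above.
    wintel_changes = 2.5 * 52 + 5 * 225   # ~2-3 servers/week plus ~5 desktops/day
    unix_changes   = 4                    # "a few $80K machines per year"
    print(round(wintel_changes), "vs", unix_changes)   # ~1255 changes/year vs ~4
    # 5/day x 225 working days ≈ 1,125 - consistent with the 1,100 PCs/year evergreen figure
    ```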

Look at the whole thing on net, and the tangible savings from dropping client-server in favor of Unix with smart displays come mainly from staffing reductions, with some bonus monies coming from savings in annual software licensing and upgrades - but this is just the Ginsu knives effect: the real value here is in the intangibles: in increased user productivity, in decreased turnover, in the elimination of most forms of software security risk, and in the near total elimination of system failures.

Thus the bottom line here is simple: the Unix system costs about the same as Wintel client-server to put in, but costs a lot less to run - and works significantly better on all the parameters users care about.

Talkback

  • The bottom line here is....

    ZZZzzzzzzzzz........ZZZzzzzzzzzz.......ZZZzzzzzzzzzz....
    junknstuff@...
    • Please explain why this is zzzz

      @junknstuff@... I don't understand how this vision can not be exciting to you - please explain - I personally have been waiting for this sort of architecture to catch on for years - isn't what is really boring having to support all those pc desktops?
      scottedwards2000@...
      • RE: Unix vs Wintel/DP: the cost/benefit issue

        @scottedwards2000 This is a variation on Rudy's Solaris/SPARC/Sun Ray wet dream article he keeps rehashing again and again and ZZZzzzzzzz.... ZZZzzzzzz .... ZZZzzzzz
        junknstuff@...
      • RE: Unix vs Wintel/DP: the cost/benefit issue

        @scottedwards2000@... Just because you are in love with the OS doesn't mean it needs to be supported forever. The world needs to move on. Have fun running XP, but the rest of us are looking towards the future, not the past.
        Arabalar
    • RE: Unix vs Wintel/DP: the cost/benefit issue

      @junknstuff@... I agree.
      ItsTheBottomLine
      • Ok, fair enough, how about a detailed rebuttal (or link to one)

        @ItsTheBottomLine ok, I'm new to this blog, so didn't realize part of the objection was just excessive repetition. Coming from the super-annoying pc world though, with all its support nightmares, I'd love to understand what is so great about giving all your users pc's. I'm not being sarcastic - I really would like to understand the advantages when sun/oracle has products that can virtualize windows/osx - is it just video/audio performance? I mean most users in my org can't install anything anyway, so what's the advantage? I just see lots of disadvantages (viruses, spyware, no backups, etc).
        scottedwards2000@...
  • I'm mostly sold, except...

    ...whatever provisions this proposed Unix system might make for datacenter-level showstoppers. Take the following: I work for a contractor (non-IT) serving a major manufacturing outfit. About a month ago, the centralized SAP datacenter of the parent company somewhere in Switzerland suffered a massive data corruption incident, resulting in a complete blackout of European outfits that lasted a little more than four days. Operations across the old continent came to a grinding halt as attempts to sync end-of-cycle data to Switzerland failed, and the only outfits that could barely perform were those in Eastern Europe/Middle East, where the otherwise despised practice of collecting everything locally and syncing in bulk (in this case a bloated web interface coupled with a local SQL server) saved the day. Not to mention that all the workstation-level data crunching in good old Excel in fact allowed for a quick recovery of Finance/QA/PP operations after the hiccup.

    The question that remains is whether a virtualization-based Unix solution can combat this effectively using triaged redundancy - which is expensive and could possibly turn hideously political - plus a gradual shift to better practices by replacing the inherently powerful workstation with easy-to-scram stuff that'll hopefully prevent a single dumb*** from taking down a whole datacenter, or whether it has some other secret weapon that I don't know of that'll mitigate the few instances where the availability of offline systems is the only way to sustain operations.
    Stormbringer_57th
    • Disaster avoidance

      @Stormbringer_57th

      The two most basic responses to this kind of threat are to:

      1 - cross train all of your people while assigning everyone primary responsibility for something; and,

      2 - establish at least two data centers with different management, set up to ensure that data is stored at both sites and the system as a whole continues to work even if one is completely destroyed.

      One odd thing: as we get into TB database sizes and life/death corporate data values it's becoming increasingly obvious that we're going to have to move to triple redundancy for this - just as people making the larger DBs are now starting to offer 3-way mirroring.
      murph_z
      • Triple redundancy, symmetrical failover the only real low-risk solution

        @murph_z
        Having recently got into the issues around disaster recovery, I can see that triple-redundant topologies, with failover to a full working site, are the only safe ones.

        If disaster strikes one site, having only one working site left means a business becomes much more exposed. Many enterprises only have one-way failover, and often to lower capacity, leaving them partially crippled and with a long period of risk exposure as the failed site is reinstated mostly manually.
        Patanjali
      • No DR is the best DR

        @Patanjali

        The best DR is NONE AT ALL. What I mean is that the "traditional" DR strategy is full of manual processes (like BMR, bridge lines/conference calls, panic, etc.) which are just unnecessary. With fully autonomic systems, any interruption can be automatically mitigated. Either you have active/active setups (like Oracle DataGuard), or you have systems that can be re-purposed automatically (like DEV servers). Yes, running production on DEV servers is a serious crimp in your business - but the alternative is to have expensive hardware sitting around doing nothing - waiting for a disaster. No one is willing to pay for that.
        Roger Ramjet
  • Thin Client is a necessity in the Health Care sector

    because of U.S. Federal HIPAA, but the resultant benefits outweigh those of Fat Clients. And those with 'special needs' get a VM to play with, but all is centrally managed and all of the remote on-site support cost approaches $0.

    It's the mainframe era of the 70s all over again but with smarts.
    Dietrich T. Schmitz, ~ Your Linux Advocate
  • "Today it looks almost trivial to do as a web application"

    put cloud in every sentence you used sun in and you've got it.
    sparkle farkle
  • It's amazing how few IT folks understand these concepts...

    They still think hundreds of "cheap" servers have lower TCO than a few, more expensive, large ones.

    Would much rather manage a couple of mainframes or large UNIX boxes than hundreds (thousands?) of "Wintel" servers.

    The problem IT operations has is that, until the "cloud" age, too many apps were locked into Wintel platforms. Developers loved it - and developed to it. Even my 9-year-old can create Visual Studio apps!
    Plain Logic
    • There's no difference

      @Plain Logic

      between hundreds of cheap servers vs. hundreds of VM instances. To a sysadmin, it takes the same amount of effort to administer them. The cost is the same, as large boxes tend to be quite expensive. The performance is the same (usually the same number of cores) - although there is a hit for the VM overhead. The failure rate can be very similar. But I was only comparing the same OS in both scenarios.
      Roger Ramjet
      • RE: Unix vs Wintel/DP: the cost/benefit issue

        @Roger Ramjet

        "between hundreds of cheap servers vs. hundreds of VM instances. To a sysadmin, it takes the same amount of effort to administer them."

        From an infrastructure perspective, you are incredibly off the mark. It's vastly easier to manage virtualized infrastructure than physical infrastructure.

        "The cost is the same as large boxes tend to be quite expensive"

        Again, way off the mark. With today's iron, built explicitly with virtualization in mind, consolidation ratios of 50-60 to one are the norm. It doesn't take a rocket scientist to see that it's cheaper to buy one $40K machine and run 50 images on it, rather than spending $3,500 per server per image. And that's BEFORE floor space, power, network and HVAC.
        civikminded
      • RE: Unix vs Wintel/DP: the cost/benefit issue

        @civikminded

        [From an infrastructure perspective, you are incredibly off the mark. Its vastly easier to manage virtualized infrastructure than physical infrastructure.]

        Sysadmins must do tasks. They have a number of "machines" they need to run these tasks on. They also have tools to help (including home-grown scripts). There is no difference between physical and virtual from this point of view. Maybe different tool sets - that's it.

        [Again, way off the mark. With todays iron built explicitly with virtualization in mind consolidation ratios of 50-60 to one are the norm.]

        I don't know what you mean by "built with virtualization in mind". A nice UNIX OS can handle multiple CPU/cores better than a convoluted straitjacket of context-switching slow-@ss virtualization software. I have seen first-hand how some customers beg for a "real server" when their VM proves inadequate. In a perfect VM world - that should never happen.

        [It doesn't take a rocket scientist to see that its cheaper to buy 1 $40k machine and run 50 images on it, rather than spending $3500 per server per image.]

        That's circular reasoning. I can run 50 applications on that server without VMs. UNIX was designed to do that.

        [And that's BEFORE floorspace, power, network and HVAC.]

        I would use the exact same hardware - without the VM crap on it.
        Roger Ramjet
      • RE: Unix vs Wintel/DP: the cost/benefit issue

        @Roger Ramjet

        OK. Try to put even 10 SAP instances on one OS image. Go on. I dare you.
        civikminded
      • RE: Unix vs Wintel/DP: the cost/benefit issue

        @Roger Ramjet and @civikminded

        "OK. Try to put even 10 SAP instances on one OS image. Go on. I dare you."

        Before Murph answers scotth_z, I am waiting for him to assert that Solaris 10's containers can do exactly that by leveraging the capabilities of ZFS
        rikkytikkytavi@...