Forces of nature

Summary: Answering sparkle farkle - with a detour through history that should be about 10,000 words long but isn't.

Last week's blog drew this from "sparkle farkle":

Convinced of my ignorance yet again..

Aside from the play by play, could you provide an example of how the DP method differs from Unix, as a hypothetical? I don't understand how the user could be in more control under a centrally controlled system where IT is involved in (I have to assume) re-writing or modifying someone's program to suit, rather than installing (and letting the user configure) a program they already know. On the client side, the user has a long road to re-learning, or learning the software in the first place.

I could cite a program like Photoshop, and its Linux counterpart GIMP: there's no way the two are the same. (I will admit I'm getting better at using the GIMP, but it's just not the same in terms of ease of use and functionality.) In a creative environment, there is no real replacement for Photoshop, or AutoCAD in its many incarnations.

So you are left to virtualize, spend on licensing for the whole business etc. How could a centrally controlled entity make this any better?

Here's my summary response:

In brief: wintel, like DP, forces centralization of both processing and control. Unix allows (but doesn't force) people to separate these: centralizing processing while decentralizing control. Do that, and running IT (i.e. the gear etc.) becomes a part-time job, while IT people working inside user communities work for those users in making things (i.e. software) happen.

But, of course, there's more to it - specifically, things got to be as they are through the processes of history.

About the time data processing was getting started, organizational design people were enthralled by one Frederick Taylor, a popular exponent of "scientific management" whose view of the worker as an undifferentiated, and thus easily replaced, cog in the corporate machine:

Under our system a worker is told just what to do and how to do it. Any improvement upon the orders given him is fatal to his success.

applies reasonably well to the kind of unskilled labor he studied, but becomes increasingly counter-productive as both task and organizational complexity increase.

In his own context, directing highly repetitive activities like shoveling coal into furnaces, Taylor's ideas worked; but what really sold them into social prominence was the implied moral and social distance between those directing activity and those who carried it out. In the crudest terms: he sold management on the idea that the managerial class was so much smarter and more capable than the workers that it had both a right and a moral obligation to direct every detail of the workers' lives.

The reality, of course, is that Taylor over-generalized from observations at the lowest end of the work complexity scale: basically, the simpler and more organizationally isolated the task, the more applicable Taylor's liberal-fascism becomes - but the more complex and organizationally inter-linked a task gets, the more counter-productive attempts to apply it become. Thus Henry Ford usefully applied Taylor's ideas to individual work stations on the assembly line, but no five-year economic plan produced by an economic dictatorship anywhere in the world has ever come anywhere close to reality.

You can see how Taylor's ideas were attractive to the men running Finance departments after World War I: they bought IBM's machines, hired clerks to execute the individual steps in data processing, and hired Taylorites to make sure that those clerks did their jobs, and nothing but their jobs, in wholly predictable ways optimized with respect to the most expensive resource: the machines.

Forty to fifty years later, people who made their bones in that system faced an organizational transition to virtual machines: from physical card sorters controlled by switch settings to card image sorting controlled by JCL - and just continued doing what they knew how to do.

The biggest external enablers for this continuity were cost and ignorance - the latter because then, as now, finance people simply didn't want to know what went on inside the black box labeled "data processing", and the former because cost continuity reinforces expectation continuity. Thus in the 1920s a line capable of end-to-end records processing for AR, AP, and GL cost about the equivalent of four hundred clerks hired for a year - and in 1964 so did the first 360 installations, while the typical $30 million zOS data center today is not far off that same 400+ full-time-equivalent cost.

In the 1920s that cost drove the focus on utilization, the role in Finance drove isolation and arrogance, and the combination of Taylorism with the after-the-fact nature of data processing both reinforced the other factors and enabled regimentation - both inside data processing and in its relationships with users. None of that has changed since: a DP manager magically transported from the 1920s could absorb the new terminology and carry on in most big data centers today without changing a single operational or behavioral assumption.

When Wintel started, things were very different: there was a booming personal computer industry, Sun was inventing the workstation, Apple was making the Lisa, science ran on BSD Unix, traditional research ideas about open source and data were widely established in academia, and thousands of large organizations were in the throes of conflict between traditional data processing management and user managers successfully wielding computing appliances from companies like Wang, Honeywell, DG, DEC, and many others to do things like document processing, job scheduling, and inventory management.

When the PC/AT came out, user managers facing ever-increasing corporate barriers to the purchase of appliance computing leveraged data processing's IBM loyalties to buy millions of them - only to discover that there was little useful software for the things. That created the markets and contradictions allowing Microsoft to succeed - and led directly to the 90s PC server population explosion, with all its consequences for IT cost, security, and performance.

Those costs and consequent failures forced centralization: first of control and then of processing; until, today, most data processing wears a Windows face but is behaviorally indistinguishable from what it was in the 1920s - and the software, of course, has evolved in parallel to make locking down the corporate PC to imitate a 327X terminal the least-cost, lowest-risk approach to corporate "client-server" management.

Thus the bottom line on the merger of the DP and Wintel traditions in larger organizations is that any move away from centralized processing (whether implemented on zOS or wintel racks) adds both costs and failures, while any move to decentralize control (letting users, for example, manage their own software) does the same.

None of this applies to the evolution of the Unix ideas: from the beginning, science-based computing has been about using the computer to extend, not limit and control, human communication and human abilities. Thus users are perceived not as data sources for reports to the higher-ups, but as members of a community of equals - and it's that perception of the computer's role as a community knowledge repository and switch that ultimately drove the evolution of open source, large-scale SMP Unix, and network displays like the NCD X terminal then and the Sun Ray today.

There are both cost and control consequences to this for commercial use of Unix: the organizational data center that takes a zOS machine or several hundred PC servers to run the DP way takes two or four SMP machines, and costs an order of magnitude less, to run the Unix way. Neither the us-vs-them mentality from data processing nor the software functional differentiation that gets pushed all the way back to the hardware in the Windows/DP world exists in Unix - and open source ideas limit both licensing and development commitments. As a result, processing centralization both minimizes system cost and maximizes the resources available to users without requiring control centralization.

It is possible (and common), of course, to be stupid: insisting on the right to run Unix in DP mode by doing things like restricting staff roles, tightly controlling user access, customizing licensed code, or paying for software to chop that big machine into many smaller ones. Thus the people running the organization I've been talking about over the last few weeks would, I'm sure, respond to an externally forced march to Unix by combining virtualization with both processor and job resource management to increase the negative impact of the limits and problems they face with Windows. Right now, for example, the 2,000 or so users who check their email between about 8:16 and 8:30 each morning completely stall out the 20 or so dedicated Exchange servers and much of the network - and while none of this need happen with Unix, the unhappy reality is that everything these people know about running IT would lead them to spend money making things worse than they are now.
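
To make the contrast concrete: on a shared Unix SMP machine that morning mail spike is just another workload competing for cycles under the scheduler, not a reason to dedicate twenty servers to it. A minimal sketch of the idea using Solaris resource controls follows - the project name, user list, and share count are purely illustrative assumptions, not a recipe:

    # Make the Fair Share Scheduler the default scheduling class (takes effect at next boot)
    dispadmin -d FSS

    # Move currently running timeshare processes into FSS without rebooting
    priocntl -s -c FSS -i class TS

    # Create an illustrative "mailusers" project with 20 CPU shares: the mail spike
    # can soak up idle cycles but can't stall work belonging to other projects.
    projadd -K "project.cpu-shares=(privileged,20,none)" -U alice,bob mailusers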

The point, of course, isn't that poor managers can't implement DP ideas with Unix, it's that good ones know they don't have to. The cost and risk forces that drove the adoption of DP ideas among wintel people simply don't apply: so giving IT staff posted within user groups the authority to act immediately on user requests falling within some global IT strategy offers significant corporate benefit without incurring the costs or risks this would entail with a wintel/DP architecture.

Note:

Sparkle farkle mentions two specific pieces of Wintel software, Photoshop and AutoCAD, as forcing wintel adoption. In some situations he'd be right: if, for example, you had 2,000 users and 1,900 of them routinely required AutoCAD, then you'd probably find the Unix smart display architecture a poor solution - but if you have the more normal thing, 2,000 users, 30 of whom routinely use AutoCAD, then you need to remember that you're there to serve users, not to create and enforce computing standards - and so you give them what they need: a local wintel (or Mac, if it's Photoshop) ecosystem all their own, complete with embedded support working directly for group management.

On the positive side, most of the costs of wintel, particularly those associated with staff regimentation, security, and software churn, rise super-linearly with scale - so putting a bunch of "foreign" system islands into your Unix smart display architecture ultimately adds relatively little to the corporate IT bill - and remember too that users who only need occasional access to monopoly products like AutoCAD can be given that on their regular smart displays (as sketched below) at no more than the same server and license cost the wintel people face.
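
For the occasional-use case, the smart display simply opens a window on a Windows terminal server sitting in the machine room. One plausible way to do that, assuming an open source RDP client like rdesktop on the Unix side and a hypothetical terminal server name:

    # Open a 16-bit color, 1280x1024 Windows session for an occasional AutoCAD user
    # from a Sun Ray / smart display session; "wints01" is a made-up host name.
    rdesktop -a 16 -g 1280x1024 -u someuser wints01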

And, finally, he also suggests that a Unix system requires a lot of code customization. This is not generally true: outside of research organizations most large Unix systems run unmodified commercial or open source applications - most original code does start on Unix (particularly Linux and Mac OS X these days), but that's because it's a natural development environment. In non-research use, code development and customization expense is almost always associated with the Wintel/DP mentality and rarely found in Unix budgets put together by Unix people.

Topics: Software, Open Source, Operating Systems

Talkback

23 comments
  • still confused... but I'll posit here

    I believe that what you're getting at is that the user can access the entire power of a larger machine if necessary, and that this enables administration to streamline tasks based on patterns of usage, or demand. If everyone gets e-mail between 8 and 9 am, then the machine is utilized for that task, and not "configured" for another task that is not being performed. If at 9 am everyone moves on to the main business of entering data, or mining data, the machine can be tasked to accomplish this in the most efficient manner, so that the user has determined how the system is being utilized just by their actions, and the administrator can intervene only if necessary. On the wintel side, your machine is a finite resource, probably underpowered for the task, leveraging software and some server, itself overworked, to try to get work done.
    The administrator has no alternative but to configure access to the server or servers to balance load, etc.

    In the end it seems as though the wintel system is underpowered for the job, and the Unix version is more than capable, and can be upgraded at one point rather than many in the DP example.
    sparkle farkle
  • to go just a bit further....

    It is not that the user machines are underpowered per se; it is that their power is unusable by the job at hand - there are thousands of unused and unavailable cores for the task - yet these take precedence over the center (the servers), since each user has no choice but to want the biggest and best for themselves, believing this will help their productivity, and management is forced to upgrade and maintain the client instead of the engine (the servers, the backbone, etc.)
    sparkle farkle
    • Let the scheduler be the scheduler

      Yes - but this is only a small example of the difference in operation - one I cited because most DP/Wintel people will actually pay money to subvert it.
      murph_z
      • is it the hardware or the software??

        It could be argued that the machines at the center (large server farms), arrayed in vast vaults of intelligence, could perform the tasks at hand. Will the cloud finally bring us into the more perfect world of balanced computing, where it's a win at the edge, with the user having a machine powerful enough that they can trust it to keep their secrets, and a center powerful enough to supply the user with data at human speeds, and fast enough to crunch large processing requests at the same time? It seems like it's the same model: we use the internet, we talk to our servers, we process our data. But it's the software that makes a difference. Unix (Linux) powering most servers is a no-brainer; why they haven't made progress on the desktop is obvious: no drivers, and few programs in the mainstream that work. Having grown up on Windows, I understand it, it works, and it's expensive for what it actually does. Mac is just beyond any comprehension in price, but it does seem to work nicely. Sun has dropped out of the fray as far as I can tell, except for that damned lawsuit...
        sparkle farkle
      • Software

        It's always ultimately the software that implements ideas about how people interact.

        Bear in mind that VM and Unix both address the same problem: how to safely share large machine resources - but VM assumes repetitive tasks done in isolation and is the 1916 row-of-machines Taylorite solution done in hardware and software, while Unix assumes lots of task changes effected by a community of users and implements academic ideas about collaboration and knowledge sharing.
        murph_z
  • Very Interesting Posting

    I wonder: Do the same principles apply to Cisco servers?
    nbahn
  • RE: Forces of nature

    Thank you so much for an erudite lesson on how things are moving away from centralized command and control methods.
    Agnostic_OS
  • If you don't like history, you rewrite it

    Another column full of sound and fury, signifying nothing.

    You really do need another hobby Rudy.
    tonymcs@...
    • RE: Forces of nature

      @tonymcs@...

      You're exactly right. It seems like history decided these topics 20 years ago. Most of us have moved on with our lives and prefer to deal with reality.
      civikminded
  • I can see your point about vmware

    Sharing resources (or cores) seems to be the main reason VMware and other virtualization schemes fall short. You dedicate some number of cores to the environment, and chop up your fancy machine. While I'm not sure that you need a mainframe to achieve this, IBM and others seem to be heading in that direction with smaller, less expensive, scalable alternatives. Servers themselves have as many cores as older mainframes. I recently read an article on ZDNet about the proposed system of Atom processors, to save energy, but what it really does is quantify the power that is available to the virtual environment. It really doesn't take much to spool data from a hard drive.

    The downside is server-side processing (which is where the current model becomes a lot more centralized). I think that murph does have a point: when you start creating applications and online intelligence, an Atom processor, or something with the power of one, is going to run out of gas pretty fast. Why we need containers for instances of Apache, and why we need to carve up larger computers for "security's sake", is a shortcoming in all the available software, and really points to the larger fragmentation problem across the internet structure.

    The "standards" being developed (such as html 5) have so many vested interests, that they aren't standards at all, but some sort of advertising, and a method to lock up content by software suppliers. Paid for codecs, proprietary and patented processes, all make sure that you will be buying someone's software or paying a toll for whatever you write. The vmware model is another tax collector, robbing the power of the internet, rather than creating a collective supercomputer.
    sparkle farkle
  • Sharing resources? No wonder you can't figure out hypervisors

    Murphy... you show how clueless you are about hypervisors such as z/VM, Xen, VMware. They are about sharing resources when you have a REASON (technical or political) for keeping systems separate. z/OS is far more advanced at running divergent workloads together on the same OS than the UNIX systems (can Solaris share the same disk drives (not split them) and share memory (not split it) with separate machines, out of the box? good luck). So tell us WHY most z/VM shops (90+%) would run z/OS underneath it... many for 20+ years.

    Here are the reasons why hypervisor packages are so popular with everyone but you...
    1) to avoid a costly and untimely integration... How much does it cost in labor for someone in your world to go through 2,000,000 lines of code to make sure there isn't a naming conflict or resource conflict... How about if an application is running on a downlevel OS. Testing in COMBAT at 2am, when an application MUST be up at 3am during a merge... is not acceptable to most responsible shops.
    2) merging divergent workloads... How does your flavor of UNIX integrate a z/OS workload or a Windows workload... not very well, I would imagine... How about when you have a 3-hour migration window?
    3) now for some political aspects... How do you keep your system programmer/security administrator from accessing your previously secure (from them) payroll system. How do you keep one set of programmers with equal authority from accessing one another's application systems... How about if divisions (or the CEO) want to keep their workloads separate for a reason... Billing... to spin off a division... or future physical migration to another location...
    After you think about this... now do it with little to no labor costs...

    Oh... and by the way... guess what, Oracle is moving towards virtualization/consolidation, too. They announced it last week. How does your ideal of running a lone-wolf separate system at a remote site align with their statement of direction?

    Sort of hard to consolidate Oracle (err... Sun) equipment and align with Oracle's vision of how Oracle (err... Sun) equipment should be run, when people like you don't want to run equipment in a centralized location.

    http://www.zdnet.com/blog/btl/oracle-preps-virtualization-strategy/38151?tag=content;search-results-river
    scotth_z
    • umm.. Scott?

      When you write "when people like you, don't want to run equipment in a centralized location." you show how carefully you read an article (and a series) in favor of centralizing processing.

      Guess you didn't understand it any better than you understand what Solaris containers are or how market forces affect technical evolution.

      Let me help on the latter: when an idiot insists that he won't buy a $5 million machine unless he can cut it up into 100 machines whose performance could be bettered for $1,000 each, the right answer isn't to educate him but to take his employer's money.
      murph_z
      • RE: Forces of nature

        @murph_z
        Do you even know the difference between a centralized location and centralized computing... seems not... since you NEVER answer what people should do... with Windows and z/OS systems... Until you answer that simple (but profound) question, you really aren't qualified to pass yourself off as some enterprise computing guru who can dictate what people should do with their computing systems, can you.

        You keep playing the same broken record... uh... uh... run it all on Solaris, run it on Sparc... pretty brave talk... for a company/person that wants their CUSTOMERS to take all the risk in a conversion, while not making much attempt at helping those customers convert. Meanwhile... HP and IBM have both published manuals to help Solaris customers convert to Linux.

        So unless and until you answer with something realistic on what customers should do with their z/OS systems, you are just another guy with an agenda, now talking a desperate game, who has no clue why Sun went down in flames and was bought out for chump change.

        Most large shops are made up of multiple platforms... multiple OSes. CXOs know that spending money on a risky conversion is not a very cost effective way to spend money and could cost them THEIR jobs, if not jail time. Business computing environments have discovered that it is much more effective to put their front-end new development on either the most cost effective or the cheapest solution... and to phase out the rest (aka make legacy)... This has been the case for the last 15+ years... You failed to grasp this... because while Solaris/Sparc was both the cheapest and most cost effective in the 90s... it no longer is now. IBM didn't understand this in the 90s and paid a steep price with the decline of mainframes in that decade. Solaris/UltraSparc is no longer the most cost effective or cheapest way to add new development to an enterprise... IFL and x86 engines are now, and UltraSparc suffered a similar fate over the last decade as mainframes did in the 90s. Your desperate plea that people need to put all their stuff on Solaris... is the same tune that traditional mainframe people put out in the 90s and is doomed to fail. Understand that the growth of Sun in the 90s wasn't because of conversions off mainframes... it was because of the move of new development to cheaper platforms. A path that Solaris on Sparc can no longer claim.

        Even now mainframes still have 70% of the business critical data (aka not email)... after 30 years of UNIX people saying to convert off them. Considering that data growth has more than quadrupled in that time frame... mainframe data has grown over that time frame, not shrunk, so it isn't conversion that is going on here. Mainframes aren't going away any time soon... they have grown in presence over the last decade.

        The reason the DP methodology you so despise is around and growing is that CIO types understand that systems need to play nice with each other in the 21st century. The ivory tower days are over.
        You fail to grasp that from your enclosed Sun system. Most developments are multi-OS in design nowadays... Ever hear of mashups... that type of technology is not going away and is growing faster than Linux. Your perceived perfection of Solaris is the death of Solaris... this is the same perception that VAX and Tandem people had. People won't pay money and/or take the risk to convert... and never have. So it's either play nice or die for legacy systems, and Solaris is now a legacy system.

        YOU are the one that doesn't understand why mainframes have made a major comeback in the last decade, so don't even think you understand 'market forces'; you don't. The rest of the industry has figured it out... while Sun has 7% and is dragging in 4th place in server revenue. It's not because of 'marketing'. Here's a hint too... most mainframes aren't costing $5M... that is the maxed-out cost that you are pushing... Most z/OS-only systems are in the $500K to $2M range (a mainframe starts at $44K or so)... but with Linux, the cost factor is totally different... $44K to run those hundred images for mainframe hardware, no extra costs for infrastructure, hardware support contracts... better intra-network performance, etc. Existing mainframe shops have found Linux/IFL engines more cost effective than Solaris/UltraSparc for new development.

        Of course, the mainframe model changed last month... so it is more like 200 images for $44k in hardware.

        Of course if you want to use YOUR logic... you can run Windows on a $300 pc... does that make someone an idiot for spending hundreds of thousands of dollars for a Sparc system?
        scotth_z
  • So Tell Me

    So your argument is basically that VDI (or whatever you want to call it) is GOOD, except we should be running Solaris as the client OS vs Windows or Linux.

    When you boil it all down it seems that ultimately this is your contention.
    civikminded
  • I have to agree with Murph

    Murph did a good job of explaining the origins of today's computing environment. His was a (relatively) dispassionate examination of the forces involved in this arena. He explained the why, where, who, what, and how of where we are today.

    The only rebuttals I have heard fall into 2 categories:

    1) Attacking the blogger's reputation/motivations.
    2) Saying that the existing infrastructure is better because - well because it's what EVERYONE is doing (i.e. popularity).

    Neither one does even a barely adequate job of pointing out where Murph is wrong.

    In truth, this centralized computing, decentralized control can be achieved with OS combinations other than Solaris/DumbRay. But I don't see how it can be achieved without some sort of UNIX infrastructure. I've proposed before a distributed architecture where every box is running some sort of UNIX (client and server boxes) - with the SAME CONFIGURATIONS. This allows for Autonomics (at least Self-Configuring) - which is beyond what anyone can achieve today (it took me 1 script). You can have your "cloud" (unlike the DumbRay architecture), and everyone has their own computer (not a DumbRay). I believe that my architecture is superior to Murph's - but it does follow the centralized compute / decentralized control model that Murph is advocating.
    Roger Ramjet
    • RE: Forces of nature

      @Roger Ramjet

      See my rebuttal above. The only thing that Murph doesn't like when it comes to centralized desktop infrastructure is using Windows as the client facing OS.

      Tell me what makes running Solaris as the client facing OS crucial to success? Why WOULDN'T you run Windows?

      The IT manager that tries to shoehorn their own pet architectures into somewhere they have no place being is far more dangerous than any perceived threat Murph has ginned up.
      civikminded
      • I was talking about your rebuttal

        @civikminded

        [See my rebuttal above. The only thing that Murph doesn't like when it comes to centralized desktop infrastructure is using Windows as the client facing OS.]

        You can run Windoze through DumbRays.

        [Tell me what makes running Solaris as the client facing OS crucial to success?]

        It's not.

        [Why WOULDN'T you run Windows?]

        In my architecture, Windoze cannot achieve Autonomics because of the Registry.

        [The IT manager that tries to shoehorn their own pet architectures into somewhere they have no place being is far more dangerous than any perceived threat Murph has ginned up.]

        Exactly! Especially the IT manager that parrots a certain OS JUST because "everyone else does it".
        Roger Ramjet
    • RE: Forces of nature

      @Roger Ramjet

      [You can run Windoze through DumbRays.]

      Sure.. what do you need on the back end? VMWare ESX. Wait.. or VirtualBox (I'll go on with this post when the laughter dies down)

      All your 'Autonomics' can be achieved on Windows with Virtual Desktop Broker apps that have been available for 5-6 years. In fact with VMWare View, I can serve out thousands of desktops from a single OS image. No, not thousands of copies of a single OS image. 1 single OS image.
      civikminded
      • Rose colored glasses sitting in an echo chamber

        @civikminded

        [You can run Windoze through DumbRays.

        Sure.. what do you need on the back end? VMWare ESX. Wait.. or VirtualBox (I'll go on with this post when the laughter dies down)]

        Sun has had many different SPARC -> PC offerings over the years - emulators like SoftWindows (and now VirtualBox) to PCs on cards (hardware). Most UNIX boxes used Citrix for their Windoze stuff (and that works over DumbRays).

        [All your 'Autonomics' can be achieved on Windows with Virtual Desktop Broker apps that have been available for 5-6 years.]

        Nope. I don't think you understand what Autonomics is. A Self-configuring OS is not the same as virtualization that can activate a VM copy.

        [In fact with VMWare View, I can serve out thousands of desktops from a single OS image. No, not thousands of copies of a single OS image. 1 single OS image.]

        And if that image becomes corrupt? And "served" desktops need to actually be seen on desktops - what hardware do you use at each client desk?

        Servers are MUCH more expensive than desktops - which means that using servers to virtualize desktops is the highest cost "solution". It makes absolutely no sense to me to do this. Yours and Murph's "solutions" have the same origin - Windoze on the desktop is expensive to maintain. You eliminate Windoze, and you eliminate the reason for what you guys are doing.
        Roger Ramjet
  • I am curious...

    ... and sincerely inexperienced, say ignorant.
    Nevertheless, I'd like to ask the opposing factions to illustrate actual deployments of each methodology, perhaps naming companies and HW/SW resources.
    Wouldn't this be simpler for us, the audience?

    Regards
    rikkytikkytavi@...