
Distributed computing - the promise unbroken

I asked frequent contributor Roger Ramjet to write a guest blog presenting and defending his idea that distributed Unix makes more sense than my smart display approach. Here's his contribution - read it carefully, and I think you'll see we're not so far apart, even if he is right to describe distributed computing as the real client/server model.
Written by Paul Murphy, Contributor

In the beginning, there were expensive mainframes with dumb terminals. Then evolution produced not-quite-so-expensive UNIX computers and mini and micro "frames", and in the end inexpensive PCs crawled out upon the land. Companies purchased these computers depending on their needs, and were always looking to lower their costs. What they ended up with were islands of functionality, where mainframes, UNIX and PCs each had their own exclusive domains.

Then came the network. It was a way to connect all of the islands together and leverage the strengths of all computers connected to it. It had great promise and companies like Sun recognized this fact in their "The Network IS the computer" mantra.

UNIX took to the network very quickly, while mainframes and PCs lagged WAY behind. UNIX computers began popping up in all shapes and sizes (and prices). The mainframe-like UNIX server "box" was soon joined by smaller desktop/deskside units. These UNIX "workstations" had all of the abilities of their larger server brethren, albeit with less capacity. So an idea was born: processing would be spread across the network, with UNIX servers "on the platform" doing the heavy lifting while UNIX workstations managed the requests and post-processed the results. This is the genesis of the client/server philosophy and the distributed computing model.

In my own "perfect world" this reality would have been joined by both the mainframe and PC camps, where you would be able to mix and match which computers to use for whatever purpose (see DCE). But that never happened. Fishkill had exclusive control over the mainframes, and their networking offerings really sucked. Redmond had exclusive control over the PC world, and it took them a long time to boost networking capability. Meanwhile, shared 10BaseT networks, UNIX forking/infighting and $75,000/seat 40MHz UNIX workstations ($30,000 for a 21", 40MHz, 16MB Sun IPX SPARCII+ - Murph) diminished the allure of client/server.

Today we have 10GigE networks and a "free" Linux/Solaris (common) OS that runs on cheap PCs (and huge servers), yet client/server is all but dead. Just when the philosophy makes the most sense, people are sick and tired of waiting for it. Mounting file systems over the network via NFS certainly sucked 10 years ago over shared 10BaseT, so we went and created a whole new storage network (the SAN). Never mind that NFSv4 over gigE today is reliable, secure and FREE (built in), as opposed to building a whole new infrastructure of fibre, switches, HBAs and high-priced storage (not to mention additional-cost software to handle things like dual-pathing and fancy filesystems).
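To see how little is involved on the NFS side today, here is a minimal sketch of an NFSv4 export and mount (the host name nfs01, the domain and the paths are invented for illustration; it assumes a stock Linux server running nfs-utils):

    # On the server nfs01 - one line in /etc/exports, then re-export with "exportfs -ra"
    /export/home    *.example.com(rw,sync,no_subtree_check)

    # On any client - mount by hand, or add the fstab line below
    mount -t nfs4 nfs01:/export/home /home

    # /etc/fstab
    nfs01:/export/home    /home    nfs4    defaults    0 0

No extra fabric, HBAs or licensed multipathing software - just the network you already have.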

The way it should be

Murphy is a strong proponent of utilizing thin client/server architecture to create the next IT paradigm. I have reservations about doing this as it can lead to:

  • Vendor lock-in - SunRays require Sun software and Solaris to work. Everyone else has their own thin clients and restrictions.
  • Difficult capacity planning - The old N+1 problem: how many SunRays fit on one server (say 20), and what do you do when you need to connect one more (#21)?
  • No off-line capabilities - When the network goes down, so does your SunRay.
  • Monoculture - It's not apparent to me how you would use this architecture in a heterogeneous environment with PCs and mainframes. It looks like an island to me.
  • Proprietary configurations - Modifying server failover software requires expertise.

My own philosophy on the next IT paradigm starts with strong rules (standards, processes, procedures) with few exceptions. I have seen too many instances where people get accustomed to going around the system to get things done (read "cowboy" sysadmins/app developers) - which leads to a host of problems. With the proper standard configuration and namespace standards, each *NIX machine in a distributed environment can be set up in the same way. AIX, HP-UX, Solaris and Linux (Unicos, IRIX, Tru64, OSF1, Ultrix, ...) machines can co-exist - and client/server can become a reality.

Once this paradigm is implemented, if you need a high-powered UNIX workstation to do CAD/CAM/CAE at your desk - get one. If you only do some e-mail and document writing, use a cheap PC running Linux. If you need to create a 20 billion record database and need high performance - buy the big iron. But you log in the same way, have the same locations for files, leverage tools that work on every box, have a consistent backup policy, and so on (the list of advantages is long).

By utilizing the system automounter, you can have load-balanced, highly available (served) applications and single user account directories (served) that follow you wherever you log in - on any machine on the network. By caching your apps locally, you can keep working even when the network is down (although you would need a local user account - a design decision). To master this whole thing, you would need to do what I did - read the automount man page. This was the promise of client/server and distributed environments - and it can be realized. All it takes is people who are willing to follow the program. No fancy new technology required.
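To make that concrete, here is a minimal sketch of the sort of automounter maps being described (map names, server names and paths are invented for illustration; the exact syntax is in the automount/autofs man pages for your platform):

    # /etc/auto_master (auto.master on Linux) - the top-level map
    /home    auto_home
    /apps    auto_apps

    # auto_home - a user's served home directory follows the login to any machine
    *        homesrv:/export/home/&

    # auto_apps - read-only application trees replicated on two servers;
    # the automounter picks an available replica, giving load balancing and failover
    tools    -ro    appsrv1:/export/apps/tools  appsrv2:/export/apps/tools

Cache the application tree locally as well (a CacheFS-style or rsync'd copy, for example) and you can keep working through a network outage, exactly as described above.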
