ZDNet Reader: Fat Clients are forever

Summary: Or, at least for a long time.  So says a ZDNet reader who set the keys of his keyboard ablaze to put me in my place regarding one of the more interesting possibilities that could come out of the journey that Sun and Google embarked on yesterday.

TOPICS: Open Source

Or, at least for a long time.  So says a ZDNet reader who set the keys of his keyboard ablaze to put me in my place regarding one of the more interesting possibilities that could come out of the journey that Sun and Google embarked on yesterday.  (Point-by-point breakdown in a second.)

For the record, as can be inferred from the part of my blog that notes the press conference was still under way as I was getting ready to press the publish button on that entry, the scenario I described -- where Google takes the J in AJAX (Asynchronous JavaScript + XML) one step further and starts using Sun's Java to deliver apps even richer than the ones it is already delivering -- is pure speculation on where a relationship between the two companies could lead. To me, that is the most exciting of the possibilities, not because I prefer one vendor over another (as some accused me of) but because of the wave of innovation that invariably follows a well-penetrating disruptive technology into the marketplace. PCs were such a technology, followed by the client/server revolution (after a pit stop at local area networks) and then the Internet.

Middleware like Java has proven disruptive as well, but its direct effects haven't been felt nearly as much on the desktop as its progenitors would have liked by now. So, when two companies with highly aligned and demonstrated interests in rich computing on thin clients get together, anyone's first instinct should be to zero in on where the best chances for revolution lie.  Quite frankly, by the time the announcement was done, I had the same tastes-great-less-filling feeling that many, including ZDNet reader Sam Hiser, had (yawn).  We (the press) dropped everything for this? Based on what was said, I didn't detect the remotest chance of the relationship living up to its potential.  But, as I said in that first blog, this relationship probably shouldn't be judged by its cover (the first announcement).  There's too much alignment of mutual interests -- mutual technologies, mutual enemies, etc. -- for there not to be more to this relationship than was discussed yesterday.

But to readers like Joel (I'll omit his last name and company until he gives me permission to publish them) who respectfully took the time not just to tell me that I've lost my marbles, but also to explain why, I believe a response is deserved.  Here's what Joel (whose thoughts echoed those of many others) said, along with my point-for-point reply:

Joel: While the concept of desktop java as "thin client" is intriguing marketing, applications of substance still require real code to run. Real code = real download delays, especially for folks who are in and out of apps on a regular basis. These download delays will be variable, given the nature of shared networks, and variable response speed drives people nuts. Variable page delay is one of the most frustrating parts of the Internet experience, and one that drives people away from busy sites in droves. Fat client applications do not present this behavior unless it is the result of something the user did, like start another application while an application process is in motion. It is one thing to occasionally wait for a news page to download. It's another to be trying to get ready for a meeting, and not be able to finish or print a document because somebody's server on the Internet, or the link to it, happens to be slow or unavailable at the moment. 

DB:  Java is more interactive than static Web page loads. Bandwidth isn't the problem you make it out to be for a majority of connected workers and homes (residential broadband penetration is now at 53 percent and estimated to reach 71 percent by 2010).  Verizon's FiOS runs at 15Mbps, and let's not forget that the electric companies will soon be in the game.  We're just getting started.  So, where access or bandwidth is a problem, it's a temporary one for the larger market at best.

Joel: Putting logic on the server as a way of dealing with download times is one solution, but that just trades download time for runtime latency and server CPU load. It seems that however the Internet is structured, that latency will be variable. It has not made economic sense to build a network that is structurally capable of handling peak load without any latency. That would imply a monstrous amount of wasted bandwidth during most of normal operation, and I cannot see how the general populace would pay for this excess. Society's proven track record has been to focus on "good enough" and "cheaper".

DB: This is a variation on the argument that bandwidth is a limited resource (most arguments against thin clients can't let this point go).  The limitations are neither technical nor cost-based.  They're bureaucracy-based, and workarounds like WiMax will force the regulatorium to finally let the bits flow over the wired infrastructure that's been there for a while (by the way, see how BT is trialing WiMax with 100 users). Regarding host costs, that's a problem most service providers would love to have.  It means they have customers who like the service and want more of it.  Salesforce.com, one Software as a Service (SaaS) provider, probably thought about that problem before opening its doors.  Overload doesn't seem to bother the company, though.

Joel: OpenOffice or StarOffice are hardly examples of thin client architecture.

DB: I couldn't agree more. Furthermore, the OpenOffice.org (OO.o) connection had me thoroughly confused.  I thought OO.o was open source.  That OO.o was involved at all in the negotiations means Sun should be more transparent about OO.o's governance. The early reports (for example, this one from Computerworld) said the deal would involve StarOffice -- Sun's non-open-source kissing cousin to OO.o.  Not only would this have made more sense from a "What does Sun control?" point of view, but also from a legal perspective, since Sun was unable to negotiate protection for OO.o licensees when it inked its we-(Sun)-won't-sue-you-(Microsoft)-for-patent-infringement-over-.Net-if-you-don't-sue-us-over-StarOffice deal last year.  Distributors of OO.o like Red Hat were left exposed (and I'm not sure what this means for IBM, which is including OO.o code in its Workplace Managed Client offering). Why Google would want to touch that with a ten-foot pole, much less something shorter, I can't figure out.  Nevertheless, neither suite is thin.  That said, if anyone is capable of coming up with a slimmed-down version of either, it's Sun and Google.  Ultimately, it doesn't matter what it's called.  What's disruptive about some sort of thin-client-based word processing, spreadsheet, and presentation suite from Sun and/or Google won't be the brand name.  It'll be the architecture itself, as well as the formats (e.g., OpenDocument Format) it supports.  If, for example, someone can offer a thin-client productivity suite with robust support for ODF, the Commonwealth of Massachusetts, which recently standardized on that format, might be a customer.  Presumably, in the best interests of its taxpayers, Massachusetts would be interested in a disruptive architecture that could prove to have a lower total cost of ownership than the one it's using today.
This is a particularly good time for Massachusetts to go back to square one, since it pretty much has to do that anyway now that it looks like it will be ditching Microsoft Office (there's still time for Office to save itself).
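Saving to ODF sounds heavier than it is. Here's a rough sketch (my own illustration, not Sun's or Google's code, and deliberately not a conformant ODF writer -- a real one also needs a manifest, styles, and much more) of how little machinery the format itself demands of a thin client: an ODF text document is just a zip archive whose first, uncompressed entry is a "mimetype" marker, plus XML describing the content.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.zip.CRC32;
import java.util.zip.ZipEntry;
import java.util.zip.ZipOutputStream;

// Simplified, non-conformant sketch of an ODF text document writer.
class TinyOdfWriter {
    static final String MIMETYPE = "application/vnd.oasis.opendocument.text";

    static byte[] write(String paragraphText) throws IOException {
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        try (ZipOutputStream zip = new ZipOutputStream(buffer)) {
            // The mimetype entry must come first and be stored uncompressed,
            // which requires setting its size and CRC up front.
            byte[] mime = MIMETYPE.getBytes(StandardCharsets.UTF_8);
            ZipEntry mimeEntry = new ZipEntry("mimetype");
            mimeEntry.setMethod(ZipEntry.STORED);
            mimeEntry.setSize(mime.length);
            CRC32 crc = new CRC32();
            crc.update(mime);
            mimeEntry.setCrc(crc.getValue());
            zip.putNextEntry(mimeEntry);
            zip.write(mime);
            zip.closeEntry();

            // A bare-bones content.xml holding a single paragraph.
            String content =
                "<?xml version=\"1.0\" encoding=\"UTF-8\"?>" +
                "<office:document-content" +
                " xmlns:office=\"urn:oasis:names:tc:opendocument:xmlns:office:1.0\"" +
                " xmlns:text=\"urn:oasis:names:tc:opendocument:xmlns:text:1.0\">" +
                "<office:body><office:text><text:p>" + paragraphText +
                "</text:p></office:text></office:body>" +
                "</office:document-content>";
            zip.putNextEntry(new ZipEntry("content.xml"));
            zip.write(content.getBytes(StandardCharsets.UTF_8));
            zip.closeEntry();
        }
        return buffer.toByteArray();
    }
}
```

The point isn't this particular code; it's that the format is open and lightweight enough that any vendor, thick or thin, can read and write it.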

Joel: They (StarOffice, OO.o) are just different fat clients, using Java as the intermediate platform. They are, however, a decent example of "functionality takes code volume, no matter which tool you build with". Imagine the bandwidth required to download OpenOffice as a Java applet, at a speed that users would not be left tapping their fingers for minutes at a time many times a day (no, real users do not leave their browsers open all day, so the supposed cheat of "well, that would only happen to them once in the morning" is nonsense in my book).

DB: Does it matter whether it's one of those two or something else that's based on a document format standard like ODF?  Really, it's the format that's the issue, not the name of the software that complies with it.  You can't tell me that a basic word processing application that saves to ODF and follows the 10/90 rule (has the 10 percent of the features that 90 percent of the people really need) can't work in a pure Java environment.  As far as leaving browser windows open: first, a Java window needn't live in a browser window at all.  Second, I leave my productivity applications and multiple browser windows open all day on my system.  So, technically, I don't believe this to be an issue.  Also, with Java, you can probably preserve application state.  This is obviously doable with fat clients (though not many applications actually do it).  But imagine if the lights go out.  With most applications, you might lose everything.  If I'm a SaaS provider using Java, I'm going to one-up my competition with a state-saving feature.
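The state-saving idea is easy to sketch. Here's a minimal, hypothetical illustration (the class and method names are mine, not any shipping API) of an editor that snapshots its state after every change, so that a dropped connection or a power cut loses nothing since the last keystroke. In a real thin client the snapshots would be pushed to the server; a Deque stands in for that remote store here.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical sketch: an editor that snapshots state after each change.
class AutoSavingEditor {
    private final StringBuilder document = new StringBuilder();
    private final Deque<String> snapshots = new ArrayDeque<>();

    void type(String text) {
        document.append(text);
        snapshots.push(document.toString()); // "save" after every change
    }

    // Simulate the lights going out: a fresh editor restored from the store.
    static AutoSavingEditor recover(Deque<String> store) {
        AutoSavingEditor editor = new AutoSavingEditor();
        if (!store.isEmpty()) {
            editor.document.append(store.peek()); // latest snapshot survives
        }
        return editor;
    }

    String contents() { return document.toString(); }
    Deque<String> store() { return snapshots; }
}
```

With a fat client, this kind of resilience has to be bolted onto each application; with the state held server-side, it comes almost for free.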

Joel: Then we have the issue of turning the world into another mainframe farm - if you move enough code to the server that "real applications" run with only a true thin client workstation, then you've got to provide CPU support for the rest of the app's logic centrally - back to the mainframe we go. Then you have people's information held hostage by the application provider, creating "interesting" policy issues in terms of intellectual property management. Would ZDNet put all of its materials (articles being written, industry contacts, etc) on some public service with thin client access, out of its corporate data center control?

DB: I think the success of Salesforce.com pretty much makes all of these points moot.  Some SaaS providers (RightNow, for example) give you the option of self-hosting the service.   Sure, it's Mainframe 2.0.   But the advantages, particularly from the client-side point of view, are too numerous to dismiss.  If I can access useful Java apps with a cell phone, imagine what I can do with a bare-bones Pentium III with a little memory. Backup and restore?  Salesforce.com's customers don't have that problem; Salesforce.com takes care of it for them.  Need to upgrade to the new version of your software?  Just hit the refresh button after the new version is released.  Viruses? What viruses?  I say this fully acknowledging that some applications -- for example, certain types of games -- are inappropriate for this type of architecture.  As for ZDNet, when our own blog servers were incapable of including an RSS enclosure field in their feeds (to support podcasts), we did exactly that: we hosted the podcast-related blogs on an external infrastructure that had the support.    I'm willing to bet that it wasn't the first time.

Joel: Internet access is not anywhere near pervasive. How many times have you pulled out your laptop on the road somewhere far away from any WiFi signal? Happens to me all the time. How many times are you in an office building that has places with no cell service, and where WiFi is locked down for very good security reasons against casual laptop use. This happens all the time to anyone who consults or moves between offices. How many times has Internet access been problematic when work needs to get done? With a "no local code" client world, the work stops until your local cable company or telco conglomerate decides to fix the problem, and in a corporate setting you're stopping a whole organization with the failure, not one person at a time.

DB:  Again, a rehash of the pervasiveness of high-bandwidth connectivity.  It's a non-issue, and you're fooling yourself into believing that it is one. Why do you think Google wants in on lighting up all of San Francisco with a WiFi signal?  Do you think that's merely a passing dalliance in philanthropy?  For a company with a vested interest in making its largely thin-client-based applications work well everywhere, San Francisco is no fluke.  It's a sign of the investments that Google will make to ensure that there are no barriers to using all that Google.com has to offer.  And, just to reiterate what I said before, it's not as if bandwidth or access is the problem you're making it out to be.  Nine times out of ten, where there is some sort of bandwidth or access problem, it's either temporary or the real problem is politics.  Either way, market forces will see to its resolution.

Joel: Cost. Then there's the "WiFi everywhere" drumbeat. Well, I can see WiFi of one form or another ending up in most corporate and downtown settings. I don't hear anyone talking about rigging the nation's highways with WiFi.....

DB: Note that WiFi or some other wireless broadband will eventually be available on your local highway (if it isn't already).  But even assuming the applications would be as performance-hamstrung as you say they will be (I'm not agreeing), 99.99 percent of people don't do, from a highway, the sort of computing for which such signal unavailability would be a problem anyway.  The approach doesn't have to address 100 percent of the market to be viable.

Joel: ... or any of the other places people really go that are off the "office track". Using a cell phone to handle these spots is awful from a bandwidth perspective, and most importantly ridiculously expensive. 

DB:  As a user of EV-DO, I flat-out disagree.  People are paying more now for a la carte WiFi access than the cost of all-you-can-eat, $50-per-month EV-DO connectivity.  I'm using it right now.  Not only is the bandwidth there, but other points of friction (e.g., automatic radio switching and billing infrastructure) are being ameliorated as well.  To be fair, there are dead spots. But here, the rule is more like 99/1 instead of 80/20.   You just need to get broadband to the 1 percent of the planet's geography that 99 percent of all computing is done from, and the business proposition of thin-client computing is unassailable.  Even at 50/50, it's unassailable; 50 percent of the world's users is a lot of people and a huge market opportunity.

Joel:  This may change, but for now I don't see the pervasiveness argument holding water when connected to the real world.

DB: If you're so convinced, then you should continue computing under the paradigm you're defending.  No one is forcing you to enjoy the benefits of thin-client computing.  But I think it's a mistake not to give them a chance.

Joel: While I think it is possible, and eventually (far, far out) probable that the world will go towards a network model, the communications infrastructure is far from the point in terms of bandwidth or spread to make this feasible for all - and until there aren't significant coverage holes on either axis, people will stay with the fat client option that allows them more flexibility and options. Another example of this decision-making behavior is the common practice of laptop users keeping all key documents on their local hard disks, even where there is a corporate file server accessible with a VPN. They just don't want to deal with the access, delay, and connection problems, and avoiding the network ends up being the solution.

DB: I'll skip the rehash of the bandwidth/access issue since we've covered that.  On the hard drive issue, I'm willing to bet that there are a lot of people who, if they stop to think about it, are already storing the content they create on the network.  Almost all of the documents I create are saved to "the cloud" already.  Play around with GMail.  I can send you an invite if you don't already have one.  Start writing a long email, click the save button, and then watch as it auto-saves from that point forward.  Connecting to a VPN and mapping drives to some directory on a server?  What a pain that is.  Those people don't do it for the same reasons I don't.  Why should we have to figure out where to store stuff?  It should just happen naturally.  Don't want to save it to the cloud, you say?  Java has an API for USB storage.  Why can't a terminal have a USB port?   Heck, you could probably store the Java apps on a USB key as well!
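"It should just happen naturally" amounts to a storage abstraction the application picks for you. Here's a hypothetical sketch (the interface and class names are invented, and both backends are in-memory stand-ins rather than real cloud or USB APIs) of an app that tries the network first and quietly falls back to removable storage, without the user ever choosing a location:

```java
import java.util.HashMap;
import java.util.Map;

// Common contract for any place a document can live.
interface DocumentStore {
    boolean save(String name, String body);
    String load(String name);
}

// In-memory stand-in for either "the cloud" or a USB key.
class InMemoryStore implements DocumentStore {
    private final Map<String, String> files = new HashMap<>();
    private final boolean online;
    InMemoryStore(boolean online) { this.online = online; }
    public boolean save(String name, String body) {
        if (!online) return false;       // e.g. no connectivity
        files.put(name, body);
        return true;
    }
    public String load(String name) { return online ? files.get(name) : null; }
}

// The saver, not the user, decides where the document goes.
class TransparentSaver {
    private final DocumentStore cloud;
    private final DocumentStore usbKey;
    TransparentSaver(DocumentStore cloud, DocumentStore usbKey) {
        this.cloud = cloud;
        this.usbKey = usbKey;
    }
    // Returns which backend accepted the save.
    String save(String name, String body) {
        if (cloud.save(name, body)) return "cloud";
        usbKey.save(name, body);
        return "usb";
    }
}
```

Either way, the user just clicks save; where the bits land is the application's problem, not theirs.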

There's more to the Google-Sun relationship than meets the eye.  There's got to be.

  • Bandwidth dismissal naive

    Dismissing bandwidth considerations on thin clients is a trip down fantasy land.

    You keep using the example of writing an email. Writing an email is trivial. Start working with spreadsheets, business presentations, sales proposals involving graphics, Photoshop files from the marketing department, etc.

    Throw all that in the "cloud" running from thin clients and watch your best network drop to its knees.
    • What percentage of end users...

      OK, let's assume you're right.

      What percentage of users have to deal with super-complicated spreadsheets, graphics-intensive presentations (creating, not viewing, because there are plenty of net-based proof points on the viewing side), and Photoshop?

      Be liberal (guess high).

      Subtract that number from 100.

      The remainder (amounting to millions of users) represents an opportunity.

      • Sorry David, but no

        How many companies are going to voluntarily split their IT into two camps? The "road-warriors" using a different infrastructure to the "always connected"? The management overhead and document interworking complexity would be a nightmare.

        Also, you are thinking in typical spoiled-American mode, where you think of broadband access as free. Sorry, but in most of the world there are no "$50 all-you-can-eat" deals. In Europe, WiFi services typically cost $20 for 24 hours of access, much more for shorter connections, and if you DO take out a subscription there is very little roaming between networks. And you often don't get overlapping service coverage either. An airport gets paid to keep a single WiFi service available and keep the competition out.

        So, if your business is entirely in the office, you don't have a mobile sales force or consultants on the road, and you live in a place where WiFi is given away, great -- you are golden.

        The rest of us live in the real world.

        And remember the good old days of mainframe computing, when it took weeks or months to get a desperately needed feature added to the system? Heck, that's why business went out and bought PCs in the first place!
        • In David's Defence

          As a former director of a Telco, and as someone who has worked in both the US and Europe, I feel obliged to come to David's defence on the issue of bandwidth for the thin client paradigm.

          David said, more than once I think, that the real problem with access to broadband is not technical, or cost (by which I assume he also meant healthy returns for investors), but regulatory. This is 100% accurate.

          The problems you cite in Europe are a passing phase (and limited to a few countries; as a rule of thumb, the further they are from the US going east and south, the bigger the problem). That will quickly become a non-issue as Europeans realize that they are closing doors on future productivity by not encouraging open competition in telecom access markets.

          By the way, many companies already split their IT into different communities supporting, for example, many different 'builds' of OS and applications, and hardware configurations, by department, job role, external relationships, and team requirements. Many large organizations could not manage the upgrade cycle without such a sophisticated zoning system (as splitting their platform into 'camps' is often called).
          Stephen Wheeler
          • Thanks for the defense, ++


            Thanks for rising to my defense. Too many people are unaware that the problem is very much regulatory. The reason certain companies like Intel are pushing so hard on unregulated wireless spectrum (e.g., WiMAX) is that it does an end run around regulation. Once the free market works its magic in markets that have traditionally been protection rackets, you'll see a lot of change (for example, voila... you now have 50Mbps to your house... strange how you didn't have that before, and now you do without ever having seen a truck).

            One other point (this is the +): the high cost of IT lies in lack of interoperability. If, say, you have half your people on thin clients and the other half on thick, but they're all saving to ODF (and ODF actually works), the problem is largely solved. As Stephen points out, IT is *already* supporting a variety of "builds." Maybe if they can save a ton of money on one of them (the thin-client one), it would be worth their while.
      • Even that's overstating it

        The client end of any applet or application needs to deal only with user interface issues and the data that the user actually sees, is directly needed in the user's calculations, or is needed by the interface to act upon the data that the user sees.

        Spreadsheets, no matter how big, show a very limited amount of data to a user at a time. Anything done by the user that manipulates entire columns or even hundreds of thousands of records can and should be done on the host. The user will see the results immediately for the 50-100 rows (I'm being liberal here) that are visible in the interface, and it will appear to act on the entire column or all the data because it will. If the user starts to scroll down to verify that, the amount of data needed to keep up with the user is relatively small, especially with broadband.

        Although somebody else said we are heading back to the days of mainframes with this, we must recognize that mainframes had many strengths. The entire idea of a web browser is so mainframe-like that it practically refutes the entire fat client/thin server paradigm in favor of the mainframe paradigm. But it does not carry forward with it the limitations of mainframes.

        Realistically, the biggest problems with mainframes were user interface issues, and possibly cost. The user interface issues are not problems here, and the platforms that will support this technology do not rank up there with mainframes in terms of cost. Even at that, companies such as IBM always had the facts and figures to refute much of the cost issue. Thin client saves so much money in terms of software deployment, keeping applications up to date, security, backup, and so many other things, that, if not for the interface issues and concentration of costs in one area, would have kept the mainframe alive.

        Realistically, this will not be a solution for me to edit videos at home, transfer several GB from my video camera, or render things for DVD. But for those things that require a network in the first place, thin client is the way to go. That may leave out some people and many home user applications, but it can be extremely useful when it comes to communication and collaboration.

        The one thing I see missing from this is a foot in the door for most users. If this starts out free, it might gain market presence. But if this were done in partnership with ISPs, so that it could be included in the cost of connectivity, the fact that it came with my Internet connection would convince me not to buy Office.
      • You Exaggerate

        It wouldn't have to be an extremely complicated spreadsheet to present the problems you dismiss. I have broadband at home and work. I have much more bandwidth at home. There are still problems sometimes. Problems that would be unacceptable if the boss wanted answers immediately.
    • Dave is naive in general...

      "Dismissing bandwidth considerations on thin clients is a trip down fantasy land."

      I agree 100%. I have seen 10BaseT and 100BaseT LANs brought to their KNEES, with 20% to 30% packet loss or retransmission rates, just because the server was doing an across-the-network backup that jammed its network ports. The rest of the users couldn't get to the server. Whoops! Now, if one central collection point on a 100Mbps LAN can cause all of the clients accessing the server a headache, what happens when it's a 1.5Mbps T1 line? Do you know how much an OC line costs? A LOT. Does a company want to be paying for that in order to provide its clients with the same level of connectivity to their server (because it's a thin client to a remote provider) as they would have to an in-house server on inexpensive LAN technology? I think not. Let's examine a chart here*:

      OC-1 = 51.85 Mbps
      OC-3 = 155.52 Mbps
      OC-12 = 622.08 Mbps
      OC-24 = 1.244 Gbps
      OC-48 = 2.488 Gbps
      OC-192 = 9.952 Gbps
      OC-255 = 13.21 Gbps

      *Chart copy/pasted from http://www.webopedia.com/TERM/O/OC.html because I never remember the OC speeds...

      Now, I don't know OC line pricing offhand, but I know that my office is paying roughly $500/month for a T1 line.

      So yes, ignoring bandwidth is naive indeed. Thin-client computing may work someday, but I don't see having entire applications downloaded on a per-use basis across a WAN link as a legitimate possibility.

      Justin James
      • It's late and I'm feeling punchy

        so what I'll say to this is... "across-the-network backups? Who needs those if you're doin' thin clients?!"

        • Server backups

          When you stick all of your eggs in one basket, you'd better make sure that basket is well protected... with the amount of data on a thin-client server (all of those users who would normally fill 80GB drives on their clients are now using 40GB on the server), you'd *better* be doing backups, or you're out of a job when you lose data. With that amount of data, tapes are unrealistically slow and unreliable (I'm not going to go into the technical details of why I don't like tapes, but I can if you really want); really, your only good option is an across-the-network backup to a SAN or NAS device. Sure, a big shop with beaucoup bucks can have a rocking Fibre Channel setup, but most folks dump to another device over their LAN for a server-to-server backup.

          Justin James
          • Duplicate Nets?

            I don't mean to be argumentative. But, I'm lagging behind the point you're making, so I thought I'd point out where I'm not following you. Perhaps my assumptions are different than yours.

            In a thin-client situation, there is nothing to back up on the clients. A SAN should not be communicating on the work network--the one connected to the clients. (Because, if you put your storage on the work net, it's not really a SAN, is it?) Even for a small shop, an extra NIC and wire connecting each server to a separate Storage Net is still pretty cheap (even to a single NAS device, in the case of a very small shop).

            Ergo, in thin-client arrangement, there is no (or "very little") need for across-the-net backups. I could be wrong on this, that's why I'm asking.
          • I probably wasn't too clear...

            I wasn't making a point about network backups in relation to thin computing. And yes, you are right on all points. The point I was trying to make is that bandwidth is not cheap, and that even a mere backup over the network can totally clog a 100Mbps LAN connection. I've seen a network backup hitting 20% to 30% packet re-transmit rates. So what I was trying to get across is that it is not hard to completely blow out a LAN connection with heavy, sustained traffic, and as such, no one in their right mind would want each and every one of their clients pegging an OUTSIDE server across a WAN link to download their thin-client goodies each time they open an application.

            And yeah, your best bet for that backup is to simply add a second NIC on a different subnet, connect to the backup server with a crossover cable, etc. :)

            Justin James
  • Thin applets

    Regarding the bloat factor of downloaded applications, keep in mind that a huge proportion of most well-designed applications has to do with exception handling (some estimates run as high as 90%).

    That's the part that used to be programmatically swapped out to keep the memory footprint low, and it's the part that downloaded apps (or even applets) can safely load on demand or hand over to a server to handle.

    The result is similar to Microsoft's "faster boot times" for MSWinXP: the total process still takes a long time, but the user isn't left waiting.
    Yagotta B. Kidding
  • fat client forever

    I think the fat client will be around forever, at least for a lot of people. The difference is that it won't be the cutting edge anymore. It will become commoditized, which will be good news for open source and bad news for Microsoft.
  • Credibility gap appears once again...

    Dave, I swear, sometimes you are the Jayson Blair of ZDNet.

    Why do you keep conflating OpenOffice with a Java application?

    From the OpenOffice FAQ (http://www.openoffice.org/FAQs/faq-overview.html):

    "OpenOffice.org 2.0 uses Java technology to increase its functionality: Java technology is used for wizards and for the database component; its use here does not affect the licensing of either OpenOffice.org or the Java software."

    And from http://www.openoffice.org/FAQs/faq-source.html#4

    "The source is written in C++ and delivers language-neutral and scriptable functionality, including Java[tm] technology APIs."

    OpenOffice is CLEARLY not a Java application. Your entire series of articles this week is based, in large part, upon the idea that OpenOffice is a Java application, thus showing that a thin client or Java client is now a legitimate alternative to a rich client.

    Show me one major application written 100% in Java -- fully cross-platform Java at that -- that's not in "perpetual beta," that is of a level of quality such that it is usable (i.e., crashes as often as or less often than a standard Windows application), and is so useful that it is the type of application most people would have open 4 - 8 hours a day at work (IM, email, web browser, office suite, etc.). Show me one. OpenOffice would make the cut, except that only a portion of it is written in Java.

    This is my challenge. Until you can show me a single example, your version of thin client computing is still in the realm of "theoretically possible" and nowhere near "realistically possible".

    Justin James
    • Never conflated...

      I don't believe I ever suggested that OpenOffice was Java-based. What I suggested is that both Google and Sun have complementary investments in thin-client computing, as well as motive to upset the status quo. And that if there are any two companies that can bring a less-is-more thin-client office solution -- one that satisfies a lot of people's needs (the 5/95 rule) -- to market, Sun and Google are those two companies. Who cares what they call it? As long as the documents it creates are interoperable with other software (thick or thin), that's what is important.

      • Certainly implied conflation

        "The stand-still agreement, in no small way, clears the legal path for Google and other Google-like licensees (Yahoo! anyone?) to take StarOffice to market -- Java-based or not -- without fear of legal reprisal from Microsoft." (http://blogs.zdnet.com/BTL/?p=1969)

        "Java Runtime Environment on the desktop gets a life-sustaining shot of vitamin B-12, while OpenOffice-StarOffice might well become the R&D replacement and speed-to-market turbo-charge that Google needs to leap out front in the race to redefine the client computing-as-service experience. Make that mission-critical experience." (http://blogs.zdnet.com/BTL/?p=1970)

        Quotes like these are extremely misleading. Why mention Java and StarOffice/OpenOffice in the same sentence if the two ideas are totally unrelated? The first quote especially makes it seem that StarOffice/OpenOffice is either written in Java or is being converted to Java.

        Justin James
  • Short Road


    Just a suggestion, but next time you want to enlighten folk about a subject like this, try being more explicit (by repeating it a few times, say) that you are talking about a world that will be different, not just the one aspect your talkback buddies are concentrating on.

    You also don't help your case by conceding ground, such as on "Mainframe 2.0".

    The future of ICT is about services, with all the flexibility that implies. Mainframes represent a phase of ICT history when all functions, development, and operational management were centralized. This is not true of Web Services, Thin Clients, and Application Service Providers (i.e., what Salesforce is, and what Google is becoming), which are the future of ICT.

    With a mainframe, a department head had to request a new function and have oodles of patience. With an ASP, that same department head will have regular meetings with representatives only too keen to develop a specific function and provide it as a service for increased revenue, as soon as possible, thank you!

    A lot of the talk about Thick vs. Thin (from George Ou, for example, who also put in his two cents in his ZDNet blog) is hot air. But PC defenders don't see it that way, because few people stop to explain to them that the thin client is merely one component in a bigger future picture. You tried, but I still feel the absence of a cohesive big picture.

    The funny thing is that the CIOs I know understand this stuff. Maybe that says something about me, or maybe it says something about ZDNet readers...

    To be fair, I think the main reason for this is a trap we all fall into from time to time: relating our home PC experience to business computing issues. In the home, oddly, I see the fat client continuing to dominate. The reason for this has nothing to do with performance issues (which are usually far harder to meet at home than they are in business) or with making thin clients work (even News Corporation, a long-time naysayer on Net issues, has been looking at acquiring Net capabilities such as search engines). It has to do with privacy and data ownership.

    Even if I lose all my data by being dim and relying on the longevity of my PC's hard drive without backing up, at least I know who lost it, where it went (if it went anywhere) and what needs to be recovered. I also know that, whatever happens to me or my PC, my data is secure and I can access it, without charge, at any time.

    Not so with home-based ASP services. When you talk of GMail and other services, I think about how much of my data is actually stored on Hotmail; some of my folders go back to 1996. This is the part of the thin client story that needs to be sold more heavily if people are to get the big picture: services take away pain; they make life easy.

    The bandwidth issue is, as you rightly point out, really a non-issue. Nevertheless, it will continue to pop up until someone goes on the record to say that they have used thin clients without running into it. Also, a lot of telecoms regulators need to feel pressure from the rest of the industry. Just a suggestion, but writing a couple of stories on these issues would help us all.

    Thanks for fighting the New World Order corner.
    Stephen Wheeler
    • They'll need anti-gravity boots.

      Gravity is difficult to resist. Google is already proving it. I see a lot of passion out there to crack this nut.

  • The issue is: Cost and Interface

    I love the idea, but Microsoft is what 95% of the population and workforce use and know.

    Thin client will happen, Microsoft will be right there, and people will have a choice: use XYZ's interface, or use the only thing you have ever known, Microsoft.

    Ultimately, the thin client prospect will come down to cost. If Google can deploy a thin client approach and make it free, it might work. Users might learn a new interface only if it is free. Otherwise, people will pay for a Windows interface because they do NOT want to have to learn something new.

    At my place of employment, the Windows 98 to XP move was hated by users. IT loved it, but the users were getting along just fine with 98 and understood the interface, so they hated the move.

    We technocrats forget that the other 99% of computer users don't want a new interface and are terrified of having to learn something new just to accomplish a job. They are happy if it works, and right now, Windows and Office work.

    Most of the time.... :-)