That thin client pie in the sky

Summary: David Berlind's article, "And they said 'WebOffice' couldn't be done," paints a wonderful "blue sky" picture of the "not-too-distant" future. But still ...

TOPICS: Tech Industry
David Berlind's article, "And they said 'WebOffice' couldn't be done," paints a wonderful "blue sky" picture of the "not-too-distant" future. But still ... I am skeptical.

The "thin client model" of today has changed very little from the model originally developed by Sun Microsystems in the early 80s, which let client hardware boot remotely from expensive hard drives located on some remote server. The main difference is that the local client is a great deal smarter and a great deal less expensive than it once was. In those days, hardware was expensive and bandwidth was cheap. Today, hardware is really cheap and, while bandwidth is cheap by comparison to 1980, as you point out -- it is not always available. And, if it is available, it is not always sufficient for the task at hand. (Nationally, the demand for bandwidth is still expanding exponentially.)

Sure, for the text-based chores you describe (plus an occasional JPEG file), the bandwidth demands are minimal, but what about compute-intensive chores? Graphics design? Gaming? Further, every processor cycle not consumed by a thick client must instead be consumed on some central server -- plus the added overhead of managing tasks from many clients at once. Even our thin client will spend most of its time idle, waiting for us to hit the next key. Why not have that "thin client" doing productive work while it is waiting on me? That is better than sending those bits across the country to be processed elsewhere and then sent back to me.
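To put rough numbers on that round trip, here is a back-of-envelope sketch. The latency and processing figures are assumptions chosen purely for illustration, not measurements:

```python
# Illustrative comparison: handle a keystroke's worth of work locally
# vs. round-tripping it to a distant server. All figures are assumed.

LOCAL_PROCESS_S = 1e-6    # assumed: ~1 microsecond of local CPU work
NETWORK_RTT_S = 0.05      # assumed: ~50 ms cross-country round trip
SERVER_PROCESS_S = 1e-6   # assumed: the same work, done remotely

def local_latency():
    # the client's otherwise-idle cycles do the work in place
    return LOCAL_PROCESS_S

def remote_latency():
    # the bits cross the country, get processed, and come back
    return NETWORK_RTT_S + SERVER_PROCESS_S

ratio = remote_latency() / local_latency()
print(f"remote handling is roughly {ratio:,.0f}x the local latency")
```

Real applications batch their round trips, of course; the point is only that the wire dominates whenever the task itself is small.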

Sure, centrally locating data is a real plus (mainly because it shifts the responsibility for taking care of our own data to someone else -- who stands to lose a great deal more if they lose our data than if we lose our own) but does it really save us money? Does it save us time? Yes, your lost-productivity argument is sound; but, really, how often does the dreaded BSoD actually happen? Don't we need those occasional failures to remind us to take heed of the pitfalls of our unbridled faith in our technology to protect us?

In reality, the average user could buy the least expensive PC available today and pay about the same as they would pay for a thin client and do everything they will need to do. But that is not what people do -- they buy all the bells and whistles because they can. Your blue sky also assumes that there will be standards and not a plethora of competing solutions -- none of which will work together.

Your basic argument could just as easily be applied to television. Do I buy a satellite dish and be responsible for keeping it working and replacing it when the technology changes, or do I sign up for cable TV and then call them when something goes wrong? They absorb the upgrade costs but I am paying a monthly fee for the service. In the end, the financial outlay is probably the same. And whether one is more convenient than the other is in the eye of the beholder.

Don't get me wrong. I love the fact that all of my important correspondence is stored on an Exchange server that someone else backs up every night. I also love the fact that I can get at that information from my BlackBerry -- and that I can browse the web from a geographic area outside of which I rarely travel.

Still, given a choice between my BlackBerry, which gives me a "thin-client-like" experience at 80Kbps, and a full-blown PC at multiple Mbps as my only option at work and at play, I will pick the fully functional laptop every time! And so will most of your readers.
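The gap between those two links is easy to quantify. A sketch, with an assumed 1 MB file and an assumed 5 Mbps office connection standing in for "multiple Mbps":

```python
# Time to move a 1 MB attachment over an 80 Kbps BlackBerry link
# vs. an assumed 5 Mbps office connection.

FILE_BYTES = 1_000_000  # assumed 1 MB file

def transfer_seconds(bits_per_second):
    # file size in bytes, link speed in bits per second
    return FILE_BYTES * 8 / bits_per_second

blackberry_s = transfer_seconds(80_000)     # 80 Kbps
office_s = transfer_seconds(5_000_000)      # assumed 5 Mbps

print(f"80 Kbps: {blackberry_s:.0f} s; 5 Mbps: {office_s:.1f} s")
# → 80 Kbps: 100 s; 5 Mbps: 1.6 s
```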

It all sounds good -- and there are many applications for which the model makes sense -- but there are many for which it makes no sense because of insufficient bandwidth, insufficient cycles for the task, or insufficient flexibility for the user.

Thin clients will always have their place but I doubt they will ever be the predominant model.

  • I think that it's your definition of a thin client

    With a different definition of a thin client, the model is a win-win. By doing away with a workstation with gigantic, multiple hard drives, transferring some of that load, gaining the ability to download and use software not on the workstation, and being able to "cluster" with more powerful computers, you get the best of all worlds.

    If I work in an office and have a client that uses a piece of software I don't have, I could use it for the day; or if I have a rendering job that would take a week on my computer, I could do it in minutes. If I have half a terabyte of files that I rarely use, they too could be kept somewhere other than on my machine, as well as being protected by a backup (which would be a nice use of some other server as well).

    The world is not so black and white. Reducing the amount of power consumed by my machine (times 1000 for a large corporation), plus the attendant software licensing, would make a lot of sense for large corporations (and save on those air-conditioning bills as well).

    Brute force is out; elegant and useful is coming in.
  • Always and Never

    "Thin clients will always have their place but I doubt they will ever be the predominant model."

    As somebody else said in another talkback, "always" and "never" are a very long time, and I'm sure a lot will happen in a few years in terms of bandwidth amount, availability and reliability.
  • More a matter of economics & trust, not tech.

    It seems to me that this is really more a matter of (1) economics and (2) trust, not technology.

    In a large operation you may be able to achieve some savings by centralizing processing internally, but what about smaller shops? Do they utilize an on-demand ASP?

    If so, do you trust an external agency to support it? What if your network connection to them goes down? Are you ready to pay your workers to be idle and explain to your customers why their work isn't done on time?

    What if the ASP goes bankrupt? What happens to your data when they do?

    What if they get hacked? What happens to your company if your proprietary information is stolen and sold to your competitor? Even if you have redress against the provider, and you probably won't, the damage may already be done. If your competitor is first-to-market based on information stolen from your R&D on the ASP's site, then you may well go belly up. You may have been "in the right", but your company is just as dead.

    Right now: bandwidth is insufficient, networking is unreliable, and off-site hosting is untrustworthy. Given this, I can't say I expect thin-client computing to come to much.

  • Clients using file servers is NOT "thin computing"

    For some reason, a huge number of people on ZDNet, both in TalkBacks and in articles (Mr. Berlind, are you reading?) confuse a central file server with "thin computing".

    Mr. Berlind's example situation (a really bad one, at that) is simply using a web browser as a thick client application to access a central file server. Especially in the AJAX world, clients get THICKER, not THINNER! For example, I have been doing some writing on TechRepublic. Their blogging software puts such a heavy demand on my system at home that it was taking up to thirty seconds for text to appear on the screen. Each keystroke caused my cursor to become the "arrow + hourglass" cursor. Granted, my computer at home is no screamer (Athlon 1900+, 256 MB RAM), but that is what AJAX does. It requires an extraordinarily thick client to run. If I compare that AJAXed system to, say, using MS Word (a "thick client" piece of software) or SSHing to a BSD box and running vi (a true "thin client" situation), the AJAX system comes out dead last in terms of CPU usage.

    And after my system does all of the processing for formatting and whatnot, what happens? It stores the data (via HTTP POST) to a central server, which stores that information somewhere, performs a bit of error checking, and then makes a few entries into a database table. If I compare the CPU usage on my system by the AJAX interface to the CPU usage of the server to file my entry, it sure doesn't look like a "thin client/thick server" situation to me! It looks a heck of a lot closer to a "thick client/dumb file server" story.

    Some of the comments to this story are already making this mistake. "Oh, I like the idea of having all of my data on a central server." So do I. This is how everyone except small businesses and home users has been doing it for DECADES. The fact that most business people stick all of their documents onto their local hard drives is due to a failure of the IT departments to do their jobs properly, for which they should be FIRED. Why are people in marketing sticking their data on the local drive instead of on the server? Anyone who saves data locally and loses it should be fired too, because if their IT department did the job properly, this would not be possible without some hacking going on. The correct setup (at least in a Windows network, which is what most people are using, even if it's *Nix with Samba on the backend) should be that there is a policy in place which sets "My Documents" to a network drive. This network drive gets backed up every night, blah blah blah. The only things that go onto the local drive should be the operating system, software that needs to be stored locally (or is too large to carry over the network in a timely fashion), and local settings. That's it.

    Now, once we get past standard IT best practices, what is "thin client computing"? We're already storing our data on a dumb central server but doing all of the processing locally. Is AJAX "thin computing"? Not really, since the client needs to be a lot thicker than the server. Is installing Office on a central computer but running it locally "thin computing"? Not at all. Yet people (Mr. Berlind, for starters) seem to think that storing a Java JAR file or two on a central server, downloading it "on demand" and running it locally is thin computing.

    Thin computing does not occur until the vast majority of the processing occurs on the central server. That is it. Even browsing the web is not "thin computing". Note that a dinky little Linux or BSD server can dole out hundreds or thousands of requests per minute. I challenge you to have your PC render hundreds or thousands of web pages per minute. Indeed, even a Windows 2003 server can process a hundred complex ASP.Net requests in the amount of time it takes one of its clients to render one of those requests on the screen. I don't call that "thin computing".

    Citrix is "thin computing". Windows Terminal Services/Remote Desktop is "thin computing". WYSE green screens are "thin computing". X Terminals (if the "client" {remember, X Windows has "client" and "server" backwards} is not local) are "thin computing". Note what each of these systems has in common. They are focused on having the display rendered remotely, then transferred bit-by-bit to the client, which merely replicates what is rendered by the server. All the client does is transfer the user's input directly to the server, which then sends the results of that input to the client, which renders the feedback as a bitmap or text. None of these systems requires any branching logic or computing or number crunching or whatever by the client. That is thin computing.
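    That division of labor can be sketched in a few lines. This is a toy illustration, not any real protocol: a local socketpair stands in for the network, and "rendering" is just uppercasing, purely to show where the work happens.

```python
import socket

def render_on_server(conn):
    """Server side: the only place any computing happens."""
    data = conn.recv(1024)       # raw input forwarded by the client
    conn.sendall(data.upper())   # ship the finished "screen" back

# a local socketpair stands in for the wire
client_end, server_end = socket.socketpair()

client_end.sendall(b"thin clients")   # client forwards keystrokes verbatim
render_on_server(server_end)          # all logic runs server-side
screen = client_end.recv(1024)        # client merely displays the result
print(screen.decode())                # → THIN CLIENTS
```

    The client never branches on the content; it only moves bytes in and pixels (here, text) out, which is exactly the property the systems above share.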

    Stop confusing a client/server network with "thin computing". Just stop. Too many ZDNet articles and TalkBacks that I have seen do this. They talk about the wonders of thin computing, like being able to have all of the data in a central repository to be backed up, or have all of my user settings in a central repository to follow me wherever I go, or whatever. I really don't see anyone describing anything as "thin computing" that does not already exist in the current client/server model. The only thing that seems to be changing is that people are moving from systems like NFS and SMB to systems like HTTP. It's a network protocol, folks. Wake up. It's pretty irrelevant how the data gets transferred over the wire, or what metadata is attached, or whatever. It does not matter. All that matters is what portion of the processing occurs on the client versus the server, and in all of the situations people are listing, the client is still doing the heavy lifting.

    Justin James