Or, at least for a long time. So says a ZDNet reader who set the keys of his keyboard ablaze to put me in my place regarding one of the more interesting possibilities that could come out of the journey that Sun and Google embarked on yesterday. (Point-by-point breakdown in a second.)
Middleware like Java has proven to be disruptive as well, but its direct effects haven't been felt nearly as much on the desktop as its progenitors would have liked by now. So, when two companies with highly aligned and demonstrated interests in rich computing on thin clients get together, anyone's first instinct should be to zero in on where the best chances for revolution lie. Quite frankly, by the time the announcement was done, I had the same "tastes great/less filling" feeling that many, including ZDNet reader Sam Hiser, had (yawn). We (the press) dropped everything for this? Based on what was said, I didn't detect the remotest chance of the relationship living up to its potential. But, as I said in that first blog, this relationship probably shouldn't be judged by its cover (the first announcement). There's too much alignment of mutual interests -- mutual technologies, mutual enemies, etc. -- for there not to be more to this relationship than was discussed yesterday.
But to readers like Joel (I'll omit his last name and company until he gives me permission to publish it) who respectfully took the time not just to tell me that I've lost my marbles, but also to explain why, I believe a response is deserved. Here's what Joel (whose thoughts echoed those of many others) said, along with my point-for-point reply:
Joel: While the concept of desktop java as "thin client" is intriguing marketing, applications of substance still require real code to run. Real code = real download delays, especially for folks who are in and out of apps on a regular basis. These download delays will be variable, given the nature of shared networks, and variable response speed drives people nuts. Variable page delay is one of the most frustrating parts of the Internet experience, and one that drives people away from busy sites in droves. Fat client applications do not present this behavior unless it is the result of something the user did, like start another application while an application process is in motion. It is one thing to occasionally wait for a news page to download. It's another to be trying to get ready for a meeting, and not be able to finish or print a document because somebody's server on the Internet, or the link to it, happens to be slow or unavailable at the moment.
DB: Java is more interactive than static Web page loads. Bandwidth isn't the problem you make it out to be for a majority of connected workers and homes (residential broadband penetration is now at 53 percent and estimated to reach 71 percent by 2010). Verizon's FiOS runs at 15 Mbps, and let's not forget that the electric companies will soon be in the game. We're just getting started. So, where access/bandwidth is a problem, it's a temporary one for the larger market at best.
Joel: Putting logic on the server as a way of dealing with download times is one solution, but that just trades download time for runtime latency and server CPU load. It seems that however the Internet is structured, that latency will be variable. It has not made economic sense to build a network that is structurally capable of handling peak load without any latency. That would imply a monstrous amount of wasted bandwidth during most of normal operation, and I cannot see how the general populace would pay for this excess. Society's proven track record has been to focus on "good enough" and "cheaper".
DB: This is a variation on the argument that bandwidth is a limited resource (most arguments against thin clients can't let this point go). The limitations are neither technical nor cost-based; they're bureaucracy-based, and workarounds like WiMax will force the regulatorium to finally let the bits flow over the wired infrastructure that's been there for a while (by the way, see how BT is trialing WiMax with 100 users). Regarding hosting costs, that's a problem most service providers would love to have. It means they have customers who like the service and want more of it. Salesforce.com, one Software as a Service (SaaS) provider, probably thought about that problem before opening its doors. Overload doesn't seem to bother the company, though.
Joel: OpenOffice or StarOffice are hardly examples of thin client architecture.
DB: I couldn't agree more. Furthermore, the OpenOffice.org (OO.o) connection had me thoroughly confused. I thought OO.o was open source. That OO.o was involved at all in the negotiations means Sun should be more transparent about OO.o's governance. The early reports (for example, this one from ComputerWorld) said the deal would involve StarOffice -- Sun's non-open-source kissing cousin to OO.o. Not only would this have made more sense from a "What does Sun control?" point of view, but also from a legal perspective, since Sun was unable to negotiate protection for OO.o licensees when it inked its "We (Sun) won't sue you (Microsoft) for patent infringement over .Net if you don't sue us over StarOffice" agreement last year. Distributors of OO.o like Red Hat were left exposed (and I'm not sure what this means for IBM, which is including OO.o code in its Workplace Managed Client offering). Why Google would want to touch that with a ten-foot pole, much less something shorter, I can't figure out. Nevertheless, neither suite is thin. That said, if anyone is capable of coming up with a slimmed-down version of either, it's Sun and Google. Ultimately, it doesn't matter what it's called. The disruptive element of some sort of thin-client-based word processing, spreadsheet, and presentation suite from Sun and/or Google won't be the brand name. It'll be the architecture itself as well as the formats (e.g., OpenDocument Format) it supports. If, for example, someone can offer a thin-client productivity suite with robust support for ODF, the Commonwealth of Massachusetts, which recently standardized on that format, might be a customer. Presumably, in the best interests of its taxpayers, Massachusetts would be interested in a disruptive architecture that could prove to have a lower total cost of ownership than the one it's using today.
This is a particularly good time for Massachusetts to go back to square one since it has to pretty much do that anyway now that it looks like it will be ditching Microsoft Office (there's still time for Office to save itself).
Joel: They (StarOffice, OO.o) are just different fat clients, using Java as the intermediate platform. They are, however, a decent example of "functionality takes code volume, no matter which tool you build with". Imagine the bandwidth required to download OpenOffice as a Java applet, at a speed that users would not be left tapping their fingers for minutes at a time many times a day (no, real users do not leave their browsers open all day, so the supposed cheat of "well, that would only happen to them once in the morning" is nonsense in my book).
DB: Does it matter whether it's one of those two or something else that's based on a document format standard like ODF? Really, it's the format that's the issue, not the name of the software that complies with it. You can't tell me that a basic word processing application that saves to ODF and that follows the 10/90 rule (has the 10 percent of the features that 90 percent of the people really need) can't work in a pure Java environment. As far as leaving browser windows open goes, first, a Java application's window needn't be a typical browser window. Second, I leave my productivity applications and multiple browser windows open all day on my system. So, technically, I don't believe this to be an issue. Also, with Java, you can probably preserve application state. This is obviously doable with fat clients (though not many applications actually do it). But imagine if the lights go out. With most applications, you might lose everything. If I were a SaaS provider using Java, I'd one-up my competition with a state-saving feature.
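The state-saving idea is simple enough to sketch. Here's a minimal, hypothetical Java example -- the class name, file name, and the choice to save locally are my assumptions (a hosted service would persist to its own servers instead); the point is just that the app periodically writes the document somewhere durable, so a power failure costs at most one save interval of work:

```java
import java.io.IOException;
import java.nio.file.*;

// Minimal sketch of an auto-save/state-preserving feature. The class
// and file names are hypothetical, not any real product's API.
public class AutoSaver {
    private final Path target;

    public AutoSaver(Path target) {
        this.target = target;
    }

    // Save safely: write a temp file first, then move it into place,
    // so an interruption never leaves a half-written document behind.
    public void save(String documentText) throws IOException {
        Path tmp = target.resolveSibling(target.getFileName() + ".tmp");
        Files.writeString(tmp, documentText);
        Files.move(tmp, target, StandardCopyOption.REPLACE_EXISTING);
    }

    // Restore the last saved state, or an empty document if none exists.
    public String restore() throws IOException {
        return Files.exists(target) ? Files.readString(target) : "";
    }
}
```

In practice, a caller would wire save() to a timer (a ScheduledExecutorService firing every few seconds, say), which is essentially what GMail's auto-save does on the server side.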
Joel: Then we have the issue of turning the world into another mainframe farm - if you move enough code to the server that "real applications" run with only a true thin client workstation, then you've got to provide CPU support for the rest of the app's logic centrally - back to the mainframe we go. Then you have people's information held hostage by the application provider, creating "interesting" policy issues in terms of intellectual property management. Would ZDNet put all of its materials (articles being written, industry contacts, etc) on some public service with thin client access, out of its corporate data center control?
DB: I think the success of Salesforce.com pretty much makes all of these points moot. Some SaaS providers (RightNow, for example) give you the option of self-hosting the service. Sure, it's Mainframe 2.0. But the advantages, particularly from the client-side point of view, are too numerous to dismiss. If I can access useful Java apps with a cell phone, imagine what I can do with a bare-bones Pentium III with a little memory. Backup and restore? Salesforce.com's customers don't have that problem; Salesforce.com takes care of it for them. Need to upgrade to the new version of your software? Just hit the refresh button after the new version is released. Viruses? What viruses? I say this fully acknowledging that some applications -- for example, certain types of games -- are inappropriate for this type of architecture. As for ZDNet, when our own blog servers were incapable of including an RSS enclosure field in their feeds (to support podcasts), we did exactly that: we hosted the podcast-related blogs on an external infrastructure that had the support. I'm willing to bet that it wasn't the first time.
Joel: Internet access is not anywhere near pervasive. How many times have you pulled out your laptop on the road somewhere far away from any WiFi signal? Happens to me all the time. How many times are you in an office building that has places with no cell service, and where WiFi is locked down for very good security reasons against casual laptop use. This happens all the time to anyone who consults or moves between offices. How many times has Internet access been problematic when work needs to get done? With a "no local code" client world, the work stops until your local cable company or telco conglomerate decides to fix the problem, and in a corporate setting you're stopping a whole organization with the failure, not one person at a time.
DB: Again, a rehash of the argument about the pervasiveness of high-bandwidth connectivity. It's a non-issue, and you're fooling yourself into believing that it is one. Why do you think Google wants in on lighting up all of San Francisco with a WiFi signal? Do you think that's merely a passing dalliance in philanthropy? To the extent that we're talking about a company with a vested interest in making its largely thin-client-based applications work well, everywhere, San Francisco is no fluke. It's a sign of the investments that Google will make in order to ensure that there are no barriers to using all that Google.com has to offer. And, just to reiterate what I said before, it's not like bandwidth/access is the problem you're making it out to be. Nine times out of ten, where there is some sort of bandwidth/access problem, it's either temporary or the real problem is politics. Either way, market forces will see to its resolution.
Joel: Cost. Then there's the "WiFi everywhere" drumbeat. Well, I can see WiFi of one form or another ending up in most corporate and downtown settings. I don't hear anyone talking about rigging the nation's highways with WiFi.....
DB: Note that WiFi or some other wireless broadband will eventually be available on your local highway (if it isn't already). But even assuming the applications would be as performance-hamstrung as you say they will be (I'm not agreeing), 99.99 percent of people don't do the sort of computing from a highway for which signal unavailability would be a problem anyway. The approach doesn't have to address 100 percent of the market to be viable.
Joel: ... or any of the other places people really go that are off the "office track". Using a cell phone to handle these spots is awful from a bandwidth perspective, and most importantly ridiculously expensive.
DB: As a user of EV-DO, I flat out disagree. People are paying more now for a la carte WiFi access than the cost of all-you-can-eat, $50-per-month EV-DO connectivity. I'm using it right now. Not only is the bandwidth there, but other points of friction (e.g., automatic radio switching and billing infrastructure) are being ameliorated as well. To be fair, there are dead spots. But here, the rule is more like 99/1 than 80/20. You just need to get broadband to the 1 percent of the planet's geography where 99 percent of all computing is done, and the business proposition of thin-client computing is unassailable. Even at 50/50, it's unassailable. 50 percent of the world's users is a lot of people and a huge market opportunity.
Joel: This may change, but for now I don't see the pervasiveness argument holding water when connected to the real world.
DB: If you're so convinced, then you should continue computing under the paradigm you're defending. No one is forcing you to enjoy the benefits of thin-client computing. But I think it's a mistake not to give them a chance.
Joel: While I think it is possible, and eventually (far, far out) probable that the world will go towards a network model, the communications infrastructure is far from the point in terms of bandwidth or spread to make this feasible for all - and until there aren't significant coverage holes on either axis people will stay with the fat client option that allows them more flexibility and options. Another example of this decision-making behavior is the common practice of laptop users to keep all key documents on their local hard disk, even where there is a corporate file server accessible with a VPN. They just don't want to deal with the access, delay and connection problems and avoiding the network ends up being the solution.
DB: I'll skip the rehash of the bandwidth/access issue since we've covered that. On the hard drive issue, I'm willing to bet that there are a lot of people who, if they stop to think about it, are already storing the content they create on the network. Most of the documents I create are being saved to "the cloud" already. Play around with GMail (I can send you an invite if you don't already have one). Start writing a long email, click the save button, and then watch as it auto-saves from that point forward. Connecting to a VPN and mapping drives to some directory on a server? What a pain that is. Those people don't do it for the same reasons I don't. Why should we have to figure out where to store stuff? It should just happen naturally. Don't want to save it to the cloud, you say? Java has an API for USB storage. Why can't a terminal have a USB port? Heck, you could probably store the Java apps on a USB key as well!
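For the simple case of saving documents, you don't even need a dedicated USB API: once the operating system mounts a USB key, a Java application sees it as an ordinary directory and can use plain file I/O. A hedged sketch (the mount paths and class name below are illustrative guesses; real mount points vary by OS and configuration):

```java
import java.io.File;

// Hypothetical helper for a thin terminal: find a mounted, writable
// USB key by probing common mount points. Once found, documents can
// be saved there with ordinary java.io calls.
public class RemovableStore {
    private static final String[] CANDIDATES = {
        "E:\\",            // typical Windows drive letter for a USB key
        "/media/usbkey",   // typical Linux automount location
        "/Volumes/USBKEY"  // typical macOS mount location
    };

    // Return the first mounted, writable candidate, or null if none.
    public static File findKey() {
        for (String path : CANDIDATES) {
            File f = new File(path);
            if (f.isDirectory() && f.canWrite()) {
                return f;
            }
        }
        return null;
    }
}
```

A production version would enumerate mounts rather than probe a fixed list, but the point stands: local removable storage on a thin client is just a filesystem path to a Java app.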
There's more to the Google-Sun relationship than meets the eye. There's got to be.