
The end of applications?

Written by Justin Rattner, Contributor

Sometimes someone says something at a conference that really knocks me for a loop. Such was the case at the High Performance Computer Architecture Conference last year. In typical panel fashion, a group of us were each given a few minutes to state our position on the future of computer architecture.

The panelists were chosen to represent a broad spectrum of architectural views, from the traditional (x86) to the more radical (Cell), along with a software viewpoint. The hardware panelists more or less stuck to their respective party lines, but the software speaker said something that I won’t soon forget: “Since all of the interesting applications have been written, why is it that you guys are still inventing new architectures? What IT managers want now is just lower cost hardware and easier to manage systems. That’s what you should be working on!”

Now I like a provocative panelist as much as anyone, but I just couldn’t swallow the line about the end of applications. I’m squarely in the camp that believes that the truly compelling computer applications have yet to be built.

At first I put the applications comment under the same heading as other famously wrong-headed thoughts about computing, such as “only six electronic digital computers would be required to satisfy the computing needs of the entire United States” (Howard Aiken) and “there is no reason anyone would want a computer in their home” (Ken Olsen). The more I thought about it, however, the more I realized how easy it is to reach the conclusion that the era of new applications is over. There are at least three factors at work here.

First, virtually all the mundane clerical tasks of the 19th and 20th centuries are now done with computers. Today’s productivity suites, for example, are regularly criticized as bloatware reflecting the fact that developers continue to add features, while not adding to the fundamental utility of the toolset. Databases are enormously more useful than the filing cabinets and card catalogs they replaced, but new releases have less to do with new capabilities and more to do with scalability, manageability, and security.

Second, the human interface has not evolved much beyond what Chuck Thacker’s Alto personal computer and Alan Kay’s Smalltalk windows and browsers demonstrated some thirty years ago. While the fidelity of the graphics interface is much better, most of what we see today is just eye candy.

Third, computer hardware evolves at a rate that is largely governed by Moore’s Law. Ten or fifteen years ago, general-purpose performance was improving at almost the same rate as transistor budgets were increasing. In other words, processor performance doubled every 18 to 24 months, just as the number of transistors in a square millimeter of die area doubled in that same time period. For a number of years, this behavior was known as Joy’s Law, after Bill Joy of Sun Microsystems, one of the first people to observe the trend. Unfortunately, two-fold performance gains are no longer occurring every two years, despite the fact that Moore’s Law continues to hold to that two-year cadence.

With much less than a 2x improvement in processor performance every two years, it becomes harder and harder for developers to build, let alone imagine, applications with dramatically new capabilities. Add to this the fact that other aspects of hardware performance are barely improving at all (e.g., disk latency) and you have plenty of reasons to believe that the applications party is over.
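To put that compounding argument in concrete numbers, here is a minimal sketch in Python; the specific rates (2x every two years versus an assumed 1.2x) are hypothetical figures chosen only to illustrate how quickly the gap widens over a decade:

    # Illustrative only: cumulative speedup from compounding a per-period
    # improvement factor over a number of years.
    def cumulative_gain(factor_per_period, years, period_years=2.0):
        return factor_per_period ** (years / period_years)

    decade = 10
    old_pace = cumulative_gain(2.0, decade)   # doubling every 2 years -> ~32x
    slow_pace = cumulative_gain(1.2, decade)  # assumed 1.2x every 2 years -> ~2.5x

    print(f"2.0x per 2 years over {decade} years: {old_pace:.1f}x")
    print(f"1.2x per 2 years over {decade} years: {slow_pace:.1f}x")

A decade at the old pace hands developers roughly thirty times the compute to play with; at the slower pace they get only a few times more, which helps explain why dramatically new capabilities become harder to imagine.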

With software not showing much in the way of functional improvements and hardware gains slowing, it is not surprising that some people are willing to declare the end of applications. It also explains why the guidance to architects is to focus on reducing cost and improving security. Why would anyone think otherwise?

I suspect by now a good number of readers are more than anxious to point out that scripting languages, RSS feeds, mash-ups, wikis and so forth are, in fact, the new applications, but I would beg to differ. While most of the current Web technologies provide improvements in the way applications are built and information is shared, they do not represent fundamentally new uses or changes in the nature of the man-machine interface. If we are going to break through to the next level of computing applications, we have to attack the problem at a deeper level and apply dramatically greater amounts of computing power than we have to date.

Just how I see us getting there is the topic for next time. Your thoughts and suggestions are, of course, most welcome. We’ve been on this plateau for too long a time already. I’m less concerned about how we get off of it than I am about how soon we do it.
 
