P2P: My favorite stupid fad of 2001

It was Napster that lit the peer-to-peer fire, but Larry Seltzer thinks peer-to-peer networking is anything but new or innovative.
Written by Larry Seltzer, Contributor
Very often you'll see a technology implemented not because there's any good reason to implement it, but simply because it's possible to do so. My favorite recent example of this is peer-to-peer networking applications.

The craze began with Napster, a free, wildly popular service for the mass-fencing of stolen intellectual property. Stop the presses! It was free and popular, therefore it must make sense for business. The other assumption, an old and tired one by now, is that there must be a way to make money with it.

It's worth noting that peer-to-peer networking is anything but new or innovative. Every version of Windows since Windows for Workgroups (known at the time as "Windows for Warehouses," after all the copies that sat in them unsold) has had peer networking built into the operating system. Back then (1992, I think) peer-to-peer was an acceptable way to go if you didn't have the money or IT expertise to run a real server-based network. Guess what? Things haven't changed much.

Try to imagine any complex network application in a peer-to-peer environment. If your client system must communicate with peers directly instead of through a server, it has to send out multiple copies of its data, and that runs into two problems. First, most less-expensive Internet connections, including cable modems and most DSL, are asymmetric, meaning that upstream communications are slower than downstream. Even on a 56K modem, upstream is no more than 33.6K. This is usually OK because such connections aren't meant to serve data, but serving data is exactly what peer-to-peer asks them to do. Second, if the client only needed to transmit its data once, to a single server, that would be fine; instead it has to transmit the data to every other peer with an interest in it.
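The arithmetic behind that complaint is easy to sketch. The numbers below are illustrative (a 100 KB payload is my own assumption, not a figure from any particular application), but the 33.6K upstream limit is the one cited above:

```python
# Illustrative arithmetic: pushing one update to a single server
# versus pushing a copy directly to each interested peer.
# The payload size is a hypothetical example.

def upload_seconds(payload_kb, upstream_kbps, copies):
    """Seconds to push `copies` copies of a payload over an asymmetric link."""
    return payload_kb * 8 * copies / upstream_kbps

UPSTREAM_KBPS = 33.6   # 56K modem: upstream tops out around 33.6 kbit/s
PAYLOAD_KB = 100       # a hypothetical 100 KB document revision

# One copy, to a central server:
print(upload_seconds(PAYLOAD_KB, UPSTREAM_KBPS, copies=1))   # ~23.8 seconds
# Direct copies to nine interested peers:
print(upload_seconds(PAYLOAD_KB, UPSTREAM_KBPS, copies=9))   # ~214 seconds
```

The downstream side of the same link could receive that revision in a fraction of the time; it's the repeated trips up the slow side of an asymmetric connection that hurt.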

It also bothers me a little that in a peer-to-peer application, the data is usually stored redundantly on multiple systems. This creates the possibility of the copies falling out of sync--although keeping them consistent is a fundamental job of the program, so you have to give the app some credit. There is also the waste of redundant storage, but nothing in this world is cheaper than bits on a disk, so no biggie.

Like I said, it was Napster that lit the peer-to-peer fire, but even there it was a weird idea. It's obvious to me that a central server would be far more efficient for music distribution than Napster's peer-to-peer. I have no doubt that they chose peer-to-peer to try to escape liability for the fact that they were facilitating copyright infringement (and they failed in this attempt). Central administration of the tune database would also let Napster remove the lower quality copies, the completely fake copies, and in general address a lot of problems. I don't see any legitimate advantage to the peer-to-peer approach. Take out nefarious motives like this and gee-whiz angles and there's not much argument left for peer-to-peer.

The most famous example of peer-to-peer networking in a business context is Groove by Groove Networks. Groove is a groupware system in which work occurs in a "shared space"--a conceptual data store where users can collaborate on documents, drawings, and so on. I haven't worked with Groove, but friends who have say it's a very cool program, and I trust their assessment. What makes it cool, though, is at best only modestly connected to any peer-to-peer characteristics.

The interesting thing about Groove is that, by default, the shared space exists on each user's computer, and Groove synchronizes the changes that occur among them by communicating directly between clients. In this way it is a peer-to-peer application; the conventional method would be to base the shared space on a server and have all of the clients synchronize through it.
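For the curious, the replication model described above can be reduced to a few lines. This is a minimal sketch of the idea only--not Groove's actual protocol, and all the names are invented--in which each peer keeps its own copy of the shared space and pushes every change directly to every other peer:

```python
# Minimal sketch of peer-to-peer shared-space replication.
# Not Groove's actual protocol; class and variable names are invented.

class Peer:
    def __init__(self, name):
        self.name = name
        self.space = {}    # this peer's local copy of the shared space
        self.others = []   # direct links to the other peers

    def edit(self, key, value):
        """Apply a change locally, then replicate it to each peer directly."""
        self.space[key] = value
        for peer in self.others:   # N-1 separate uploads per change
            peer.space[key] = value

peers = [Peer("alice"), Peer("bob"), Peer("carol")]
for p in peers:
    p.others = [q for q in peers if q is not p]

peers[0].edit("doc.txt", "draft 1")
# Every local copy of the shared space now agrees:
assert all(p.space["doc.txt"] == "draft 1" for p in peers)
```

In the conventional server-based version, `edit` would upload the change once and the server would fan it out--which is precisely the bandwidth argument made earlier.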

There are two interesting arguments for peer-to-peer in this case. If you have a group that's geographically dispersed, they're not likely to be connected at all times to a common server. A corollary to this argument is that mobile workers need to be able to work offline and remotely, even if they normally work on a high-speed network. The second main argument is that small businesses and certain other small groups can't afford a server for such functions.

Try as I might, I just don't see why peer-to-peer is advantageous for dispersed groups or mobile users. The only argument against central servers for such groups is availability in the event of server failure, but designing your whole architecture around the possibility of catastrophic failure doesn't make sense to me. As for remote and offline users, you don't need peer-to-peer to support them: just look at Lotus Notes, the more famous creation of Groove's creator, Ray Ozzie.

I can buy the small business argument, but only to a point. Servers are just plain cheap these days, though it's a fair point that administering a complex networking product is too much for many small businesses. The answer there is to buy the application as a hosted service and let someone else administer the server. Groove seems to be in this business too.

And in fairness to Groove, it isn't a pure peer-to-peer application; it can use relay servers and other techniques to scale far beyond what a pure peer-to-peer design could manage. In fact, Groove Networks' own Web site spends less time on the peer-to-peer bandwagon than the average review of its products does.

Like many fads before it, peer-to-peer is not completely useless. I've set up plenty of peer-to-peer networks for small organizations I didn't expect to outgrow their small needs, and those networks served them well. Peer-to-peer is cheaper and, if the network is small, less complicated than a "proper" network. But if you're not a very small organization, beware of ideas that don't scale--not to mention old wine in new bottles.

What's your experience with peer-to-peer? E-mail us or post your thoughts in our Talkback forum below.

Larry has written software and computer articles since 1983. He has worked for software companies and IT departments, and has managed test labs at National Software Testing Labs, PC Week, and PC Magazine.
