The thousand-fold game of leapfrog

The supercomputers of tomorrow seem unthinkably powerful, but it's really business as usual

Back in the early 1980s, technologists sighed over "3M" computers — workstations that combined a megabyte of memory, a megapixel display and a megaflop of processing power. The original IBM PC fell far short of those lofty ideals, and the Unix-based boxes that could meet the specification cost as much as a large car. One could dream.

3G computers arrived fifteen years later with little fanfare. These days it is impossible to buy a mainstream computer with anything as slow as a gigaflop processor or as parsimonious as a gigabyte disk drive, and a gigabyte of RAM is unexceptional.

3T is next. Intel claims that twenty of its dual-core Montecito Itanium 2 processors will provide a teraflop, at a cost comparable to that of the first 3M computers in the early 1980s. More prosaically, the virtual computer that the distributed computing project SETI@home assembles from spare processing power already clocks in at around 15 teraflops. A terabyte of hard disk is affordable enough today that people combine four 250GB drives for personal video recording; of all these technologies, storage is advancing the fastest.
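The arithmetic behind these milestones is simple back-of-the-envelope scaling. As a sketch (the teraflop, chip count and disk figures come from the text; the per-chip and per-core numbers are derived from them):

```python
# Back-of-the-envelope arithmetic for the 3T milestones.
# Headline figures are from the article; per-chip numbers are derived.

GIGA = 10**9
TERA = 10**12

# Twenty dual-core chips delivering a teraflop implies:
chips = 20
flops_per_chip = TERA // chips          # 50 gigaflops per chip
flops_per_core = flops_per_chip // 2    # 25 gigaflops per core

# Four 250GB drives for personal video recording:
disk_capacity_gb = 4 * 250              # a full terabyte

# Each prefix step (mega -> giga -> tera -> peta) is a thousand-fold leap.
assert TERA // GIGA == 1000
```

Each "3X" generation is therefore not a single leap but three separate thousand-fold games of leapfrog, one each for memory, storage and processing.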

Right on cue, the first components of 3P are appearing. Next year, says EMC, its Symmetrix DMX-3 storage system will be able to manage a petabyte. The petaflop is trickier to call. IBM and Cray have talked of producing such systems between now and 2010: memory speed, rather than raw processor power, is the obstacle. Meanwhile, the Japanese are planning a 10 petaflop system by the end of the decade.

There is no doubt that we will continue on this path, for as long as we can think of things to do with ever more potent computers. With no shortage of real problems calling for more calculating power — climate change, artificial intelligence and genetic modelling all have claims on our attention — it is gratifying that our dreams of the past have such a habit of coming true.