
The future of the Internet

Link computers at four major research institutions via an ultra-high-speed broadband network, and you might get a glimpse of the real future of the Internet.
Written by Dana Coffield, Contributor
The last thing the US needs is another underused high-speed test bed network designed to look for the next great Internet application, right?

But link computers at four major research institutions via an ultra-high-speed broadband network, and you might get a glimpse of the real future of the Internet.

The National Science Foundation earlier this month announced the Distributed Terascale Facility (DTF), a US$53 million, three-year project linking computing power at four major research institutions via a 40-gigabit-per-second pipe provided by Qwest Communications International. IBM will contribute geographically distributed Linux servers, and Intel will contribute its powerful Itanium family of processors.

The idea is to prove the commercial and scientific viability of a virtual machine room, or computing facility, that lets researchers tap processing power in many locations for work on data-intensive problems, such as climate, biology, genome, protein or combustion modeling. "This is the supercharger," says Wesley Kaplow, chief technology officer of Qwest's government systems division.
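
In rough terms, the virtual machine room behaves like a scheduler that splits a data-intensive job into pieces, farms them out to whichever clusters have spare cycles, and combines the answers. The sketch below is purely illustrative: the chunked work function and the use of local processes as stand-ins for the four partner sites are assumptions for demonstration, not part of the DTF's actual middleware.

# Hypothetical sketch of the "virtual machine room" idea: split a
# data-intensive job into chunks, farm them out, and combine the results.
# The work function and the use of local processes as stand-ins for the
# four partner sites are illustrative assumptions, not DTF software.
from concurrent.futures import ProcessPoolExecutor

SITES = ["NCSA", "Argonne", "Caltech", "SDSC"]  # the four DTF partners

def simulate_chunk(chunk_id: int) -> float:
    """Stand-in for one slice of a climate or combustion model."""
    start, stop = chunk_id * 100_000, (chunk_id + 1) * 100_000
    return float(sum(i * i for i in range(start, stop)))

if __name__ == "__main__":
    # One worker per "site"; a real grid scheduler would instead dispatch
    # each chunk over the 40-Gbps network to a remote cluster.
    with ProcessPoolExecutor(max_workers=len(SITES)) as pool:
        results = list(pool.map(simulate_chunk, range(16)))
    print(f"combined result from {len(results)} chunks: {sum(results):.3e}")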

"There's nothing particularly experimental in lighting up four 10-Gbps channels," Kaplow says. "And this issue really isn't whether you can make the facility. The pieces are there, the computer clusters are there, the networking technology and the bandwidth are no longer just theory. The time has come to put all the pieces together."

Big experiments generate tremendous amounts of data that, as numbers on a page, aren't meaningful. The DTF, however, should allow very high-resolution visualisations of those experiments. A researcher could create a model of a storm, walk into it, take slices out of it and request that the computation proceed in a different direction.

At the enterprise level, the DTF may help prove that, as the costs of processing and bandwidth drop, it is more efficient to harness corporate computing power in two locations to work on a single manufacturing, design or rendering problem than it is to fly tapes of the data back and forth.
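
A back-of-envelope comparison shows why the network wins once pipes the size of the DTF's are available. The dataset size and courier time below are illustrative assumptions, not figures from the project.

# Back-of-envelope comparison: move a dataset over a fast pipe vs. fly tapes.
# Dataset size and courier time are illustrative assumptions.
DATASET_BYTES = 1e12      # 1 TB of simulation output (assumed)
LINK_BPS = 40e9           # the DTF's 40-gigabit-per-second backbone
COURIER_HOURS = 24        # overnight shipment of tapes (assumed)

transfer_minutes = DATASET_BYTES * 8 / LINK_BPS / 60
print(f"network transfer: {transfer_minutes:.1f} minutes")    # about 3.3 minutes
print(f"tape courier:     {COURIER_HOURS * 60:.0f} minutes")  # 1,440 minutes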

"We effectively are improving on the Internet's elimination of distance and time barriers by making shared access to massive data - whether it's output from a radio telescope or scientific computer simulations - a routine endeavor," says Dan Reed, director of the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign. The network will link Reed's lab, Argonne National Laboratory, the California Institute of Technology and the San Diego Supercomputer Center at the University of California.

Not long ago, observers lamented that the NSF's very-high-performance Backbone Network Service, the University Corporation for Advanced Internet Development's Abilene project and the Defense Advanced Research Projects Agency's SuperNet failed to spawn many new, advanced Net applications or a meaningful transfer of technology to the private sector. But Kaplow says the DTF is different.

"The reason this was appealing to Qwest was that it wasn't a network in search of a mission, or computers searching for a network," he says. "It's where a network, computer, middleware, software and applications all come together."

 
