IBM puts distributed computing on the grid

Big Blue plans to invest US$4 billion into "computer farms" around the globe that will let businesses gain access to computer resources that aren't physically on site.
Written by Stephen Shankland, Contributor
IBM plans to take a distributed computing concept from the theoretical realm and make it a corporate reality.

"Grid" technology, which distributes computing jobs and databases across numerous servers, has largely been an academic phenomenon. But on Thursday IBM plans to give the idea a corporate twist with its so-called Grid Computing Initiative.

The project aims to create an arrangement in which companies, instead of buying a fixed amount of processing power--which sometimes sits idle and at other times is overloaded--will be able to pay to dip into a much larger pool of computing capacity from Big Blue as they need it.

"We'll make available to the customer the opportunity to gain access to computer resources that aren't physically on site," said Dave Turek, the company's vice president of Linux and emerging technologies. Those computing resources will be able to respond quickly "as customer needs expand or contract".

Grids are clusters of interconnected computers that can collectively tackle large computational problems or provide quicker access to very large bodies of data.

Though a large number of academic grids are already up and running--indeed, IBM recently won contracts to provide two European universities with them--Big Blue hopes customers will pay to use its grid-computing resources.

"We expect the technology to find its way rapidly into more conventional commercial development," Turek said.

IBM's Global Services division plans to build US$4 billion worth of such "computer farms" around the globe, he said.

Grids are a close relative of distributed computing efforts to spread computational tasks among large numbers of computers, the best-known example being the SETI@home effort to search for extraterrestrial communications among radio-telescope signals.

Companies such as Entropia hope to capitalize on distributed-computing technology by paying ordinary Web users for their spare computer processing cycles. The companies then sell access to the resulting Internet-based grid to commercial concerns such as genetics researchers.

But distributed computing, like many nascent high-tech ideas people are trying to turn into start-up businesses, has fallen on hard times.

Shunning Entropia's model, IBM prefers to use a smaller number of comparatively centralized servers for its effort--no surprise, given that profit margins are much better on high-end machines than on desktop PCs. Sidestepping the financial implications, Turek argues that using servers makes it easier to address concerns about security and privacy.

Big Blue is fostering development of grid technology by supporting the Global Grid Forum, which is working on standardizing some grid technologies, and the open-source Globus research effort to build grids.

IBM rival Sun Microsystems is also hard at work on distributed computing, through its Grid Engine software, and it too hopes a larger community will back the technology.

Last week Sun released the software as an open-source project. With open-source software, anyone may freely use or modify a program, and often a community of volunteer programmers forms to improve the software.

IBM's two recent contract wins for academic grid efforts involve a high-energy physics data-storage facility at Oxford University--one of nine centers for the United Kingdom's National Grid--and a large grid connecting five Dutch universities, Turek said.

Irving Wladawsky-Berger, the man in charge of IBM's Linux and self-healing eLiza efforts, will lead the company's Grid Computing Initiative. Turek said the company wouldn't know for a few months how much money IBM plans to spend on the initiative.
