
Infiniband trials 'due in the spring'

The first servers kitted out with the high-speed, switched-fabric interconnect should appear in pilot programmes soon, but full integration is still some way off
Written by Matt Loney, Contributor

The first servers to use the Infiniband switched-fabric interconnect are likely to find their way into pilot programmes early in the new year. Members of the Infiniband Trade Association (IBTA) say that widespread availability is due by next summer, but the first implementations are likely to be in expansion cards: PCI cards, for instance, with an Infiniband port so that several servers can be connected together.

Buyers are likely to have to wait another 12 to 18 months after the Infiniband-enabled PCI cards come out before the technology is built into server motherboards, according to Allyson Klein, marketing programme manager for Intel's advanced component division, which is heavily involved with development of the standard.

When Infiniband does appear on motherboards, it will have a massive effect on the way servers are bought and configured. Currently, most servers are built with the motherboard, storage, expansion and graphics cards in a single box. Adding extra servers to a cluster means adding more of everything, even if it's not necessary. Infiniband will let manufacturers build and sell servers in a modular way so buyers can connect and configure them depending on the job -- rather like hi-fi components.

"At the front end of the server, you'll see Infiniband used in super-dense rack-mounted servers," said Klein. Such servers are commonly called blade servers because each server -- or blade -- is simply a motherboard containing the processor and memory, which slots into a backplane. The Infiniband Trade Association has drawn up specifications for a 3U-high rack-mounted server that contains up to seven blades. "With Infiniband, these are hot-swappable," said Klein, "and in theory you can put in different blades from different manufacturers." Doing so will allow servers to share power supplies, expansion cards, storage and even graphics cards.

Infiniband's base bandwidth of 2.5Gbit/s can be increased by running several links in parallel over a single cable, and 10Gbit/s and 30Gbit/s versions will be available. Other speed gains will come from the channel-based nature of Infiniband, which lets applications talk directly to hardware without having to negotiate with the operating system.
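The arithmetic behind those figures can be sketched in a few lines, assuming the 10 and 30Gbit/s versions are simple aggregates of four and twelve 2.5Gbit/s links respectively (the function name below is illustrative, not part of any Infiniband API):

```python
# Back-of-the-envelope Infiniband bandwidth, assuming the higher-speed
# versions aggregate multiple 2.5Gbit/s links in parallel.
BASE_GBITS = 2.5  # raw signalling rate of a single Infiniband link


def aggregate_bandwidth(links: int) -> float:
    """Raw bandwidth in Gbit/s for a bundle of `links` parallel links."""
    return BASE_GBITS * links


for width in (1, 4, 12):
    print(f"{width}x link: {aggregate_bandwidth(width):g} Gbit/s")
```

On those assumptions, a four-link bundle gives the quoted 10Gbit/s and a twelve-link bundle gives 30Gbit/s.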

Several companies already build servers in a modular fashion: buyers of SGI's Origin 300 can buy processor, storage or graphics modules depending on the job. IBM's x360 servers, which went on sale in the UK this week, use a stopgap measure called remote I/O to distance the storage from the processor. And Hewlett-Packard is now taking orders for a blade server code-named PowerBar. But all these servers use proprietary technology. Infiniband, which is supported by all the main server manufacturers (Sun, IBM, Compaq, Dell and Hewlett-Packard), is based on open standards.

The IBTA's Compliance and Interoperability working group is currently working on a test suite that will allow manufacturers to test their products against each other. "Anything on the list should work with anything else on the list. That's the intention," said Klein. Early work on interoperability has seen a database cluster up and running with Infiniband products -- including servers, components and management software -- from 24 different companies, she said.

Early adopters in Europe will include BMW and CERN, who are among the 24 corporates worldwide that have signed up to the IBTA IT Programme, which gives them early access to the technology.
