
InfiniBand--old before its time?

InfiniBand, the new storage/server interconnect, has been snubbed by Intel and Microsoft, but Rupert Goodwins thinks that's no reason to count it out.
Written by Rupert Goodwins, Contributor

In its two-year life, InfiniBand has had a tempestuous time. Proposed as a solution to maxed-out PCI-based server and storage systems, its prospects have been blighted by reverses from two of its most significant early supporters. Microsoft has said that it no longer plans to include InfiniBand support in .Net Server, due at the end of this year, and Intel has dropped plans to produce an InfiniBand chip. Both remain involved with the InfiniBand Trade Association (IBTA), the group of nearly 200 companies responsible for maintaining and propagating the standard, but their attitudes have been seen as ambivalent. Intel says that it still supports the standard, but that its resources are better directed elsewhere.

InfiniBand is a high-performance but complex specification--the most recent revision clocks in at more than 1,500 pages. It stems from two earlier initiatives: Future I/O, from Compaq, HP, and IBM; and Next Generation I/O, from Dell, Hitachi, Intel, Sun, and others. In 1999, the two camps merged into the IBTA, which produced version 1.0 of the specification in 2000 and revision 1.0a in 2001.

Unlike the PCI bus, InfiniBand is designed specifically for data center applications, connecting hosts and I/O devices through a switched or routed fabric. It is a serial connection with a basic speed of 2.5 gigabits per second; for higher throughput, four or twelve lanes can be run in parallel, giving the 4x and 12x rates. InfiniBand can connect disk arrays, SANs, LANs, servers, and clusters, but it only supports point-to-point links: two devices are joined by a high-bandwidth, low-latency channel and exchange commands and data through a message-passing scheme. Storage devices can be commanded to transfer data directly between themselves rather than going through a host, and this third-party I/O is an important factor in InfiniBand's performance.
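To put the lane arithmetic in concrete terms, here is a minimal Python sketch. It assumes the original spec's 8b/10b line coding, under which 10 signal bits carry 8 data bits, so a 2.5 Gbps lane delivers roughly 2 Gbps of usable data; the numbers are illustrative back-of-the-envelope figures, not benchmark results.

```python
# Illustrative arithmetic only: a 1x InfiniBand link signals at 2.5 Gbps,
# and 8b/10b encoding means 10 signal bits carry 8 data bits.
# Wider links run 4 or 12 lanes in parallel.

SIGNAL_RATE_GBPS = 2.5        # per-lane signaling rate
ENCODING_EFFICIENCY = 8 / 10  # 8b/10b line coding

for lanes in (1, 4, 12):
    raw = lanes * SIGNAL_RATE_GBPS
    data = raw * ENCODING_EFFICIENCY
    print(f"{lanes:>2}x link: {raw:4.1f} Gbps signaling, {data:4.1f} Gbps data")
```

Running this gives 2.5/10/30 Gbps of signaling for the 1x, 4x, and 12x rates, or roughly 2/8/24 Gbps of actual data once the encoding overhead is paid.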

InfiniBand is closer to a network like Ethernet than to a bus like PCI. It can route information through backup paths if part of the fabric stops working, and it has a variable packet size and a very flexible message structure that can accommodate single messages of up to 2GB. Connections are limited to 10 or so meters over copper and up to 10 kilometers over fibre. Although it would be possible to route InfiniBand packets further than this, doing so would degrade the latency figures; InfiniBand is best suited to interconnecting cabinets in a data center or, via fibre, across town.
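The distance limit is largely physics. Light in fibre travels at roughly two-thirds the speed of light in a vacuum, about 2x10^8 m/s, so every kilometre of cable adds around five microseconds each way. A quick sketch (approximate figures, propagation delay only, ignoring switch and protocol overhead) shows why a cross-town run is workable but longer hauls start to hurt:

```python
# Back-of-the-envelope propagation delay: signals in fibre travel at
# roughly 2e8 m/s (refractive index ~1.5). Switch and protocol latency
# are ignored; this is the floor that distance alone imposes.

SPEED_IN_FIBRE_M_PER_S = 2.0e8  # approximate

for distance_m in (10, 10_000, 100_000):  # copper run, cross-town, beyond
    one_way_us = distance_m / SPEED_IN_FIBRE_M_PER_S * 1e6
    print(f"{distance_m:>7} m: ~{one_way_us:7.2f} us one way, "
          f"~{2 * one_way_us:7.2f} us round trip")
```

At 10 meters the wire adds a negligible 0.05 microseconds; at 10 kilometers it is already 50 microseconds each way, which is why stretching the fabric much further undermines the low-latency guarantees that justify it.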

Each InfiniBand installation can contain multiple independent fabrics, each of which can contain thousands of subnets--and each subnet can support tens of thousands of nodes. Everything is closely managed, with the subnet manager being a particularly important part of the system: it controls the routing of packets, maintains alternative links, configures the topology of the fabric, and checks performance.
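To make the failover role concrete, here is a toy Python sketch of the idea: a manager keeps a primary and an alternative path for each destination and reroutes when a link on the primary path goes down. The class, names, and structure are purely illustrative--this is not the actual InfiniBand subnet-management protocol.

```python
# Toy model of a subnet manager's failover duty: keep alternative
# paths on hand and fall back to one when a link fails.
# Everything here is hypothetical and for illustration only.

class SubnetManager:
    def __init__(self):
        self.paths = {}          # destination -> [primary path, alternatives...]
        self.down_links = set()  # links reported as failed

    def register(self, dest, primary, *alternatives):
        self.paths[dest] = [primary, *alternatives]

    def link_failed(self, link):
        self.down_links.add(link)

    def route(self, dest):
        # Return the first registered path to dest with no failed links.
        for path in self.paths[dest]:
            if not any(link in self.down_links for link in path):
                return path
        raise RuntimeError(f"no working path to {dest}")


sm = SubnetManager()
# Paths are lists of point-to-point links between hypothetical switches.
sm.register("disk-array",
            [("swA", "swB")],                  # primary: direct hop
            [("swA", "swC"), ("swC", "swB")])  # alternative: via swC
print(sm.route("disk-array"))   # uses the direct hop
sm.link_failed(("swA", "swB"))
print(sm.route("disk-array"))   # reroutes via swC
```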

So what about Microsoft's and Intel's reticence when it comes to active support? The storage world is a hotbed of excitement and change at the moment, with a variety of options being touted as the next big thing in data center and clustered storage. Ethernet (Gigabit or ten Gigabit), SCSI over IP (iSCSI), and IP over Fibre Channel are all technologies that mesh well with current expertise and experience, something that can't be said for InfiniBand.

There are some things that InfiniBand does better than any alternative, but both Intel and Microsoft are betting that these won't translate into large sales across the board. Larger data centers and sites with very strict latency requirements may adopt the technology soon; others may choose to hold off. Also, Intel chose to develop its first InfiniBand product around the 1x speed, while most interest has centered on the faster options--rather than play catch-up, the chip company may have chosen to retire from the scene while the market settles down. Intel is still actively developing the APIs, compliance and performance tests, and co-chairing the IBTA.

Products are currently expected in the first half of next year, with initial applications in network-attached storage (NAS) systems, linking multiprocessor servers to storage in an easily scalable configuration. Whether the bigger vision--of distributed, citywide data centers using InfiniBand across the board--ever comes about is harder to say, but it's far too soon to write off the technology as a good idea at a bad time.

