Infiniband -- old before its time?

Infiniband, the new storage/server interconnect, has been snubbed by Intel and Microsoft -- but that's no reason to count it out
Written by Rupert Goodwins, Contributor

In its two-year life, Infiniband has had a tempestuous time. Proposed as a solution to maxed-out PCI-based server and storage systems, its prospects were blighted by reverses from two of its most significant early supporters. Microsoft has said that it no longer plans to include Infiniband support in .Net Server, due at the end of this year, and Intel has dropped plans to produce an Infiniband chip. Both remain involved with the Infiniband Trade Association (IBTA), the group of nearly 200 companies responsible for maintaining and propagating the standard, but their attitudes have been seen as ambivalent. Intel says that it still supports the standard, but believes its resources are better directed elsewhere.

Infiniband is a high-performance but complex specification -- the most recent revision clocks in at over 1,500 pages. It stems from two earlier initiatives: Future I/O, from Compaq, IBM and HP; and Next Generation I/O, from Dell, Hitachi, Intel, Sun and others. In 1999 the two camps came together in the IBTA, producing version 1.0 of the specification in 2000 and 1.0a in 2001.

Unlike the PCI bus, Infiniband is designed specifically for data centre applications, connecting hosts and I/O devices through a switched or routed fabric. It is a serial connection with a basic signalling speed of 2.5 gigabits per second; for higher throughput, four or twelve lanes can be run in parallel to give the 4X and 12X rates. Infiniband can connect disk arrays, SANs, LANs, servers and clusters together, but every link is point to point: two devices are joined by a high-bandwidth, low-latency channel and exchange commands and data through a message-passing scheme. Storage devices can be commanded to transfer data directly between themselves, rather than going through a host, and this third-party I/O is an important factor in Infiniband's performance.
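As a rough back-of-the-envelope illustration -- not a calculation taken from the specification text itself -- the usable data rates follow from that 2.5Gbps signalling speed and Infiniband's 8b/10b line encoding, under which only eight of every ten bits on the wire carry data:

```python
# Rough Infiniband link-rate arithmetic: 2.5 Gb/s signalling per lane,
# with 8b/10b line encoding leaving 8 of every 10 bits for data.

SIGNAL_RATE_GBPS = 2.5        # per-lane signalling rate
ENCODING_EFFICIENCY = 8 / 10  # 8b/10b encoding overhead

for lanes, name in [(1, "1X"), (4, "4X"), (12, "12X")]:
    raw = lanes * SIGNAL_RATE_GBPS
    data = raw * ENCODING_EFFICIENCY
    print(f"{name}: {raw:.1f} Gb/s signalling, {data:.1f} Gb/s of data")
```

That puts the 12X rate at 30Gbps on the wire, or 24Gbps of usable data.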

Infiniband is closer to a network like Ethernet than to a bus like PCI. It can route information through back-up paths if part of the fabric stops working, it has a variable packet size, and its flexible message structure can accommodate single messages of up to two gigabytes. Connections are limited to ten or so metres over copper and up to 10 kilometres over fibre; although Infiniband packets could be routed further than this, doing so would erode the latency figures. The technology is best suited to interconnecting cabinets in a data centre or, via fibre, across town.
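To make those message and packet sizes concrete, here is a minimal sketch of how many packets a maximum-size message occupies. The 4,096-byte path MTU is an assumption for the example, the largest of the packet payload sizes the specification permits:

```python
# How many packets does a maximum-size (2 GB) Infiniband message need?
# The MTU is negotiated per path; 4,096 bytes is assumed here, the
# largest of the permitted values (256, 512, 1024, 2048 and 4096).

MAX_MESSAGE_BYTES = 2 ** 31  # two-gigabyte ceiling on a single message
path_mtu = 4096              # assumed negotiated MTU for this example

packets = -(-MAX_MESSAGE_BYTES // path_mtu)  # ceiling division
print(f"{packets:,} packets of {path_mtu} bytes")  # 524,288 packets
```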

Each Infiniband installation can contain multiple independent fabrics, each of which can contain thousands of subnets -- and each subnet can support tens of thousands of nodes. Everything is closely managed, with the subnet manager a particularly important part of the system: it controls the routing of packets, maintains alternative links, configures the topology of the fabric and monitors performance.
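By way of a toy illustration only -- a real subnet manager exchanges management datagrams with every switch and adapter on the fabric, which this does not attempt -- the rerouting duty can be sketched as a graph search over invented node names:

```python
# Toy sketch of one subnet-manager duty: finding a route across the
# fabric and falling back to an alternative path when a link fails.
from collections import deque

# Hypothetical fabric: node -> set of directly linked neighbours.
fabric = {
    "hostA":   {"switch1", "switch2"},
    "switch1": {"hostA", "switch2", "storage"},
    "switch2": {"hostA", "switch1", "storage"},
    "storage": {"switch1", "switch2"},
}

def route(fabric, src, dst, down=frozenset()):
    """Breadth-first search for a path, skipping any failed links."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in fabric[path[-1]] - seen:
            if frozenset((path[-1], nxt)) not in down:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no surviving path

print(route(fabric, "hostA", "storage"))          # primary path
# Fail the hostA-switch1 link; traffic is rerouted through switch2.
print(route(fabric, "hostA", "storage",
            down={frozenset(("hostA", "switch1"))}))
```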

So what explains Microsoft's and Intel's reticence when it comes to active support? The storage world is a hotbed of excitement and change at the moment, with a variety of options being touted as the next big thing in data centre and clustered storage. Gigabit and ten-gigabit Ethernet, SCSI over IP (iSCSI) and IP over Fibre Channel are all technologies that mesh well with current expertise and experience, something that can't be said for Infiniband.

There are some things that Infiniband does better than any alternative, but both Intel and Microsoft are betting that these won't translate into large sales across the board. Larger data centres and sites with very strict latency requirements may adopt the technology soon; others may choose to hold off. Intel also chose to develop its first Infiniband product around the 1X speed, while most interest has centred on the faster options -- rather than play catch-up, the chip company may have chosen to retire from the scene while the market settles down. It is still actively developing the APIs and the compliance and performance tests, and continues to co-chair the IBTA.

Products are currently expected in the first half of next year. Initial applications are likely to be network-attached storage (NAS) systems, linking multiprocessor servers to storage in an easily scalable configuration. Whether the bigger vision -- of distributed, city-wide data centres using Infiniband across the board -- ever comes about is harder to say, but it's far too soon to write off the technology as a good idea at a bad time.
