
InfiniBand – Boldly Going Where No Architecture Has Gone Before

Written by Archie Hendryx, Contributor

Back in 2005 we all knew that Fibre Channel and Ethernet would eventually support transmission rates of 10 Gbit/s and above, and now in 2010 that day has pretty much dawned on us. Amid the excitement of those days, the concern was always that the host’s I/O bus would need to transmit data at the same rate. Yet even with the advances that eventually gave us PCI Express, a parallel bus such as PCI can by its nature only have its transmission rate increased to a limited degree, so how was this potential barrier ever going to be overcome? The solution being touted at the time was InfiniBand. Not only did it carry a name that seemed straight out of a Star Trek episode, but it also promised a ‘futuristic’ I/O technology that replaced the PCI bus with a serial network. That was five years ago, and bar a few financial services companies running trading systems, I hadn’t really seen any significant implementations or developments of a technology marketed with the phrase ‘to InfiniBand and beyond’. But two weeks ago that suddenly changed.

Before I delve into the latest development of an architecture bold enough to imply ‘infinity’ within its name, it is worth establishing what exactly justifies the ‘infinite’ nature of InfiniBand. As with most architectures, devices in InfiniBand communicate by means of messages. That communication is transmitted in full duplex via an InfiniBand switch, which forwards the data packets to the receiver. Also like Fibre Channel, InfiniBand uses 8b/10b encoding and can aggregate four or twelve links to produce a high transmission rate in both directions. Host Channel Adapters (HCAs) and Target Channel Adapters (TCAs) form the end points: the HCAs act as the bridge between the InfiniBand network and the system bus, while the TCAs make the connection between InfiniBand networks and the peripheral devices connected via SCSI, Fibre Channel or Ethernet. In other words, for SAN and NAS folk, HCAs are the equivalent of PCI bridge chips while TCAs are in the same vein as HBAs or NICs.
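To put some rough numbers on those link widths, here is a quick back-of-the-envelope sketch in Python, purely for illustration. It assumes the commonly quoted per-lane signalling rates of 2.5, 5 and 10 Gbit/s for SDR, DDR and QDR and factors in the 8b/10b encoding overhead; treat the figures as assumptions rather than gospel.

```python
# Back-of-the-envelope InfiniBand link arithmetic (assumed figures:
# SDR = 2.5, DDR = 5, QDR = 10 Gbit/s signalling per lane, 8b/10b coding).

SIGNALLING_GBPS = {"SDR": 2.5, "DDR": 5.0, "QDR": 10.0}
ENCODING_EFFICIENCY = 8 / 10          # 8b/10b: 8 data bits carried per 10 line bits
LINK_WIDTHS = (1, 4, 12)              # 1x, 4x and 12x aggregated links

for rate, gbps in SIGNALLING_GBPS.items():
    for width in LINK_WIDTHS:
        signalling = gbps * width
        data = signalling * ENCODING_EFFICIENCY
        print(f"{width:>2}x {rate}: {signalling:5.1f} Gbit/s signalling, "
              f"{data:5.1f} Gbit/s usable data per direction")
```

Remember the link is full duplex, so each of those figures applies in both directions at once.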

Additionally, HCAs can be used not just for interprocessor networks and for attaching I/O subsystems, but also for multi-protocol switches such as Gbit Ethernet switches. Herein lies the promise of a sound future for InfiniBand, due to its independence from any particular technology. Indeed, the standard is not limited to the interprocessor network segment, covering error handling, routing, prioritizing and the ability to break messages up into packets and reassemble them. Messages themselves can be a read or write operation, a channel send or receive, a multicast transmission or even a reversible transaction-based operation. With RDMA between the HCA and TCA, rapid transfer rates are easily achieved, as the HCA and TCA each grant the other permission to read from or write to its memory. Once that permission is granted, the read or write location is provided instantly, enabling the superior performance boost. With such processes, and control of information and its route, occurring at the bus level, it’s not surprising that the InfiniBand Trade Association views the bus itself as a switch. Add to the equation that InfiniBand uses Internet Protocol Version 6 addressing, and you’re faced with an almost ‘infinite’ amount of device expansion as well as potential throughput.
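If that description of RDMA sounds abstract, the toy sketch below models the idea in Python: one endpoint registers a region of its memory and hands back a key, and the other then writes into that region directly. The class and method names (Endpoint, register_memory, rdma_write) are hypothetical stand-ins for the sake of illustration, not the real InfiniBand verbs API.

```python
# Conceptual sketch of the RDMA exchange described above; hypothetical
# objects and method names, not the real InfiniBand verbs API.

class Endpoint:
    """Stands in for an HCA or TCA that exposes a region of its memory."""
    def __init__(self, name, size):
        self.name = name
        self.memory = bytearray(size)
        self.grants = {}                      # key -> (offset, length, writable)

    def register_memory(self, offset, length, writable=True):
        """Grant a remote peer permission to access a memory region; return a key."""
        key = len(self.grants) + 1            # stand-in for an InfiniBand rkey
        self.grants[key] = (offset, length, writable)
        return key

    def rdma_write(self, peer, key, data):
        """Place data directly into the peer's registered memory region."""
        offset, length, writable = peer.grants[key]
        if not writable or len(data) > length:
            raise PermissionError("region not writable or too small")
        peer.memory[offset:offset + len(data)] = data
        # Once the key is known, the transfer goes straight to the peer's
        # memory; no further per-operation permission exchange is modelled.


hca = Endpoint("HCA", 4096)       # host side, bridging to the system bus
tca = Endpoint("TCA", 4096)       # target side, fronting SCSI/FC/Ethernet devices
rkey = tca.register_memory(offset=0, length=1024)
hca.rdma_write(tca, rkey, b"block of payload")
print(tca.memory[:16])            # bytearray(b'block of payload')
```

The point of the model is the handshake: permission and location are exchanged once, and subsequent transfers bypass that negotiation entirely, which is where the performance boost comes from.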

So fast forward to the end of January 2010, and I finally read headlines such as ‘Voltaire’s Grid Director 4036E delivering 2.72 terabits per second’. At last the promise of InfiniBand is beginning to be fulfilled, as a product featuring 34 ports of 40 Gbit/s InfiniBand, a collective 2.72 terabits per second, proved this was no longer Star Trek talk. With an integrated Ethernet gateway that bridges traffic between Ethernet-based networks and the InfiniBand fabric, the Voltaire 4036E is one of many new developments we will soon witness utilising InfiniBand to provide unsurpassed performance. With the performance requirements of ERP applications, virtualization and ever-growing data warehouses always increasing, converging Fibre Channel and Ethernet with InfiniBand networks into a unified fabric now seems the obvious step forward in terms of scalability. Couple that with the cost savings on switches, network interface cards, power/cooling, cables and cabinet space, and you have a converged network which incorporates an already existent Ethernet infrastructure.
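The headline figure, by the way, is straightforward arithmetic once you remember that each port is full duplex; the quick check below reproduces it.

```python
# Sanity check on the Grid Director 4036E headline figure:
# 34 ports x 40 Gbit/s per direction, counted in both directions.
ports = 34
gbit_per_direction = 40
total_gbit = ports * gbit_per_direction * 2      # full duplex
print(f"{total_gbit} Gbit/s = {total_gbit / 1000:.2f} Tbit/s")   # 2720 Gbit/s = 2.72 Tbit/s
```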

InfiniBand suppliers such as Mellanox and Voltaire may have their work cut out for them when it comes to marketing their technology in the midst of the emerging 10 Gigabit Ethernet evolution, but by embracing it they may just ensure that InfiniBand does indeed last the distance of ‘infinity and beyond’.
