Innovation

0.1 terabit/s Ethernet...

Written by Rupert Goodwins, Contributor

While I was happily peering into holes at CERN and admiring the new data centre that'll be distributing a gigabyte per second of data worldwide, others were showing that technology's more than ready to keep up. On Monday, 100 gigabits a second of continuous data was sent over 4000km of fibre cable from Tampa, Florida to Houston, Texas, and back again. Poor data.

The new technology behind this revolves around a single chip, a packet marshalling engine built from an off-the-shelf Xilinx field programmable gate array (FPGA) configured to run algorithms from the University of California at Santa Clara. This takes the raw 100 gigabit/second data and splits it into ten parallel 10 gigabit/second streams, which are then modulated onto ten different wavelengths of light. Shove those down a fibre, repeat the process in reverse on reception, and there you have it. Everything - even the chip parallelising the stream - is off the shelf, except the algorithm.
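
To make the dealing-out step concrete, here's a rough sketch in Python. It's emphatically not the demo's actual code: the frame size, the round-robin striping and the names are my own guesses, and real hardware does this in gates, not software. It just shows what splitting one fast stream across ten slower lanes - one per wavelength - looks like.

```python
# Illustrative sketch only: the article doesn't spell out the striping scheme,
# so this shows one plausible approach - deal fixed-size frames from a single
# fast stream round-robin onto ten slower lanes (one lane per wavelength).

NUM_LANES = 10      # ten 10Gbps channels carrying one 100Gbps stream
FRAME_SIZE = 1500   # hypothetical frame size in bytes

def stripe(stream: bytes, num_lanes: int = NUM_LANES,
           frame_size: int = FRAME_SIZE) -> list[list[bytes]]:
    """Chop a byte stream into frames and deal them round-robin across lanes."""
    lanes = [[] for _ in range(num_lanes)]
    for i in range(0, len(stream), frame_size):
        lanes[(i // frame_size) % num_lanes].append(stream[i:i + frame_size])
    return lanes

def merge(lanes: list[list[bytes]]) -> bytes:
    """Inverse of stripe(), assuming the lanes arrive with no skew at all."""
    out = []
    for position in range(max(len(lane) for lane in lanes)):
        for lane in lanes:
            if position < len(lane):
                out.append(lane[position])
    return b"".join(out)

# Round trip: what goes in one end comes out the other.
data = bytes(range(256)) * 100
assert merge(stripe(data)) == data
```

The assumption buried in merge() - that all ten lanes deliver in lockstep - is exactly the one the real system can't make, which is where the sequencing trick below comes in.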

The algorithm wraps each 10Gbps packet in special sequencing information which lets the far end of the system compensate for the propagation-time differences between the ten channels and reassemble the packets in the right order. At this sort of speed there are so many packets in flight that it's not efficient to use the normal method of looking at the packet number in the header and holding back the longer message until any holes are filled. Beyond that, I'd have to go and do more research - in the days I wrote low-level network code, 10Mbps was pretty spiffy.
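
Purely to illustrate the general idea - this is not the researchers' algorithm, just the textbook sequence-number-and-reorder approach in miniature, with hypothetical names throughout - the receiving side can look something like this:

```python
# Minimal sketch, assuming each frame carries a sequence number: the receiver
# releases frames in order even though the ten lanes deliver them with
# different propagation delays.
import heapq

def tag_frames(frames):
    """Sender side: pair every frame with a sequence number."""
    return [(seq, frame) for seq, frame in enumerate(frames)]

def reassemble(tagged_frames):
    """Receiver side: yield frames in sequence order as each gap closes.

    tagged_frames is an iterable of (seq, frame) pairs arriving in whatever
    order the skewed lanes happen to deliver them.
    """
    pending = []    # min-heap of (seq, frame) pairs not yet released
    next_seq = 0    # sequence number the output stream is waiting for
    for seq, frame in tagged_frames:
        heapq.heappush(pending, (seq, frame))
        while pending and pending[0][0] == next_seq:
            yield heapq.heappop(pending)[1]
            next_seq += 1

# Frames 0..5 arrive out of order, as if from lanes with unequal delay.
arrivals = [(2, b"C"), (0, b"A"), (1, b"B"), (4, b"E"), (3, b"D"), (5, b"F")]
assert b"".join(reassemble(arrivals)) == b"ABCDEF"
```

Whatever the real engine does to avoid stalling on missing packets at 100Gbps will be considerably cleverer than a heap, but the bookkeeping problem is the same.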

This all goes to prove my collapsing exponential law of communication channels. That says that adding parallel channels starts cheap but gets expensive, while speeding up a serial channel starts expensive but gets cheap. Thus, innovation in communications starts with a serial channel that goes as fast as it can, with extra identical channels added as demand increases. But all those channels cost: it might not cost so much, for example, to go from two to four to eight lines, but from eight to sixteen to thirty-two?

At that point, all sorts of things go wrong, from the price of the cables to the amount of interference each channel picks up from its neighbours. You can fix that - but the cost is horrendous, and by then, as if by magic, the basic technology's advanced enough that you can replace your thirty-two channels with a single channel that runs sixty-four times as fast.

And then you just have to add an extra channel to double the throughput. Off we go again.
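
If you want to see the shape of that argument in numbers, here's a toy model. The cost curves are entirely invented for illustration - nothing here is measured from real kit - but they capture the two trends: parallel lanes get superlinearly dearer as crosstalk and cabling bite, while the single fast link starts expensive and cheapens as the technology matures.

```python
# Toy model with made-up cost curves, purely to illustrate the crossover.

def parallel_cost(lanes: int) -> float:
    """Hypothetical: each lane adds cost, and crowded lanes add more
    (cabling, crosstalk between neighbours)."""
    return lanes * (1.0 + 0.05 * lanes)

def serial_cost(speedup: int) -> float:
    """Hypothetical: one link 'speedup' times faster starts dear, but its
    price per doubling falls as the underlying technology matures."""
    return 10.0 + 2.0 * speedup ** 0.5

for lanes in (2, 4, 8, 16, 32):
    p, s = parallel_cost(lanes), serial_cost(lanes)
    choice = "parallel" if p < s else "serial"
    print(f"{lanes:2d} lanes: parallel {p:5.1f} vs serial {s:5.1f} -> {choice}")
```

In this made-up example the answer flips from parallel to serial somewhere between eight and sixteen lanes - which is roughly where the argument above says it should.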

We've seen this in PC disks, where parallel SCSI fattened out then got swallowed by a serialised version - now turning into multiport systems - and on PC memory buses. Radio's a bit more peculiar: the transition from parallel OFDM to single-channel timed-pulse ultrawideband got hijacked - messily - by the OFDM brigade, but sure as eggs is eggs along comes MIMO to create multiple channels in space and time from single frequencies.

Purists may claim that I'm merely choosing my level of abstraction to bolster a rather shaky argument. I merely point out that this is a very long and honourable tradition.
