MIT research shows the future of datacenter networking

Summary: High-performance Fastpass technology reduces lag by more than an order of magnitude

MIT researchers will present a paper next month at the annual meeting of the ACM Special Interest Group on Data Communication (SIGCOMM) describing a network management system they call the “no-wait datacenter.” The researchers have experimentally shown that the system can reduce network transmission queue length by over 99 percent.

In testing conducted in cooperation with Facebook, at one of the company's data center facilities, the researchers demonstrated latency reductions that effectively eliminated the normal queue. Even during the heaviest traffic, they report, the average latency per request dropped from 3.56 microseconds to 0.23 microseconds.
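Note that the 99 percent figure refers to queue length, while the latency numbers above represent a somewhat smaller relative improvement. A quick check of the reported figures:

```python
# Latency figures from the Fastpass experiments reported above (microseconds).
before_us = 3.56
after_us = 0.23

reduction = (before_us - after_us) / before_us
print(f"Average latency reduction: {reduction:.1%}")  # roughly 93.5%
```

Still a drop of more than an order of magnitude, as the summary states.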

[Animation: network traffic directed by the Fastpass arbiter]

The MIT Fastpass model replaces the standard decentralized networking model, in which each node decides on its own when, where, and how to send data, with a centralized one: a component the researchers call an “arbiter” makes all of the routing decisions.
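Fastpass itself allocates traffic with a pipelined maximal-matching algorithm; the toy sketch below (all names invented) only illustrates the centralized idea: endpoints ask the arbiter for transmission slots, and the arbiter grants the earliest timeslots in which neither the sender nor the receiver is already busy, so no queue ever builds up at a switch.

```python
from collections import defaultdict

class ToyArbiter:
    """Toy centralized scheduler: grants each (src, dst) request the
    earliest timeslots in which neither endpoint is already occupied.
    This is an illustration of the centralized model only, not the
    matching algorithm Fastpass actually uses."""

    def __init__(self):
        self.busy = defaultdict(set)  # timeslot -> endpoints already scheduled

    def request(self, src, dst, n_slots):
        grants, t = [], 0
        while len(grants) < n_slots:
            if src not in self.busy[t] and dst not in self.busy[t]:
                self.busy[t].update((src, dst))
                grants.append(t)
            t += 1
        return grants

arbiter = ToyArbiter()
print(arbiter.request("A", "B", 2))  # [0, 1]
print(arbiter.request("A", "C", 1))  # [2] -- A is busy in slots 0 and 1
print(arbiter.request("C", "D", 1))  # [0] -- both endpoints free in slot 0
```

Because every grant comes from one place, conflicting transmissions are never scheduled into the same slot, which is what drives queues toward zero.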


Recognizing that this sounds somewhat counterintuitive, Hari Balakrishnan, Fujitsu Professor in Electrical Engineering and Computer Science and one of the paper’s coauthors, said: “It’s not obvious that this is a good idea.” But their testing showed that even with the lag necessary to send the requests through the arbiter, the overall performance of the network could be vastly improved.

Their testing indicated that a single eight-core arbiter machine could handle 2.2 terabits of data per second, which, according to their announcement, is the equivalent of 2,000 gigabit-per-second connections running at full speed. The researchers believe this could scale to a network of as many as 1,000 switches.
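A back-of-envelope check shows why those two numbers line up:

```python
# Sanity check of the reported arbiter capacity against the
# "gigabit connections" equivalence quoted in the announcement.
arbiter_tbps = 2.2                  # terabits/second handled by one 8-core arbiter
link_gbps = 1                       # one standard gigabit connection

equivalent_links = arbiter_tbps * 1000 / link_gbps
print(equivalent_links)  # 2200.0 -- on the order of 2,000 gigabit links
```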

The paper’s authors acknowledge that you can’t run out and build this system in the world’s largest datacenters today, but they believe the model can be used to build a highly scalable, centralized system that could deliver faster, more efficient, and less expensive networking.

Topics: Data Centers, Networking
