Routing scalability has long been a problem in networking research. Now, computer scientists at UC San Diego (UCSD) have developed a new algorithm to improve the routing efficiency of networks. 'Called XL, for approximate link state, the algorithm increases network routing efficiency by suppressing updates from parts of the system -- updates which force connected networks to continuously re-calculate the paths they use in the great matrix of the Internet.' The results, presented today at the ACM SIGCOMM Conference, show that XL outperforms standard algorithms, in some cases reducing the number of routing messages ten-fold. Read more...
You can see above how "the XL algorithm developed by computer scientists at UC San Diego significantly outperforms standard link-state and distance-vector algorithms, speeding routing in computer and communications networks." (Credit: UCSD) The XL results are shown in green and are clearly better than those of the other algorithms, at least in terms of communication overhead. For your viewing pleasure, here is a link to a larger version of this figure.
This research work was done at UCSD's Center for Networked Systems (CNS) by the Systems and Networking group. The team was composed of Kirill Levchenko, a Ph.D. student working with three professors: Stefan Savage, Ramamohan Paturi, and Geoffrey Voelker.
Let's move to the UCSD news release for an introduction to this routing problem. "'Routing in a static network is trivial,' say the authors. 'But most real networks are dynamic -- network links go up and down -- and thus some nodes need to recalculate their routes in response.' The traditional approach, said Stefan Savage, professor of computer science at UC San Diego, 'is to tell everyone; flood the topology change throughout the network and have each node re-compute its table of best routes -- but that requirement to universally communicate, and to act on each change, is a big problem.'"
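To make the traditional approach concrete, here is a minimal sketch (my own illustration, not code from the paper) of what every node in a classic link-state network does after a topology change is flooded to it: re-run Dijkstra's shortest-path algorithm over the full graph. The graph and costs are invented for the example.

```python
import heapq

def dijkstra(graph, source):
    """Compute shortest-path distances from source over a link-state graph.
    graph: {node: {neighbor: link_cost}}"""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry, already found a shorter path
        for v, cost in graph.get(u, {}).items():
            nd = d + cost
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# In classic link-state routing, every topology change is flooded to all
# nodes, and each node re-runs this computation over the whole graph --
# the "universally communicate, and act on each change" cost Savage describes.
graph = {
    "A": {"B": 1, "C": 4},
    "B": {"A": 1, "C": 2},
    "C": {"A": 4, "B": 2},
}
print(dijkstra(graph, "A"))  # {'A': 0, 'B': 1, 'C': 3}
```

Even this tiny example shows why flooding hurts at scale: a single link-cost change anywhere triggers this recomputation at every node in the network.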
With their new routing algorithm, the researchers reduced the overhead of route computation by an order of magnitude. How did they achieve these results? Here are some explanations from the research team. "'Being able to adapt to hardware failures is one of the fundamental characteristics of the Internet,' Levchenko said. 'Our routing algorithm reduces the overhead of route re-computation after a network change, making it possible to support larger networks. The benefits are especially significant when networks are made up of low-power devices or slow links.' The real technical innovation of their work, said another of the authors, Geoffrey M. Voelker, 'is in how information about changes in the network is propagated. The XL routing algorithm propagates only some updates, reducing the number of updates sent through the network.'"
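The idea of propagating only some updates can be sketched with a toy suppression rule. This is a deliberate simplification of the XL approach for illustration only, not the paper's actual soundness/completeness criteria: cost increases are always flooded (downstream nodes may be relying on the affected link), while cost decreases are flooded only if they changed this node's own best route.

```python
def should_propagate(old_cost, new_cost, best_route_changed):
    """Toy selective-propagation rule (a simplification, not XL's exact criteria).
    Cost increases are always flooded, since other nodes may be routing over
    the affected link; cost decreases are flooded only when they actually
    changed this node's best route."""
    if new_cost > old_cost:
        return True  # an increase may invalidate routes elsewhere
    return best_route_changed  # a harmless decrease is suppressed

# A link's cost drops from 5 to 4, but our best path to the destination
# still goes via another neighbor -- the update is suppressed, saving a flood.
print(should_propagate(5, 4, best_route_changed=False))  # False
# A link's cost jumps from 3 to 7 -- always propagated.
print(should_propagate(3, 7, best_route_changed=False))  # True
```

The savings come from the suppressed case: in a stable network most updates are of exactly this harmless kind, so most floods never happen.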
This research work will be presented later today at the ACM SIGCOMM 2008 Conference in Seattle, WA, under the title "XL: An Efficient Network Routing Algorithm." Here are two links to the abstract and to the full paper (PDF format, 12 pages, 261 KB).
And here is the abstract. "In this paper, we present a new link-state routing algorithm called Approximate Link state (XL) aimed at increasing routing efficiency by suppressing updates from parts of the network. We prove that three simple criteria for update propagation are sufficient to guarantee soundness, completeness and bounded optimality for any such algorithm. We show, via simulation, that XL significantly outperforms standard link-state and distance vector algorithms -- in some cases reducing overhead by more than an order of magnitude -- while having negligible impact on path length. Finally, we argue that existing link-state protocols, such as OSPF [an acronym for 'Open Shortest Path First'], can incorporate XL routing in a backwards compatible and incrementally deployable fashion."
Finally, the researchers think there are ways to improve the efficiency of link state routing even further. Here is the last part of the conclusions of their new paper. "In particular, recall that the XL routing algorithm propagates all link cost increase updates, meaning that, on average, it will propagate half of all updates that affect it. It is natural to ask whether this is strictly necessary, or whether a superior algorithm -- one that selectively suppresses link failures -- can scale sub-linearly for typical networks. Whether such an algorithm exists and can guarantee soundness and correctness remains an open problem that we hope to address in future work."
Sources: UC San Diego news release, August 18, 2008; and various websites