
Fixing the unfairness of TCP congestion control

Bob Briscoe (Chief researcher at the BT Network Research Centre) is on a mission to tackle one of the biggest problems facing the Internet.  He wants the world to know that TCP (Transmission Control Protocol) congestion control is fundamentally broken and he has a proposal for the IETF to fix the root cause of the problem.
Written by George Ou, Contributor

The Internet faced its first congestion crisis in 1986, when too much network traffic caused a series of meltdowns in which everything slowed to a crawl.  Today's problem is more subtle and less well known, since the network still appears to be working correctly and fairly.  But underneath that facade and illusion of fairness, a very small percentage of users hog most of the Internet's capacity, suffocating all other users and applications.

Solving the first Internet meltdown crisis

In October of 1986, the Internet began to experience a series of "congestion collapses".  So many computers were piling their traffic on to the network at the same time that the network came to a grinding halt and no one got any meaningful throughput.  By mid 1987, computer scientist Van Jacobson, one of the prime contributors to the TCP/IP stack, created a client-side patch for TCP that saved the day.  Every computer on the Internet - roughly 30,000 in those days - was quickly patched by its system administrators.

Jacobson's TCP stack patch worked by causing a computer to cut the flow rate of its TCP stream in half as soon as it detects any packet loss.  Packets are lost whenever a router relaying them receives more packets than it can forward and begins to randomly drop packets across the board.  But whenever a computer sees an acknowledgement that its packet arrived successfully, it quickly and continually increases its flow rate until it experiences another packet drop, at which time it cuts its throughput in half again.  This became known as the AIMD (Additive Increase Multiplicative Decrease) algorithm, where the sending computer is constantly probing for the maximum allowable bandwidth by repeatedly increasing throughput until it crosses a line and gets knocked down.
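
To make the mechanism concrete, here is a minimal sketch of the AIMD idea in Python (my own illustration with simplified units; a real TCP stack tracks a congestion window in segments or bytes per round trip rather than an abstract flow rate):

    # Simplified model of AIMD congestion control, as described above.
    # The "congestion window" here is an abstract rate in segments per round trip.

    def aimd_step(cwnd, packet_lost, increase=1.0, decrease_factor=0.5):
        """One round trip of additive-increase, multiplicative-decrease."""
        if packet_lost:
            return max(1.0, cwnd * decrease_factor)  # loss: cut the flow rate in half
        return cwnd + increase                       # acks: keep probing for more bandwidth

    # A sender ramps up until the bottleneck (capacity 20 here) drops a packet,
    # then halves its rate and starts climbing again -- the familiar sawtooth.
    cwnd = 1.0
    for rtt in range(30):
        lost = cwnd > 20
        cwnd = aimd_step(cwnd, lost)
        print(f"RTT {rtt:2d}: cwnd = {cwnd:.1f}")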

Jacobson's AIMD algorithm also allowed a new TCP stream to open up and quickly rise to equilibrium, where it attains the same flow rate as all other TCP streams.  Conversely, when a TCP stream ended transmission, the freed-up bandwidth would be evenly redistributed amongst the remaining streams.  Van Jacobson's patch was so successful that it became part of the TCP standards, and it hasn't fundamentally changed in over 20 years; according to Bob Briscoe, the paper describing Jacobson's algorithm is the "fifth most cited academic paper in all of computer science".

Under Jacobson's algorithm, which sought to balance the flow rate (throughput) of each TCP stream, the system was more or less fair to everyone who wanted to use the network so long as everyone used an equal number of TCP streams.  Since people typically used one TCP stream at a time and had limited usage on those time-sharing computers in the 1980s, Jacobson's algorithm was adequate for the problems of that era.  While it was possible for someone to open two FTP downloads or uploads at a time and get double the total throughput of anyone else, this wasn't a big problem when applications and operating systems were mostly limited to text and computers were limited to academic and large corporate institutions.  But as time went on and the number of applications and users grew, it was only a matter of time before the fairness of the system would be exploited.


Exploiting Jacobson's TCP algorithm

While Jacobson's algorithm was suitable for the 1980s, cracks began to appear a decade later.  By 1999, the first P2P (peer-to-peer) application, Swarmcast, began to blatantly exploit Jacobson's TCP congestion control mechanism.  Using a technique called "parallel incremental downloading", Swarmcast could grab a much larger share of the pie at the expense of others by exploiting the multi-stream and persistence loopholes.  Every P2P application since has used these two loopholes.

Simply by opening 10 to 100 TCP streams, P2P applications can grab 10 to 100 times more bandwidth than a traditional single-stream application on a congested Internet link.  Since all networks have a bottleneck somewhere, a small percentage of Internet users running P2P can hog the vast majority of resources at the expense of other users.  The following diagram illustrates the multi-stream exploit in action, where User A hogs more and more bandwidth from User B by opening more and more TCP streams.  The large light green cutaway pipe represents a congested network link with finite capacity.

TCP multi-stream bandwidth hogging exploit
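
The arithmetic behind the exploit is simple: if every TCP stream crossing the bottleneck converges to roughly the same flow rate, a user's share of the link is proportional to the number of streams they open.  A small back-of-the-envelope calculation (my own idealized illustration, ignoring protocol overhead):

    # Idealized flow-rate fairness: each TCP stream through a congested link
    # gets an equal share, so a user's share scales with their stream count.

    def user_shares(link_capacity_mbps, streams_per_user):
        total_streams = sum(streams_per_user.values())
        per_stream = link_capacity_mbps / total_streams
        return {user: n * per_stream for user, n in streams_per_user.items()}

    # User A opens 10 streams (typical of a P2P client); User B opens 1 (a web download).
    shares = user_shares(100, {"User A (10 streams)": 10, "User B (1 stream)": 1})
    for user, mbps in shares.items():
        print(f"{user}: {mbps:.1f} Mbps")
    # User A ends up with about 91 Mbps of the 100 Mbps bottleneck; User B gets about 9 Mbps.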

The other major loophole in Jacobson's algorithm is persistence: P2P applications can gain another order of magnitude advantage by using the network continuously, 24x7.  The diagram below shows what happens when an application like BitTorrent uses the network continuously.  I wrote about this last month and presented a similar chart on Capitol Hill.

Bandwidth hogs chart: continuous P2P usage versus intermittent application usage

By combining these two loopholes, an application using 10 times as many TCP streams while being 10 times more persistent than other applications can get a 100-fold boost over other users when contending for network resources.
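
Worked through with concrete numbers, under the same idealized per-stream fairness assumption as above (a model of my own, not a measurement), the two factors simply multiply:

    # Rough daily-volume advantage of a persistent, 10-stream application over a
    # single-stream application that is active only a tenth of the time, assuming
    # the 10 Mbps bottleneck stays congested and shares follow stream counts.

    link_mbps = 10
    p2p_streams, web_streams = 10, 1

    p2p_rate = link_mbps * p2p_streams / (p2p_streams + web_streams)   # ~9.1 Mbps
    web_rate = link_mbps * web_streams / (p2p_streams + web_streams)   # ~0.9 Mbps

    p2p_hours, web_hours = 24.0, 2.4      # always-on versus occasional use
    p2p_volume = p2p_rate * p2p_hours     # megabit-hours per day
    web_volume = web_rate * web_hours

    print(f"Volume advantage: {p2p_volume / web_volume:.0f}x")   # prints 100x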

With millions of consumers on the Internet today who have an insatiable appetite for multi-gigabyte videos, the Internet is facing its second congestion crisis.  While the network isn't completely melting down, it's completely unfair: fewer than 10% of Internet users, the ones using P2P, hog roughly 75% of all network traffic at the expense of everyone else.  Even in a country like Japan, which has the most per-user broadband capacity in the world, P2P applications have managed to turn the 100 Mbps per-home fiber network into a big traffic jam.  The problem has gotten so severe in Japan that the nation's ISPs, in conjunction with the government, have agreed to ban P2P users who are trafficking in copyrighted content.

Source: Ministry of Internal Affairs and Communications, Haruka Saito, Counselor for Telecom Policy, Embassy of Japan


The politicization of an engineering problem

Despite the undeniable truth that Jacobson's TCP congestion avoidance algorithm is fundamentally broken, many academics and now Net Neutrality activists, along with their lawyers, cling to it as if it were somehow holy and sacred.  Groups like the Free Press and Vuze (a company that relies on P2P) file FCC complaints against ISPs (Internet Service Providers) like Comcast that try to mitigate the damage caused by bandwidth-hogging P2P applications by throttling P2P.  They wag their fingers that P2P throttling is "protocol discrimination" and a violation of the TCP standards.  They tell us that if anyone slows down a P2P application, they are somehow violating someone's right to free speech and impinging on their civil rights.

They tell us that reining in bandwidth hogs is actually the ISP's way of killing the video distribution competition.  They tell us that P2P isn't really a bandwidth hog and that P2P users are merely operating within their contracted peak bitrates.  Never mind the fact that no network can ever support continuous peak throughput for every user and that resources are always shared; they tell us to just throw more money and bandwidth at the problem.  They continue to espouse the virtues of P2P applications as "efficient", but what they don't tell us is that "efficient" means efficiency in bandwidth hogging.  They also don't tell us that P2P is efficient at offloading the costs of video distribution to someone else.

The Free Press and the EFF publicly tell us that ISPs should randomly drop packets across the board, which translates to letting the bandwidth hogs take as much as they want, and they want us to believe that this is "fair".  Then they told us that if bandwidth hogging were a real problem, the ISPs should adopt metered Internet access rather than try to rein in the bandwidth hogs.  Even after I criticized their proposal for a metered Internet as ludicrous, they continued to espouse the virtues of metered Internet service in public debates (registration required).  But despite all the political rhetoric, the reality is that the ISPs are merely using the cheapest and most practical tools available to them to achieve a little more fairness, and that this is really an engineering problem.

Dismantling the dogma of flow rate fairness

In an effort to overcome some of the irrational worship of flow rate fairness amongst some in the academic and Internet engineering community, Bob Briscoe presented a paper, "Flow rate fairness: Dismantling a religion", to the IETF in July of 2007.  I asked Mr. Briscoe how it was received, and he explained that the day after the presentation they held a straw poll asking who would still define fairness the TCP (Jacobson algorithm) way.  Had the poll been conducted before the presentation, Bob guessed that nearly 100% of the IETF audience would have gone with TCP.  After the presentation, the straw poll resulted in a stunning 70% undecided, 15% saying TCP was fair, and 15% no longer agreeing that TCP was fair.

At subsequent IETF meetings, Bob Briscoe told me, he's continuing to "wear down the priesthood of the old religion", but the current TCP implementation is "so ingrained it's an uphill struggle".  Briscoe and his research group at BT released a simple problem statement for the IETF titled "Problem statement: We don't have to do fairness ourselves".  With all these efforts, Briscoe is happy that he has started an honest dialog within the Internet engineering community.

In a comprehensive article set to be published in IEEE Spectrum this May (I've seen the draft), Briscoe explains that the entire Net Neutrality debate is a misunderstanding and that the lack of fundamental fairness in the TCP standards is the root cause of the problem.  He explains that ISPs trying to throttle P2P applications are actually masking the real problem in the TCP standards and perpetuating the illusion that everything is all right and fair.  Briscoe also points out that any kind of protocol-level traffic shaping can easily be mistaken by politicians for anticompetitive behavior.

Briscoe also explains that throttling P2P applications is a poor solution on a technical level because it unnecessarily slows P2P down while yielding only marginal improvements for other applications.  A better TCP implementation would allow unattended P2P file transfers to complete just as quickly as on an unmanaged network with no throttling or performance caps, yet it would allow everyone else's interactive applications to burst whenever they like.  While this might sound too good to be true, it isn't hard to believe once you understand that the goals of P2P file transfers and the goals of interactive applications are not mutually exclusive.  Once you understand Bob Briscoe's proposal, it quickly becomes apparent that it's a win for everyone.


Weighted TCP - Achieving real fairness

Bob Briscoe's short-term solution is to fix the existing TCP implementation that uses Jacobson's 20+ year old AIMD algorithm.  That means the client-side implementation of TCP, which hasn't fundamentally changed since 1987, will have to be changed again, and users will need to update their TCP stacks.  The following diagram is my interpretation of how Briscoe's weighted TCP implementation would neutralize the multi-stream loophole.

Weighted TCP versus normal TCP congestion control

Under Jacobson's algorithm, TCP currently gives the user with 11 opened TCP streams 11 times more bandwidth than the user who only uses one TCP stream.  Under a weighted TCP implementation, both users get the same amount of bandwidth regardless of how many TCP streams each user opens.  This is accomplished by the single-stream application tagging its TCP stream at a higher weight than a multi-stream application.  TCP streams with higher weight values won't be slowed as much by the weighted TCP stack whereas TCP streams with smaller weight values will be slowed more drastically.
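
As a rough illustration of how such weighting could work inside the congestion controller, the sketch below extends the simplified AIMD step from earlier so that both the additive increase and the multiplicative decrease scale with a per-stream weight.  This is my own simplified interpretation of the weighting idea, not Briscoe's actual specification: a stream tagged with a weight of 0.1 probes more slowly and backs off harder, so ten such streams together compete roughly like one ordinary stream.

    # Weighted variant of the simplified AIMD step shown earlier (my own
    # illustration of the weighting idea, not Briscoe's specification).
    # A stream with weight w competes roughly like w ordinary TCP streams.

    def weighted_aimd_step(cwnd, packet_lost, weight=1.0):
        if packet_lost:
            # Lighter-weight streams give up a larger fraction of their window.
            decrease = min(1.0, 0.5 / weight)
            return max(1.0, cwnd * (1.0 - decrease))
        # Heavier-weight streams grow their window faster per round trip.
        return cwnd + weight

    # A P2P client opening 10 streams could tag each with weight 0.1 so that,
    # in aggregate, they claim about the same share as the single weight-1.0
    # stream of an interactive application.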

Since interactive applications have a finite amount of data to transfer, the sooner it's transferred over the network, the sooner it gets out of the way.  So if you're downloading a webpage or sending an email attachment, it takes no more total resources whether that file is transferred faster or slower, since that webpage or email is fixed in size.  Background P2P applications like BitTorrent will experience a more drastic but shorter-duration cut in throughput, but the overall time it takes to complete the transfer is unchanged.  This is essentially a win for everyone, since the typical web surfer or email user gets blazing responsiveness while the P2P application finishes its transfers in the same amount of time.

It's only natural that interactive applications, where a human is waiting and expecting an immediate response, should be allowed to burst.  Unattended background file transfer applications like P2P only care about the total time it takes to transfer a file.  But even if web surfing traffic doubled because the usability is so much better, it would add only a few percent to the overall amount of Internet traffic.  The P2P applications that consume the lion's share of sheer volume on the Internet would hardly be slowed.

Eventually, Bob Briscoe also wants to better address the persistence loophole and allow normal interactive applications like web browsing and emailing to burst even faster using the ECN (Explicit Congestion Notification) mechanism.  ECN is a far more efficient TCP congestion control mechanism that was incorporated into the TCP standards in 2001 and was meant to replace Jacobson's AIMD method.  Since ECN doesn't use packet dropping as a signaling method and clients can reach optimum flow rates more quickly, it's vastly superior to AIMD.  ECN is already implemented in Linux, Windows Vista, and Windows Server 2008, but it's disabled by default since some older routers that didn't properly implement TCP incorrectly drop ECN-marked packets.  At this stage, I'm not entirely clear where and when on the roadmap Briscoe intends to incorporate ECN into his weighted TCP proposal, but I'll write a follow-up when I get clarification.
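
Whether ECN is actually negotiated is decided by the operating system rather than by individual applications.  On Linux, for example, the setting lives in the net.ipv4.tcp_ecn sysctl and can be read directly; the snippet below is a small convenience sketch of my own, not part of Briscoe's proposal.

    # Read the Linux ECN setting (the net.ipv4.tcp_ecn sysctl).
    # Commonly: 0 = never negotiate ECN, 1 = negotiate on outgoing and incoming
    # connections, 2 = negotiate only when the remote peer requests it.

    def linux_tcp_ecn_setting(path="/proc/sys/net/ipv4/tcp_ecn"):
        try:
            with open(path) as f:
                return int(f.read().strip())
        except OSError:
            return None  # not Linux, or the sysctl is unavailable

    print("net.ipv4.tcp_ecn =", linux_tcp_ecn_setting())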

Closing points and observations

At first glance, one might wonder what would prompt a P2P user to unilaterally and voluntarily disarm his or her multi-stream and persistence "cheat" advantage by installing a newer TCP implementation.  Briscoe explains that with the right incentives, users will want to use a fair TCP system.  It's not clear to me yet what specific incentives and enforcement Bob Briscoe is proposing, so I've attempted to come up with a more detailed incentive scheme of my own.

I could imagine a fairly simple solution where an ISP would cut the broadband connection rate by a factor of eight for any P2P user who keeps using the older TCP stack to exploit the multi-stream or persistence loophole.  It would be fairly simple to verify whether someone is cheating with an older stack, and those users would be dropped to much slower connection speeds.  The vast majority of people who aren't using P2P would continue getting the higher connection speeds regardless of whether they update to a weighted TCP stack, because they never use the multi-stream or persistence loophole to begin with.  P2P users running a weighted TCP stack would get to download as much as they like at maximum burst speeds, because their TCP implementation will politely back off for short bursts of time when single-stream and non-persistent users are trying to use the network.

This sort of bandwidth policy would create the necessary incentives for P2P users to adopt a fairer and more polite TCP mechanism.  Users could opt to continue exploiting the multi-stream and persistence loopholes, but then they're choosing to live with much slower connection speeds.  Normal users, regardless of whether they install a newer weighted TCP stack, will get a much higher and fairer share of bandwidth, because the bandwidth-hogging P2P users will either be operating at much slower speeds or be much more polite.

This would be a totally voluntary system in which the ISP no longer needs to single out any specific protocol for bandwidth hogging, so there can't be even a hint of impropriety.  But without this fundamental fix in TCP congestion control, ISPs have no choice but to specifically target P2P applications, since those are undeniably the applications that hog the network.  Ultimately, it is in everyone's best interest to hear out Bob Briscoe's proposals.  At the very least, I think we can all agree that the current system is broken and that we need a TCP implementation that treats individual users, not individual flows, equally.
