Fixing the unfairness of TCP congestion control

Summary: Bob Briscoe (Chief researcher at the BT Network Research Centre) is on a mission to tackle one of the biggest problems facing the Internet.  He wants the world to know that TCP (Transmission Control Protocol) congestion control is fundamentally broken and he has a proposal for the IETF to fix the root cause of the problem.

The Internet faced its first congestion crisis in 1986, when too much traffic caused a series of meltdowns in which everything slowed to a crawl.  Today's problem is more subtle and less well known, since the network still appears to be working correctly and fairly.  But underneath that facade and illusion of fairness, a very small percentage of users hog most of the Internet's capacity, suffocating all other users and applications.

Solving the first Internet meltdown crisis

In October of 1986, the Internet began to experience a series of "congestion collapses".  So many computers were piling their traffic onto the network at the same time that the network came to a grinding halt and no one got any meaningful throughput.  By mid-1987, computer scientist Van Jacobson, one of the prime contributors to the TCP/IP stack, had created a client-side patch for TCP that saved the day.  Every computer on the Internet - roughly 30,000 in those days - was quickly patched by its system administrators.

Jacobson's TCP stack patch works by having a computer cut the flow rate of its TCP stream in half as soon as it detects any packet loss.  Packets are lost whenever a router relaying them receives more packets than it can forward and begins to drop packets randomly across the board.  But as long as a computer keeps seeing acknowledgements that its packets arrived successfully, it continually increases its flow rate with every acknowledgement until it experiences another packet drop, at which point it cuts its throughput in half again.  This became known as the AIMD (Additive Increase Multiplicative Decrease) algorithm: the sending computer constantly probes for the maximum allowable bandwidth by repeatedly increasing throughput until it crosses a line and gets knocked down.
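
To make the sawtooth shape concrete, here is a minimal sketch of the AIMD cycle in Python (illustrative only, not how any real TCP stack is implemented; the link capacity and increment are made-up numbers standing in for the congested router):

    # Illustrative AIMD sketch: the sender adds a fixed increment per round
    # trip while packets get through, then halves its rate when the
    # (hypothetical) bottleneck overflows and a packet is lost.
    LINK_CAPACITY = 100.0   # made-up bottleneck capacity, in units per RTT
    INCREMENT = 1.0         # additive increase per round trip

    def aimd(rounds=200):
        rate = 1.0
        history = []
        for _ in range(rounds):
            if rate > LINK_CAPACITY:      # router overloaded: a packet is dropped
                rate = rate / 2.0         # multiplicative decrease (halve)
            else:
                rate = rate + INCREMENT   # additive increase (keep probing)
            history.append(rate)
        return history                    # the classic sawtooth pattern

    print(aimd()[-10:])                   # rate oscillates around the capacity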

Jacobson's AIMD algorithm also allowed a new TCP stream to open up and quickly rise to equilibrium, where it attains the same flow rate as all other TCP streams.  Conversely, when a TCP stream ended transmission, the freed-up bandwidth would be evenly distributed amongst the remaining streams.  Van Jacobson's patch was so successful that it became part of the TCP standards, and it hasn't fundamentally changed in over 20 years.  According to Bob Briscoe, Jacobson's paper on the algorithm is the "fifth most cited academic paper in all of computer science".

Under Jacobson's algorithm, which sought to balance the flow rate (throughput) of each TCP stream, the system was more or less fair to everyone who wanted to use the network, so long as everyone used an equal number of TCP streams.  Since people typically used one TCP stream at a time, and usage on the time-sharing computers of the 1980s was limited, Jacobson's algorithm was adequate for the problems of that era.  It was possible for someone to open two FTP downloads or uploads at a time and get twice the total throughput of anyone else, but this wasn't a big problem when applications and operating systems were mostly limited to text and computers were confined to academic and large corporate institutions.  As the number of applications and users grew, however, it was only a matter of time before the fairness of the system would be exploited.

Exploiting Jacobson's TCP algorithm

While Jacobson's algorithm was suitable for the 1980s, cracks began to appear a decade later.  By 1999, P2P (peer-to-peer) applications such as Swarmcast had begun to blatantly exploit Jacobson's TCP congestion control mechanism.  Using a technique called "parallel incremental downloading", Swarmcast could grab a much larger share of the pie at the expense of others by exploiting the multi-stream and persistence loopholes.  These two loopholes have been used by virtually every P2P application since.

Simply by opening 10 to 100 TCP streams, a P2P application can grab 10 to 100 times more bandwidth than a traditional single-stream application across a congested Internet link.  Since every network has a bottleneck somewhere, a small percentage of Internet users running P2P can hog the vast majority of resources at the expense of other users.  The following diagram illustrates the multi-stream exploit in action, where User A hogs more and more bandwidth over User B by opening more and more TCP streams.  The large light green cutaway pipe represents a congested network link with finite capacity.

TCP multi-stream bandwidth hogging exploit
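
To put rough numbers behind the diagram, here is a small back-of-the-envelope sketch of the flow-rate-fairness arithmetic (the 10 Mbps link capacity is a made-up figure): because each TCP stream converges on roughly an equal share of the bottleneck, a user's total share scales with the number of streams opened.

    # Flow-rate fairness divides a congested link per stream, not per user,
    # so User A's share grows with the number of streams A opens.
    LINK_CAPACITY_MBPS = 10.0   # illustrative bottleneck capacity

    def per_user_share(streams_a, streams_b=1):
        total = streams_a + streams_b
        return (LINK_CAPACITY_MBPS * streams_a / total,
                LINK_CAPACITY_MBPS * streams_b / total)

    for n in (1, 10, 100):
        a, b = per_user_share(n)
        print(f"User A with {n:3d} streams gets {a:5.2f} Mbps; User B with 1 stream gets {b:4.2f} Mbps")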

The other major loophole in Jacobson's algorithm is persistence: P2P applications can gain another order-of-magnitude advantage by using the network continuously, 24x7.  The diagram below shows what happens when an application like BitTorrent uses the network around the clock.  I wrote about this last month and presented a similar chart on Capitol Hill.

By combining these two loopholes, an application that uses 10 times as many TCP streams while being 10 times more persistent than other applications gets a 100-fold boost over other users when contending for network resources.
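
As a quick sketch of how the two multipliers compound (the stream counts and hours of activity below are hypothetical, chosen only to make the arithmetic concrete):

    # Rough comparison of each user's claim on a congested bottleneck over a
    # day: streams opened multiplied by hours spent actively transferring.
    p2p_streams, p2p_hours = 10, 24.0    # persistent multi-stream P2P user
    web_streams, web_hours = 1, 2.4      # occasional single-stream user

    p2p_claim = p2p_streams * p2p_hours  # 240 "stream-hours"
    web_claim = web_streams * web_hours  # 2.4 "stream-hours"
    print(f"Relative claim on the bottleneck: {p2p_claim / web_claim:.0f}x")  # ~100x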

With millions of consumers on the Internet today with an insatiable appetite for multi-gigabyte videos, the Internet is facing its second congestion crisis.  While the network isn't completely melting down, it is completely unfair: the fewer than 10% of Internet users running P2P account for roughly 75% of all network traffic, at the expense of everyone else.  Even in Japan, which has the most per-user broadband capacity in the world, P2P applications have managed to turn the country's 100 Mbps per-home fiber network into a big traffic jam.  The problem has gotten so severe there that the nation's ISPs, in conjunction with the government, have agreed to ban P2P users who traffic in copyrighted content.

Source: Ministry of Internal Affairs and Communications, Haruka Saito, Counselor for Telecom Policy, Embassy of Japan

The politicization of an engineering problem

Despite the undeniable truth that Jacobson's TCP congestion avoidance algorithm is fundamentally broken, many academics and now Net Neutrality activists, along with their lawyers, cling to it as if it were somehow holy and sacred.  Groups like the Free Press and Vuze (a company that relies on P2P) file FCC complaints against ISPs (Internet Service Providers) like Comcast that try to mitigate the damage caused by bandwidth-hogging P2P applications by throttling P2P.  They wag their fingers, declaring that P2P throttling is "protocol discrimination" and a violation of the TCP standards.  They tell us that anyone who slows down a P2P application is somehow violating someone's right to free speech and impinging on their civil rights.

They tell us that reining in bandwidth hogs is actually the ISPs' way of killing the video distribution competition.  They tell us that P2P isn't really a bandwidth hog and that P2P users are merely operating within their contracted peak bitrates.  Never mind that no network can ever support continuous peak throughput for everyone and that resources are always shared; they tell us to just throw more money and bandwidth at the problem.  They continue to espouse the virtues of P2P applications as "efficient", but what they don't tell us is that "efficient" means efficient at hogging bandwidth.  They also don't tell us that P2P is efficient at offloading the costs of video distribution onto someone else.

The Free Press and the EFF publicly tell us that ISPs should randomly drop packets across the board, which translates to letting the bandwidth hogs take as much as they want, and they want us to believe that this is "fair".  Then they tell us that if bandwidth hogging were a real problem, the ISPs should adopt metered Internet access rather than try to rein in the bandwidth hogs.  Even after I criticized their proposal for a metered Internet as ludicrous, they continued to espouse the virtues of metered Internet service in public debates (registration required).  But despite all the political rhetoric, the reality is that the ISPs are merely using the cheapest and most practical tools available to them to achieve a little more fairness, and that this is really an engineering problem.

Dismantling the dogma of flow rate fairness

In an effort to overcome some of the irrational worship of flow rate fairness amongst some in the academic and Internet engineering community, Bob Briscoe presented a paper, "Flow rate fairness: Dismantling a religion", to the IETF in July of 2007.  I asked Mr. Briscoe how it was received, and he explained that the day after the presentation there was a straw poll on who would still define fairness the TCP (Jacobson algorithm) way.  Had the poll been conducted before the presentation, Briscoe guessed that nearly 100% of the IETF audience would have gone with TCP.  But after the presentation, the straw poll came out a stunning 70% undecided, 15% saying TCP was fair, and 15% saying it no longer was.

At subsequent IETF meetings, Briscoe told me, he's continuing to "wear down the priesthood of the old religion", but the current TCP implementation is "so ingrained it's an uphill struggle".  Briscoe and his research group at BT released a simple problem statement for the IETF titled "Problem statement: We don't have to do fairness ourselves".  With all these efforts, Briscoe is happy that he has started an honest dialogue within the Internet engineering community.

In a comprehensive article set to be published in IEEE Spectrum this May (I've seen the draft), Briscoe explains that the entire Net Neutrality debate is a misunderstanding and that the lack of fundamental fairness in the TCP standards is the root cause of the problem.  He explains that ISPs throttling P2P applications are actually masking the real problem in the TCP standards, perpetuating the illusion that everything is alright and fair.  Briscoe also points out that any kind of protocol-level traffic shaping can easily be mistaken by politicians for anticompetitive behavior.

Briscoe also explains that throttling P2P applications is a poor solution on a technical level because it slows P2P down more than necessary while producing only marginal improvements for other applications.  A better TCP implementation would allow unattended P2P file transfers to complete just as quickly as on an unmanaged network with no throttling or performance caps, yet would allow everyone else's interactive applications to burst whenever they like.  While this might sound too good to be true, it isn't hard to believe once you understand that the goals of P2P file transfers and of interactive applications are not mutually exclusive.  Once you understand Bob Briscoe's proposal, it quickly becomes apparent that it's a win for everyone.

Weighted TCP - Achieving real fairness

Bob Briscoe's short-term solution is to fix the existing TCP implementation that uses Jacobson's 20-plus-year-old AIMD algorithm.  That means the client-side implementation of TCP, which hasn't fundamentally changed since 1987, will have to change again, and users will need to update their TCP stacks.  The following diagram is my interpretation of how Briscoe's weighted TCP implementation would neutralize the multi-stream loophole.

Weighted TCP versus normal TCP congestion control

Under Jacobson's algorithm, TCP currently gives a user with 11 open TCP streams 11 times more bandwidth than a user with only one.  Under a weighted TCP implementation, both users get the same amount of bandwidth regardless of how many TCP streams each opens.  This is accomplished by the single-stream application tagging its TCP stream with a higher weight than a multi-stream application uses.  TCP streams with higher weights are slowed less by the weighted TCP stack, while streams with lower weights are slowed more drastically.
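
One way to picture the mechanics: the long-run throughput of an AIMD flow grows roughly with the square root of its additive-increase step (with the halving on loss held fixed), so a weighted stack could scale that step by the square of a per-stream weight and give each user a fixed weight budget to split across streams.  The sketch below is my own illustration of that idea, not Briscoe's actual specification; the weighting rule is invented for the example.

    # Invented weighting rule for illustration: additive increase scales with
    # weight squared, so long-run throughput scales linearly with weight and
    # a user's total rate depends on their weight budget, not stream count.
    import math

    def relative_rate(weight):
        additive_increase = weight ** 2
        return math.sqrt(additive_increase)   # proportional to long-run rate

    single_stream_user = relative_rate(1.0)           # one stream, weight 1.0
    multi_stream_user = 10 * relative_rate(1.0 / 10)  # ten streams, weight 0.1 each
    print(single_stream_user, multi_stream_user)      # both come out equal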

Since interactive applications have a finite amount of data to transfer, the sooner it's transferred over the network, the sooner it gets out of the way.  So if you're downloading a webpage or sending an email attachment, it takes no more total resources whether that file is transferred faster or slower, since that webpage or email is fixed in size.  Background P2P applications like BitTorrent will experience a deeper but shorter-duration cut in throughput, but the overall time it takes to complete the transfer is essentially unchanged.  This is a win for everyone, since the typical web surfer or email user gets blazing responsiveness while the P2P applications finish their transfers in about the same amount of time.
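
A quick worked example (with made-up sizes and rates) shows why both sides come out ahead:

    # A fixed-size web page costs the same total volume whether it bursts or
    # trickles; only the human's waiting time changes. Numbers are invented.
    page_megabits = 16.0                              # roughly a 2 MB page
    print(page_megabits / 8.0, "seconds at 8 Mbps")   # 2 s when allowed to burst
    print(page_megabits / 1.0, "seconds at 1 Mbps")   # 16 s when squeezed by P2P

    # A large background transfer barely notices brief, deep back-offs.
    torrent_megabits = 8.0 * 8000.0                   # roughly an 8 GB transfer
    base_hours = torrent_megabits / 4.0 / 3600.0      # at a 4 Mbps average rate
    yielded_hours = 5.0 / 60.0                        # fully yields 5 minutes total
    print(round(base_hours, 2), "hours uninterrupted vs",
          round(base_hours + yielded_hours, 2), "hours with yielding")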

It's only natural that interactive applications, where a human is waiting and expecting an immediate response, should be allowed to burst.  Unattended background file-transfer applications like P2P care only about the total time it takes to move a file.  Even if web surfing traffic doubled because the usability was so much better, it would add barely a few percent to the overall volume of Internet traffic, while the P2P applications that consume the lion's share of that volume would hardly be slowed.

Eventually, Bob Briscoe also wants to better address the persistence loophole and allow normal interactive applications like web browsing and email to burst even faster using the ECN (Explicit Congestion Notification) mechanism.  ECN is a far more efficient congestion signaling mechanism that was incorporated into the TCP standards in 2001 and was meant to replace packet loss as the congestion signal in Jacobson's scheme.  Since ECN doesn't rely on dropped packets for signaling and clients can reach their optimum flow rates more quickly, it's vastly superior to loss-based AIMD alone.  ECN is already implemented in Linux, Windows Vista, and Windows Server 2008, but it's disabled by default because some older routers that didn't properly implement TCP incorrectly drop ECN-marked packets.  At this stage, I'm not entirely clear where and when on the roadmap Briscoe intends to incorporate ECN into his weighted TCP proposal, but I'll write a follow-up when I get clarification.
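
To make the contrast concrete, here is a rough sketch (illustrative Python, not real router or stack code; the queue thresholds are invented) of the two signaling styles: a plain drop-tail queue can only signal congestion by discarding packets once it overflows, while an ECN-capable queue marks packets early and still delivers them, so the sender can slow down before anything is lost.  On Linux, the on/off switch alluded to above is the net.ipv4.tcp_ecn sysctl.

    # Simplified comparison of loss-based vs ECN-based congestion signaling.
    QUEUE_LIMIT = 100      # drop-tail: packets beyond this are discarded
    MARK_THRESHOLD = 30    # ECN: start marking well before the queue is full

    def drop_tail(queue_len):
        # The only feedback is a lost packet, noticed a round trip later.
        return "drop" if queue_len >= QUEUE_LIMIT else "forward"

    def ecn_queue(queue_len):
        # The packet is delivered but carries a congestion mark, which the
        # receiver echoes back so the sender can back off early.
        if queue_len >= QUEUE_LIMIT:
            return "drop"                   # still drops as a last resort
        if queue_len >= MARK_THRESHOLD:
            return "forward, CE-marked"     # "congestion experienced"
        return "forward"

    for q in (10, 50, 120):
        print(q, "packets queued:", drop_tail(q), "|", ecn_queue(q))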

Closing points and observations

At first glance, one might wonder what would prompt a P2P user to unilaterally and voluntarily disarm his or her multi-stream and persistence "cheat" advantage by installing a newer TCP implementation.  Briscoe explains that with the right incentives, users will want to use a fair TCP system.  It's not yet clear to me what specific incentives and enforcement Briscoe is proposing, so I've attempted to come up with a more detailed incentive scheme of my own.

I could imagine a fairly simple policy where an ISP cuts the broadband connection rate by a factor of eight for any P2P user who keeps using an older TCP stack to exploit the multi-stream or persistence loopholes.  It would be fairly simple to verify whether someone is cheating with an older stack, and those users would be dropped to much slower connection speeds.  The vast majority of people who aren't using P2P would continue getting the higher connection speeds whether or not they update to a weighted TCP stack, because they never use the multi-stream or persistence loopholes to begin with.  P2P users running a weighted TCP stack would get to download as much as they like at maximum burst speeds, because their TCP implementation politely backs off for short bursts of time when single-stream and non-persistent users are trying to use the network.
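
Sketched as code, my hypothetical policy would look something like this (the one-eighth penalty factor and the loophole-detection step are my own assumptions, not anything Briscoe has specified):

    # Hypothetical ISP-side incentive policy from the paragraph above. How
    # "loophole" behavior is detected is assumed to exist and not shown here.
    FULL_RATE_MBPS = 16.0    # illustrative provisioned speed
    PENALTY_FACTOR = 8.0     # my invented factor-of-eight cut

    def provisioned_rate(uses_weighted_tcp, exploits_loopholes):
        if uses_weighted_tcp:
            return FULL_RATE_MBPS                    # polite stack: full burst speed
        if exploits_loopholes:
            return FULL_RATE_MBPS / PENALTY_FACTOR   # legacy stack that hogs
        return FULL_RATE_MBPS                        # legacy stack, normal usage

    print(provisioned_rate(False, False))    # typical web/email user: 16.0
    print(provisioned_rate(False, True))     # legacy P2P hog: 2.0
    print(provisioned_rate(True, True))      # P2P user on weighted TCP: 16.0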

This sort of bandwidth policy would create the necessary incentives for P2P users to adopt a fairer and more polite TCP mechanism.  Users could opt to continue exploiting the multi-stream and persistence loopholes, but they would be choosing to live with much slower connection speeds.  Normal users, whether or not they install a newer weighted TCP stack, would get a much higher and fairer share of bandwidth, because the bandwidth-hogging P2P users would either be operating at much slower speeds or behaving much more politely.

This would be a totally voluntary system, and the ISP would no longer need to single out any specific protocol for bandwidth hogging, so there couldn't be even a hint of impropriety.  But without this fundamental fix in TCP congestion control, ISPs have no choice but to specifically target P2P applications, since those are undeniably the applications that hog the network.  Ultimately, it is in everyone's best interest to hear out Bob Briscoe's proposals.  At the very least, I think we can all agree that the current system is broken and that we need a TCP implementation that treats individual users, not individual flows, equally.

Talkback

  • Wow

    This puts a whole new perspective on the debate about internet bandwidth throttling by ISPs... I doubt the average user who wants unrestricted internet access is aware of the impact of P2P using multiple streams
    sgtgary@...
    • Yes it certainly does

      But if we can get Briscoe's weighted TCP mechanism implemented, then we can have a fair solution for everyone and P2P users won't need to be artificially and overly throttled.

      P2P throttling puts a static limit on a specific protocol and doesn't let the protocol burst past that limit even when there's nothing else going on in the network. Yet those throttled P2P users can still put a burden on the network and adversely affect other users because they won't completely get out of the way.

      Briscoe's weighted TCP solution makes everyone a winner. It lets P2P applications take as much as they want when nothing else is happening on the network, which is most of the time, but forces them to take a deep cut in performance for short durations. That allows normal applications to burst when they want and P2P users get all-you-can-eat bandwidth.
      georgeou
      • In My Honest Opinion

        I always loathed the initial blast of download speeds and how deceiving they were. If we can get some stability by taking the hit upfront, I am behind installing this patch.

        I can't say that many others will understand. Too many people can't comprehend net neutrality, I doubt they can wrap their heads around this.
        nucrash
    • George made no such illustration...

      SgtGary,

      Do you use broadband? Have you seen the behavior that George is describing? Why not?

      Because it DOESN'T work that way in real life. Inbound (downloading) congestion is controlled at the sending end. These ends are constantly adjusting based on their own configured limits, close-in congestion, and congestion in transit. Only in a lab, where you can control all of those variables, can you see this effect. And while that proves that it does exist, it also proves that the effective "unfairness" is not in proportional fractions to the number of open TCP connections.

      To make matters worse, George significantly overstates the "unfairness" by overstating the number of connections involved (10 to 100 is really 3 to 4) and the direction of the problem (only while uploading, in nearly all cases).

      This isn't journalism, it's more like marketing. It isn't editorializing, it's more like advertising.
      robb@...
      • You're wrong on several points

        First of all, weighted TCP mostly talks about data transmission or uploads. There are proposals to deal with the download stream as well but that's for a different article.

        Second, I have been using BitTorrent for years and I can tell you it doesn't restrict itself to 3 to 4 TCP streams. At the very least, I use 10 download streams and 4 upload streams for a single torrent. With 4 active torrents, I'm easily up to 16 upload streams and 40 download streams.
        georgeou
        • Who writes your articles?

          [pre] weighted TCP mostly talks about data
          transmission or uploads. There are proposals to deal
          with the download stream as well but [/pre]

          This is the second time you've responded to my critique as if you weren't familiar with the content of your article. YOUR ARTICLE talks about downloads and impacts on downloads. That's the part I'm refuting, and you act as if I'm suggesting downloads are the problem?! Okay, so we agree -- downloads are not the problem.

          [pre] At the very least, I use 10 download streams and
          4 upload streams for a single torrent.

          [/pre]...and like we both said, download is not the problem. I said upload.

          [pre] With 4 active torrents, I'm easily up to 16
          upload streams and 40 download streams.

          [/pre]Right, but you're also transferring four files simultaneously. It doesn't compare to your HTTP illustration unless it too is transferring four files simultaneously.
          robb@...
          • Couple things

            1. I never said downloads were not a problem; I said that was a separate problem out of the scope of this article.

            "Right, but you're also transferring four files simultaneously. It doesn't compare to your HTTP illustration unless it too is transferring four files simultaneously."

            Why wouldn't it compare? You're not making any sense. You're still cheating with a 40x download advantage over the typical user who only occasionally downloads one file at a time. You're also cheating with a 16x upload advantage over someone uploading a single file.
            georgeou
          • Wow, you are confused.

            Can you please stop using words like cheating? All they do is add flames to the debate. They describe intention which may or may not be there. Let's please speak with facts.

            Download doesn't matter. Downloads aren't congested. When there is no congestion, nobody has any advantage.

            [pre] You're also cheating with a 16x upload advantage
            over someone uploading a single file.

            [/pre]

            Okay, I see what you're getting at. Bob wants to upload 4 files and Alice wants to upload 1. They both should get an equally fast overall upload rate.

            And for the purposes of this argument, they do -- up to the point of congestion. Below the point of congestion, neither Bob (12-16 uploading connections) nor Alice (1 uploading connection) transmits faster than their ISP subscription allows.

            Once they hit that point of congestion, packets drop and the TCP stacks force all 13-17 flows to halve. As they recover, the ceiling they'll hit first is the one imposed by their ISP subscription.

            Net result: Beats me, I'd have to test it. My theory would be, however, that we spend more time being limited by our own ISP subscription than we do by TCP/IP. The ISP limitation is characterized by arriving at a relatively nicely shaped flat plateau of some speed, and maintaining that level speed line with little volatility. The TCP fallback and recovery is characterized by a speed graph that looks like the EKG of a heart-attack victim. Being that we spend more time bumping up against our ISP subscription limits, my hypothesis is that the uploader using 16 flows uses substantially the same amount as the uploader using 1.
            robb@...
          • Beats you pretty much sums it up

            "Net result: Beats me, I'd have to test it. My theory would be,"

            Test all you like, but nothing changes the fact that under a congested network, the multi-stream application has N times the advantage which is unfair. At this point, there shouldn't even be a debate about whether this is fair or not. It's not fair and it needs to be fixed.
            georgeou
  • RE: Fixing the unfairness of TCP congestion control

    Why wouldn't using QoS work just as well when what we're really talking about here is not congesting local networks? I was under the impression that placing things like VoIP and other protocols at a higher QoS (assuming that would actually work across the entire network which it doesn't right now, right?) would simply prioritize that traffic over all other.

    That would make sense for VoIP obviously, but why wouldn't giving *everything* except P2P a higher QoS give both sides a win here? You'd make sure everything not P2P could use up all the bandwidth it wants and needs to, while still leaving whatever is left -- which ought to be the majority of capacity -- to P2P.

    Why the need to tinker below that level of management? Sure, it'd be nice to have extra capacity, but what you could be doing is changing the entire approach to seeing it. You could fill a pipe to capacity, with QoS, and not upgrade when it's full but when the higher-QoS service quality degrades. The more the higher-priority traffic is used, the more P2P suffers, but the pipe remains stable right up until P2P stops working altogether.

    Sounds to me, without actually knowing very much about any of this, that such an approach would be ideal. I don't know about real networking equipment, but XP at the least I know has some rudimentary support for network QoS, does it not?
    pwtenny@...
    • Why not QoS? Because it solves the wrong problem.

      QoS is great to get us over a short-term congestion problem -- when the pipe is full for a few moments.

      The problem that CATV wants to solve is the ability to compete with FIOS given CATV's quite limited upload pipe. The size of this upload pipe limits uploads and downloads, as it is the upload pipe that carries TCP's "acknowledgement" (ACK) packets.

      When FIOS raced through the 10+ Mbps download abilities, Cable was in trouble. They could not demonstrate 10+ Mbps throughput with only 256 Kbps available upload for overhead. So they needed a way to offer more than 256 Kbps upload without actually adding any infrastructure.

      Adding QoS does not delay the need to add infrastructure.

      QoS works great for short periods of time -- works great meaning all the flows are seamlessly maintained. The user experience is improved over having no QoS.

      However, during sustained congestion, something has to lose. QoS simply becomes a "lifeboat" list -- deciding beforehand what traffic lives and what traffic dies. In that scenario, the user experience is often unacceptable with or without QoS.
      robb@...
    • This isn't packet prioritization, it's lower level than that

      This isn't packet prioritization, it's lower level than that. We're talking about flow rate management, not QoS. You need both mechanisms to make a network work really well.
      georgeou
  • RE: Fixing the unfairness of TCP congestion control

    > But underneath that facade and illusion of fairness, a very small percentage of users hog most of the Internet's capacity suffocating all other users and applications.

    Aren't you risking the wrath of the 'net neutrality' wingnuts and whackdoodles that infest the rest of ZDnet?
    Vesicant
  • Typo - Lyon's share?!?!?

    "P2P applications that consume the Lyon???s share of sheer volume on the Internet would hardly be slowed."

    Ummm...I think you mean "lion's" share. :)
    t_mohajir
  • This can be done at the router

    I run a small community wireless site, and I have been using my router to handle the problem of bandwidth hogs. The router is FreeBSD and I use the built-in PF+ALTQ, so this gives me quite a bit more flexibility than most routers (although I have no doubt Junipers can do essentially the same thing since they are really FreeBSD under the skin). This is an older computer, with 512MB and a 1.3Ghz Celeron.

    I measure, in 5 minute increments, total bytes from one internal IP to one external IP. Anyone soaking the pipe has the priority of that state lowered. I also measure total bytes per internal IP, and will pop all traffic to/from that IP to a lower priority. ALTQ allows me to choose from a variety of congestion control mechanisms (including ECN), I am still experimenting with what works best. When an IP pair (or single internal IP hog) goes "dark" for a while, it is removed from the congestion queue.

    This has been a success! The worst that happens to a bandwidth hog is their traffic is slowed-- but only when other people (including other bandwidth hogs) are using the system. "Normal" users don't see any change. For my purposes, "normal" means "anything that does not consume all of the bandwidth for five minutes."

    As with the congestion protocols, I am still experimenting with how to properly count bytes, handle multiple hogs simultaneously, and a few other details. Part of this is picking the right monitoring tools, so I can "see" what is happening throughout the day. Overall it seems to be working-- and I can't see how it could be labeled unfair. I don't care what you are doing, or how much of it you do. But if you do it enough, you have to go to the back of the line. Fair is fair.
    RestonTechAlec
    • Cheers!!

      If you write a paper on this, or a more detailed article, I'd love to have a link or a copy: robb-at-funchords.com

      Thanks
      robb@...
    • This is a much more complete way to solve the problem

      This is a much more complete way to solve the problem. Sure, we can also use disproportionate packet dropping to balance out the per-user flow rates. However, Briscoe is trying to fix the problem at a deeper level: the TCP standard itself. That makes it much less controversial than anything you do at the router level, which can be misinterpreted as "discrimination". For some ISPs, that can get people complaining to the FCC and demanding that you get fined $10,000 per user.

      Getting the fundamental TCP rules fixed is important on multiple levels. The problems are fixed on a more complete scale and the political fighting can be settled.
      georgeou
      • Deep not the right word, Elegant maybe

        While I mostly agree with George in this discussion, I have to say that calling the TCP rate algorithm change a "more complete" or "deeper" change is not really correct. It may be an elegant solution, but it is a blunt one, not quite aimed directly at the problem. Take your 10 P2P connections. Each goes to a different correspondent. The congestion exists somewhere in the net but that spot may affect only one of those connections.

        The weighting is accurate if the congestion is very close to your machine, but has a very different meaning once you get more than 2-3 hops away. Perhaps that's OK though, since the weighting could be seen as a proxy for priority.

        One more thing. I have to agree also that "cheating" is not the right word. As a P2P user you are simply presenting more demand than other users and the mechanisms aren't there to treat this type of demand properly, ie as background. It's like a store with no express line. If you've got 2 items, you're screwed. It's the fault of the store, not the customer buying a month's worth of junk food.

        It's also true that the nice junk food buyer will let the 2 item guy go first, just as a matter of common courtesy.
        rjcarlson49
        • It absolutely is a cheat and it's unfair

          It absolutely is a cheat and it's unfair to the vast majority of users not using P2P. They're paying just as much for their bandwidth but they're getting squeezed down to nothing. The ISP tries to make it a little fairer and they get accused of a crime before the FCC. It is a cheat that lets people hog bandwidth and it needs to be called out that way.
          georgeou