Google speeds up Cloud Platform with new networking algorithm

The BBR ("Bottleneck Bandwidth and Round-trip propagation time") algorithm does away with the assumption that packet loss equals network congestion.
Written by Stephanie Condon, Senior Writer

Aiming to make its public cloud offerings faster and more reliable, Google on Thursday announced that Google Cloud Platform (GCP) is now using a new congestion control algorithm called Transmission Control Protocol (TCP) BBR.

In comparison to the previously used congestion control algorithm CUBIC, BBR ("Bottleneck Bandwidth and Round-trip propagation time") has delivered higher throughput, lower latency, and a better quality of experience across Google services.

BBR should benefit GCP customers in a couple of ways: First, they'll have faster access to their data when using GCP services like Spanner or Bigtable, since traffic from the GCP service to the application is sent using BBR.

Additionally, internet users should get faster access to a GCP customer's website. When a GCP customer uses Google Cloud Load Balancing or Google Cloud CDN to serve and load balance traffic for their site, the content is sent to users' browsers using BBR.

Google developed BBR by effectively re-writing the rules of congestion control. Typically, such algorithms are loss-based, meaning they decide how fast to send data across networks based on indications of lost packets. BBR, by contrast, builds a dynamic model using recent measurements of a network's delivery rate and round-trip time.
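In rough terms, the model works like this: each acknowledgment yields a delivery-rate and round-trip-time sample, the maximum recent delivery rate estimates the bottleneck bandwidth, the minimum recent RTT estimates the round-trip propagation time, and the sending rate is set from those estimates rather than from loss events. The sketch below illustrates that idea; the window lengths and gain values are simplified illustrative assumptions, not the parameters of the production BBR implementation.

```python
from collections import deque


class BbrLikeModel:
    """Illustrative sketch of BBR-style model-based rate control.

    Keeps recent delivery-rate and RTT samples and derives a pacing
    rate from them. Window sizes and gains are made-up simplifications,
    not the real BBR constants.
    """

    def __init__(self, bw_window=10, rtt_window=10):
        self.delivery_samples = deque(maxlen=bw_window)  # bytes/sec
        self.rtt_samples = deque(maxlen=rtt_window)      # seconds

    def on_ack(self, delivery_rate, rtt):
        # Each ACK contributes one measurement of each quantity.
        self.delivery_samples.append(delivery_rate)
        self.rtt_samples.append(rtt)

    @property
    def btl_bw(self):
        # Bottleneck bandwidth estimate: max recent delivery rate.
        return max(self.delivery_samples, default=0.0)

    @property
    def rt_prop(self):
        # Round-trip propagation time estimate: min recent RTT.
        return min(self.rtt_samples, default=float("inf"))

    def pacing_rate(self, gain=1.0):
        # Send at gain * estimated bottleneck bandwidth; note that
        # packet loss plays no direct role in this decision.
        return gain * self.btl_bw

    def cwnd_bytes(self, gain=2.0):
        # Cap data in flight near the bandwidth-delay product.
        return gain * self.btl_bw * self.rt_prop
```

A loss-based algorithm like CUBIC would instead grow its window until a loss occurs and then cut it; here the rate tracks the measured bandwidth directly.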

According to Google's tests, BBR's throughput can be as much as 2,700 times higher than CUBIC's, with queueing delays up to 25 times lower.

BBR is already powering TCP traffic from Google.com and YouTube. By more effectively finding and using bandwidth offered by the network, it improved YouTube's network throughput by 4 percent on average globally and by more than 14 percent in some countries. It's also kept network queues shorter, reducing round-trip time by 33 percent.
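BBR is not limited to Google's own servers: it has shipped in the mainline Linux kernel since version 4.9. On a Linux machine with a suitable kernel, an administrator can check for it and switch to it with sysctl (a sketch; availability depends on how the kernel was built):

```shell
# List the congestion control algorithms this kernel offers.
sysctl net.ipv4.tcp_available_congestion_control

# If "bbr" appears in the list, select it for new TCP connections.
sysctl -w net.ipv4.tcp_congestion_control=bbr
```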
