Delivering the Olympics: Akamai and Limelight respond

Akamai disputes Limelight Networks' take on its infrastructure and my recent post, Limelight Networks: Why the Olympics didn't 'Melt' the Internet. Limelight, however, says its implementation facts are accurate and that it absolutely stands behind its words.
Written by Jason Perlow, Senior Contributing Writer

Keep in mind that the two companies compete fiercely and have different takes on their respective content delivery approaches. (See Larry Dignan's Akamai: Broadband consumption clouds outlook.) Here, in full, is Akamai's response to the Limelight infrastructure post from David Belson, Director, Market Intelligence; followed by Limelight's response from Paul Alfieri, Senior Director, Corporate Communications.

Akamai's response:

I read with interest your recent blog post on ZDNet entitled "Limelight Networks: Why the Olympics didn't 'Melt' the Internet."  However, I feel compelled to correct a number of inaccuracies in your post.

Akamai uses a centralized data hosting infrastructure with big Internet pipes that mirrors content that is hosted on a customer's own servers. Usually with the aid of a special caching appliance installed at the customer's ISP or edge network, the request to download that content is re-directed to Akamai's own servers and fat Internet pipes.

Akamai in no way, shape, or form uses a "centralized data hosting infrastructure", and it is simply incorrect to claim that we do.  Akamai hosts our servers in nearly 1000 networks, including backbone networks, end-user ISPs, and educational institutions.  We also do not simply "mirror" content -- Akamai caches content requested by end users, and we do so on a "pull" model -- we do not simply "mirror" content out to our network.  In addition, requests to download content are not re-directed to Akamai, as you claim -- they are sent to the Akamai servers initially, without any re-direction, as content providers set up DNS records to point their hostnames directly at Akamai.  (Note that there is an important distinction between the DNS-based request routing used by Akamai, and a method in which a customer's server takes the first request and then issues an HTTP redirect to an Akamai server.)
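
For readers who want to see that distinction concretely: with DNS-based routing, the customer's own hostname simply resolves (via CNAME) to CDN edge addresses, so the browser's very first connection already lands on the CDN and no HTTP redirect round trip is involved. Here is a minimal Python sketch; the hostname is a hypothetical placeholder, not an actual Akamai customer.

    # Sketch of DNS-based request routing, assuming a hypothetical hostname.
    # The customer's hostname resolves straight to CDN edge servers, so the
    # first TCP connection goes to the CDN -- no HTTP redirect is involved.
    import socket

    def show_dns_routing(hostname):
        # gethostbyname_ex returns (canonical_name, alias_list, ip_addresses)
        canonical, aliases, addresses = socket.gethostbyname_ex(hostname)
        print("queried :", hostname)
        print("resolves:", canonical)
        print("edges   :", addresses)

    show_dns_routing("www.example-customer.com")  # hypothetical customer hostname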

As fat as Akamai's pipes are, I've seen MSDN's downloads slow to a crawl during peak download periods, such as the days following Windows XP SP3 and Windows Vista SP1's release.

While Akamai's servers do have excellent connectivity and an intelligent mapping system that ensures users are connected to the optimal server, there are also other factors outside of our control that can impact your download experience.  (And this would be the case for any CDN, not just Akamai.)  How many of your colleagues on your local network (or your neighbors, if you were at home) were also trying to download SP3/SP1 at the same time?  What about your upstream network provider?  If congestion is occurring between an Akamai edge server and the end user (due to local network problems), that will likely impact the download experience an end user has.

As it turns out, Akamai is actually used for some, but not all of the cached content used on NBCOlympics.com - it hosts the "static" content such as the .JPG files and HTML. However, for all the heavy lifting, such as the streaming video, it's all going through infrastructure hosted by Limelight Networks.

This is not entirely correct.  Akamai is delivering and accelerating the static and dynamic content for nbcolympics.com.  This is particularly important to note, as if users can't reach the Web site, then they won't be able to get to the streams.  In addition, while the streams (the so-called "heavy lifting") are delivered by Limelight, Akamai is enabling NBC to monetize this content, as we are delivering the advertisements that wrap the streams and the player.

Above: Limelight Networks' Operations Center in Tempe, AZ.

I suspect that this picture is more likely a data center photo, as I would presume that their NOC is staffed, with NOC Analysts monitoring various network components & events.

They are a Tempe, Arizona-based company which operates a global network of fiber-optic interconnected datacenters. Their backbone is capable of 2 Terabits per second (Tbps) of sustained data transfers...

This statement is not entirely correct.  Limelight leases data center space in large hosting facilities (such as Equinix and other similar facilities), and interconnects these with fiber leased from Global Crossing.  In addition, while they have a "backbone" that interconnects these data centers, the backbone itself is not capable of sustaining 2 Tbps.  As they noted on their recent earnings call, this is their aggregate network capacity -- that is, the claimed outbound capacity that they can supposedly maintain in delivering content to end users.
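
To put an aggregate figure like 2 Tbps in perspective, a quick back-of-the-envelope calculation shows how many simultaneous streams such capacity could theoretically sustain. The bitrates below are illustrative assumptions only, not figures from either company.

    # Back-of-the-envelope: simultaneous streams a 2 Tbps aggregate egress
    # figure could in theory sustain. Bitrates are illustrative assumptions.
    AGGREGATE_BPS = 2e12  # 2 Terabits per second

    for label, bitrate_bps in [("low-bitrate video", 300e3),
                               ("SD video", 800e3),
                               ("HD video", 3.5e6)]:
        concurrent = AGGREGATE_BPS / bitrate_bps
        print(f"{label:>18}: ~{concurrent:,.0f} concurrent streams")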

Where Limelight differs from Akamai and why the Internet didn't "melt" is quite simple - they are completely "off the cloud".  In other words, unlike Akamai and similar content caching providers, their system isn't deployed over the public Internet.

Again, simply incorrect.  Limelight may use a dedicated "off the cloud" backbone to interconnect their data centers, but in order to deliver content (streams) to end users, they absolutely, positively, must use the public Internet.

When you download videos from NBCOlympics.com, your computer isn't actually going to the Internet to get content. In fact, the content is usually no more than 2 router hops away from your ISP. Limelight has partnered with over 800 broadband Internet providers worldwide (such as Verizon, Comcast, Road Runner and Optimum Online/Cablevision) so that the content is either co-located in the same facility as your ISP's main communications infrastructure, or it leases a dedicated Optical Carrier line so that it actually appears as part of your ISP's internal network.

In most cases, you're never even leaving your Tier 1 provider to get the video.

Your computer *is* going to the Internet to get content -- it does so as soon as it leaves your cable modem, or corporate router/firewall.  In addition, given the architecture of the Internet, 2 router hops, as you claim, generally won't get you terribly far across a network -- maybe to another city, but likely not all the way to Limelight-delivered content.

In contrast, Akamai deploys servers *within* many broadband Internet providers and other similar networks, so that, in fact, users are never leaving their local ISP to get content.  In addition, your statement about the relationship that Limelight has with these 800 network providers is misleading, in my opinion -- they claim to peer with these network providers -- it is not so simple as it being "co-located" in the same facility.  There are a number of challenges inherent in relying so heavily on peering to deliver this content -- as soon as it hits the broadband network, Limelight loses the ability to control the quality of the content.  (Contrast this with Akamai's deployment of servers within these networks -- we can control quality all the way out to the edge.) Finally, "Tier 1 providers" are generally considered to be the large international backbone providers -- AT&T, Level 3, etc. -- if a user has to go all the way up to a Tier 1 provider to get content, then the chances for a high-quality experience drop significantly.

...which replicates all 3000+ hours of Olympics video to its global network of ISP co-located data centers and is queued for media streaming at the very edge your local ISP's network (see network trace screen shot above).

Limelight's servers are not located at the "very edge [of] your local ISP's network" as you claim.  They are, at best, located across a peering connection which could be taking place hundreds or thousands of miles away, or across a transit connection, which could incur similar distance-imposed latency, not to mention increased opportunity for packet loss and congestion.  Again, contrast this with Akamai's massively distributed architecture, in which we place servers directly within the end user's network, enabling them to stay truly on-net to retrieve this content.  In addition, you make reference to a "network trace" screen shot -- my assumption is that you should have been referencing a traceroute, showing the path to the Limelight server(s), but none of the windows in your screenshot are of a traceroute -- the "DebugView" window simply shows active network connections (with IP addresses), and "York" is simply a network sniffer tool -- neither of these tools is showing any type of "network trace" as it is commonly defined, and as would be used to make the point that you are attempting to make. (Editor's note: we were dissatisfied with the screen shot as well, as Limelight's firewalling from their edge server to Optimum Online prevented us from doing a screen shot of a proper traceroute indicating the total number of hops. We're hoping to rectify this shortly with an internal traceroute shot -- JP)
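
For the curious, the measurement both sides are implicitly arguing about is a traceroute: probes are sent with an increasing IP time-to-live, and each router that expires a probe answers with an ICMP Time Exceeded message, which is how you count the hops between you and a CDN edge server. A bare-bones Python sketch follows; it needs root privileges for the raw ICMP socket, the destination address is a placeholder, and the stock traceroute/tracert utilities do exactly the same job.

    # Bare-bones traceroute sketch (Unix, needs root for the raw ICMP socket).
    # Probes go out with an increasing IP TTL; each router that expires a
    # probe answers with ICMP Time Exceeded, which is how hops are counted.
    # The destination address below is a placeholder, not a real CDN server.
    import socket

    def traceroute(dest_addr, max_hops=30, port=33434, timeout=2.0):
        for ttl in range(1, max_hops + 1):
            recv = socket.socket(socket.AF_INET, socket.SOCK_RAW,
                                 socket.getprotobyname("icmp"))
            send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM,
                                 socket.getprotobyname("udp"))
            recv.settimeout(timeout)
            recv.bind(("", port))
            send.setsockopt(socket.IPPROTO_IP, socket.IP_TTL, ttl)
            send.sendto(b"", (dest_addr, port))
            try:
                _, (hop_addr, _) = recv.recvfrom(512)
            except socket.timeout:
                hop_addr = "*"
            finally:
                send.close()
                recv.close()
            print(f"{ttl:2d}  {hop_addr}")
            if hop_addr == dest_addr:
                break

    traceroute("203.0.113.10")  # placeholder address (documentation range)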

Localized content caching is going to be the wave of the future, especially when we start seeing lots of "On Demand" content being offered from next-generation media delivery services.

Frankly, this is old news.  Akamai has been offering truly localized content caching, placing servers in local ISPs since 1999 to deliver content to their end users.  We have been handling the delivery of "on-demand" content in a massively scalable fashion for nearly 10 years now, and our deployment plans & roadmap have us continuing to push further and further towards the end user.

The near-flawless operation of the live video streaming from NBCOlympics.com over the last week is proof that localized content caching technology works.

Your claim here is technically incorrect.  If the streaming is indeed live, then "content caching technology", whether localized or not, would play no part in it, as live streaming cannot, by definition, be cached.

I'm disappointed that ZDNet, an organization with a strong technical heritage, both in print and online, would choose to publish an article/blog post with so many glaring technical errors.

I'd be happy to further educate you on the role that Akamai has played in delivering Olympic content, including streaming video internationally.

Remember, the US isn't the only place where people are following the Olympics.  In addition, I'd be happy to provide additional education on the difference between centralized (Limelight) and distributed (Akamai) CDN architectures, and the overall advantage that distributed architectures hold, especially when it comes to the massively-scalable requirements for delivering large amounts of high-quality video content.

Limelight's response:

1) Regardless of his interpretation of our architecture, the fact of the matter is the Olympic video "just plain works" (your words in the original article). Akamai was NBC's incumbent CDN, but we got the Olympics deal because NBC judged our approach to be one that best suited the type of online event they wanted to create.

2) In addition to handling Olympic video, we also delivered updates for MS Patch Tuesday this past week, and continued to support customers such as Amazon Unbox, Netflix's streaming service, MySpace, and even your sister company CNET Networks (plus 1,300 other customers). Specifically on Patch Tuesday, no issues were reported akin to what you saw with Akamai and MSDN, even while we were delivering all of that Olympic video -- and not just video for NBC, but content for international broadcasters as well.

3) "In addition, while the streams (the so-called "heavy lifting") are delivered by Limelight, Akamai is enabling NBC to monetize this content, as we are delivering the advertisements that wrap the streams and the player" - I'll remind you, if the video looks terrible, no monetization is happening.

4) "Limelight leases data center space in large hosting facilities (such as Equinix and other similar facilities), and interconnects these with fiber leased from Global Crossing." -- While we do lease some fiber, we own much of our own fiber. We went public last year - check out our S-1 and you will see this to be the case. Additionally, the pictures are from our own facility. Besides, who cares if we lease or own? Why is this an issue? We are connecting storage at the speed of light to last-mile networks, and as long as we can deliver QoS (as we are with the Olympics), who cares who owns the fiber lines?

5) "They are, at best, located across a peering connection which could be taking place hundreds or thousands of miles away, or across a transit connection, which could incur similar distance-imposed latency, not to mention increased opportunity for packet loss and congestion." -- I'll remind you that with an optical network, "a thousand miles" can be traversed by a packet in a matter of ms because everything moves at the speed of light. And when you own your own network, you have complete control over the conditions of that network so "congestion" or "packet loss" is rare. In the instances when we lease, our peering/transit relationships look different than Akamai's because we have capacity to offer back to the Tier 1 telco. It's not a straight up lease. This means we can negotiate better terms.

6) "Again, contrast this with Akamai's massively distributed architecture, in which we place servers directly within the end user's network, enabling them to stay truly on-net to retrieve this content." -- Do they really put 5PB of storage inside the end-user network? I think they only cache the most popularly requested content. This means that for long-tail projects like the Olympics, some videos are going to take longer than others to get to the end user, because Akamai has to "pull" it from farther back in their chain by traversing the public Internet. Contrast that with our approach, where a 5PB server farm is linked at the speed of light to the edge of the last mile network.

7) "Your computer *is* going to the Internet to get content -- it does so as soon as it leaves your cable modem, or corporate router/firewall." I think this is a bit of semantics. We see the public Internet as something separate from the last mile access network; they may not agree with that definition. Again, regardless of what you call it, I point to the Olympics video as evidence that our approach works.
