The news of Michael Jackson's death and subsequent coverage of his memorial service brought the Internet to a crawl and saturated even the most robust of content delivery networks.
When the King of Pop died last week, I swore that I was going to be tasteful and not cover it. I was going to take the opposite route -- focus on pure technology, and let the other opportunistic bloggers grab the hits on bottom-feeder clickbait material. In all fairness, too much of my subject matter in the last few weeks has been bordering on the frivolous, and I wanted to get back to writing about hardcore subjects. Traffic be damned -- my street cred is at stake here.
I decided to load my plate by covering the Apollo 11 40th anniversary with an article series about the companies that made the historic moon mission happen, and I also started playing with my Optimum Ultra high-speed broadband to see how fast it could go. Between those two, I figured it would keep me safely out of Jacko territory.
I was wrong.
As I explained in my other piece this week about my Optimum Online Ultra testing, I contacted the folks over at Limelight Networks for access to a high-speed server directly connected to Cablevision's internal network backbone, so that I could attempt to saturate my 101Mbps connection and see if I could get anywhere close to Optimum's advertised 12.5MB per second sustained transfer rate. From a practical standpoint, and from years of experience working with enterprise networks, I knew that the realities of protocol overhead would prevent this from happening. But I also knew that 70 to 80 percent utilization was possible, particularly if I was connected to a very low latency server (under 10ms) with large amounts of incoming bandwidth, and if I did the transfers during off-hours.
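To put a rough ceiling on what "saturating" the link could even mean, here's a back-of-envelope sketch of best-case goodput on a 101Mbps link once Ethernet, IP, and TCP header overhead is subtracted. The framing numbers below assume a standard 1500-byte MTU with no IP or TCP options -- illustrative assumptions, not measurements from my setup:

```python
# Best-case goodput estimate for a 101 Mbps link, assuming standard
# Ethernet framing (1500-byte MTU) and minimal IP/TCP headers.
LINK_MBPS = 101.0

MTU = 1500          # bytes of IP packet per Ethernet frame
IP_HEADER = 20      # bytes, no options
TCP_HEADER = 20     # bytes, no options
ETH_OVERHEAD = 38   # preamble + header + FCS + inter-frame gap

payload = MTU - IP_HEADER - TCP_HEADER   # 1460 data bytes per frame
on_wire = MTU + ETH_OVERHEAD             # 1538 bytes on the wire

efficiency = payload / on_wire
goodput_mbps = LINK_MBPS * efficiency
goodput_mbytes = goodput_mbps / 8

print(f"Efficiency: {efficiency:.1%}")
print(f"Best-case goodput: {goodput_mbps:.1f} Mbps "
      f"({goodput_mbytes:.2f} MB/s)")
```

Header overhead alone only costs about 5 percent; the rest of the gap between theory and my 70-to-80-percent expectation comes from TCP's congestion behavior, latency, and competition for the pipe.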
I chose Limelight Networks specifically because I knew they had performed extremely well when hosting the 2008 Summer Olympics videos for NBC, and that they had a very fast fiber backbone with a large network of globally replicated data centers. So I ran my first series of download tests between 10 and 11PM on the night of July 7th, which is well within "after hours" for the New York Metro area.
As you can see by comparing the two sets of identical large file transfers, on July 7th I was able to achieve a peak transfer rate of 4.36MB per second, with an average of 3.92MB per second. That translates to roughly 34.88 Mbps on the high end and 31.36 Mbps on the low end -- both substantially below the 12.5MB per second (101Mbps) advertised by Cablevision. Again, this is a server that is fewer than 3 hops away from Cablevision's backbone and has tremendous amounts of incoming bandwidth. I also tried a transfer against a Boston-based ISP with 3 gigabits of incoming bandwidth and came closer to 9.5MB per second (76 Mbps) on a 579MB ISO file.
[EDIT: A re-attempt at the Boston-based 3 gigabit provider on July 9th yielded a burst transfer rate of 11.4MB per second, or 91Mbps, on the 579MB ISO]
So I sent these results over to Paul Alfieri at Limelight, who's been helping me set up these tests. Paul wants to help figure out why we're not maxing out, but his engineers are "busy." Okay, no problem, we'll get to this tomorrow.
I emailed Alfieri again today to see if I could get access to his engineer. His response:
Got a note from one of our engineers who read your post – he said to read the TCP RFC. Not being a geek like him, I’m not sure what that means, and since he’s on the West Coast, I won’t have an answer until later. But hopefully you have an idea what that is and might be able to take another look at the performance.
Now, for those of you paying attention, "Read the RFC" is a nice way of saying "Do your own homework and get an education." He wants me to study this document because clearly I am a network neophyte compared to his godliness. The last time I read it was probably back in college. It's the document (RFC 793) that defined the TCP protocol for the Defense Advanced Research Projects Agency, written in 1981.
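For what it's worth, the engineer's homework assignment probably boils down to one formula: a TCP connection can never move data faster than its window size divided by the round-trip time, no matter how fat the pipe is. A minimal sketch, assuming the classic 64KB maximum window (i.e., no window scaling) and a few illustrative RTT values:

```python
# TCP's throughput ceiling: window size / round-trip time.
# 65,535 bytes is the largest window expressible without the
# window-scaling extension; RTT values below are illustrative.
def max_tcp_throughput_mbps(window_bytes, rtt_seconds):
    """Upper bound on single-connection TCP throughput, in Mbps."""
    return window_bytes * 8 / rtt_seconds / 1e6

CLASSIC_WINDOW = 65_535  # bytes

for rtt_ms in (5, 10, 25, 50):
    cap = max_tcp_throughput_mbps(CLASSIC_WINDOW, rtt_ms / 1000)
    print(f"RTT {rtt_ms:>2} ms -> cap {cap:6.1f} Mbps")
```

Note that at a 10ms round trip, a single unscaled connection tops out around 52 Mbps -- which is why I specified a sub-10ms server in the first place, and why window tuning matters when chasing a 101Mbps line rate.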
Then it dawned on me. Could it have been... JACKO?
Facebook and a number of other large news sites such as CNN were performing horribly last night. Everyone was tuning into the Jacko memorial coverage and streaming cached video of the replays. Limelight is a CDN that does this for other news organizations as well. Was I doing a transfer test from a CDN at the worst possible time?
So I decided to run the benchmarks against Limelight again -- this time today, July 8th, around 2PM, during peak business hours. The result? A maximum sustained transfer of 7.72MB per second, or 61.76 Mbps. That's effectively twice what I got the previous night. I suspect that if I run these again during off hours tonight or tomorrow, as the Jacko news becomes yesterday's fish wrapping, I'll get closer and closer to the 3 gigabit pipe transfer I did, and perhaps even exceed it.
I asked Paul about the impact of Jackdotting (that's a new word I invented, combining Jacko with Slashdotting, to describe the effect of a major news story or celebrity death on global Internet content delivery) on Limelight's network. His reply:
On MJ, we actually performed really well and didn’t have to cap bandwidth, etc. We’re still counting streams, but think it was already more than the Obama election or any single event at the Olympics. Incredible…
Paul would not give me specific metrics, in order to maintain confidentiality, but I believe him that the event was record-breaking. I got a more concrete response from Shari Foldes at Ustream, the CDN that provides video streaming for CBS, ZDNet's parent company:
Ustream hosted nearly 4.6 million total streams and 1.6 million total uniques from fans around the world to watch and interact with the Michael Jackson memorial service on Ustream, including its breaking news feed from partner CBS, making this the largest event ever to be hosted on Ustream.
Unlike other video feeds of the event, the streams were worldwide and not restricted to the United States, and included chat and Ustream's Social Stream. Over 12,000 messages in the chat rooms were sent per minute.
Ustream was also the only place to watch the service on the iPhone, via the Ustream Viewing Application.
For the Obama inauguration, Ustream powered more than 3.8 million streams, with 400,000 concurrent visitors watching during the oath and speech.
ZDNet Editor in Chief Larry Dignan reported yesterday about the huge spikes in traffic on the 7th on all the major content delivery networks, all of which experienced some sort of throughput problems no matter how big their infrastructure was. Based on the time frame of the published metrics at Akamai and Ustream, I have good reason to believe that I was hitting Limelight smack dab in the middle of the massive Jackdotting storm. So they may not have been "capping" my download, but I was competing with everyone and his brother for access to their network.
So the big question remains -- have we learned anything about unplanned media streaming and how to prepare for it? In the case of the Olympics and the Obama inauguration, CDNs had ample time to tweak their networks and prepare for what was necessary, but the death of a major rock star icon brought the Internet to a screeching halt. It's clear to me that there needs to be some type of partnership between the CDNs and the Tier 1s to account for "Emergency" streaming situations for cached video files that exceed normal day to day bandwidth requirements. Planned events are one thing, but when the King of Pop dies, all bets are off.
Do CDNs need to better plan for future "Jackdotting" events? Talk Back and Let Me Know.