
Can Twitter's backbone handle sudden jump in popularity?

Written by Sam Diaz

For the past week or so, I've been noticing - and hearing some rumblings about - a greater-than-normal number of overload errors spewing from Twitter. Even though the overloads likely stem from the recent mainstream Twitter craze - which is a good thing - it's clear that Twitter is not equipped to handle this level of traffic.

That's not a good thing.

In the month of February, U.S. traffic to Twitter was up more than 1,000 percent from a year earlier, according to comScore. Worldwide, the number of visitors in February was up more than 700 percent from the previous year. It's probably safe to say that the growth curve will stay on the upswing for March and April, too. That means more traffic and more overload messages.

Over at the Tech Broiler blog, Jason Perlow has put up a post explaining why he thinks the backbone technology processing these millions and millions of tweets is insufficient - especially given the growth numbers. Perlow refers to the processes as transactions - yeah, just like the ones a major financial institution might manage. He explains that at any given time, millions of rows of Twitter data are being both written and read, unlike, say, Google's search engine, which tends to be more "read-intensive." Perlow writes:

Google is able to distribute their database and “spiders” the web as it needs to, inserting data into it using a sophisticated algorithm and fully in control of when it happens. While many millions of users query Google’s database on demand, data isn’t inserted into it so quickly that its requirements are I/O bound at the database level — much of what is being queried is being pulled out of cache, which is residing in memory. Google is able to minimize the number of disk writes and stripe reads over a large amount of systems using fairly off-the-shelf components. So can Twitter do the same thing as Google and build a large distributed cluster of systems, such as with Aster Data’s “Beehive” or a few hundred racks of Kickfires to solve the problem? In a word, no.
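
To make that contrast concrete, here is a rough toy model in Python. The numbers and the cache behavior are my own assumptions for illustration - they don't come from Twitter or from Perlow's post - but they show how a read-mostly workload can be served largely from memory, while a workload that keeps inserting rows into the same tables it is reading keeps falling back to disk.

import random

# Toy model with made-up numbers: count how many requests have to touch disk
# when a workload is read-mostly versus when constant writes keep invalidating
# the rows that readers want.
def simulate(total_requests, write_fraction, cache_size, key_space, seed=1):
    rng = random.Random(seed)
    cache = set()              # rows currently sitting in memory
    disk_ops = 0
    for _ in range(total_requests):
        key = rng.randrange(key_space)
        if rng.random() < write_fraction:
            disk_ops += 1      # every insert/update is a disk write
            cache.discard(key) # and it invalidates any cached copy of that row
        elif key in cache:
            pass               # read served from memory, no disk I/O
        else:
            disk_ops += 1      # cache miss: read the row from disk
            if len(cache) >= cache_size:
                cache.pop()    # evict something to make room
            cache.add(key)
    return disk_ops

requests = 100_000
# Read-mostly, search-engine-like: 1 percent writes, the hot rows stay cached.
print("read-mostly disk ops:", simulate(requests, 0.01, 5_000, 4_000))
# Write-heavy, Twitter-like: 40 percent writes churning the rows being read.
print("write-heavy disk ops:", simulate(requests, 0.40, 5_000, 4_000))

On those invented numbers, the write-heavy run ends up doing roughly ten times the disk I/O of the read-mostly run for the same number of requests - which is the flavor of pressure Perlow is describing.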

Perlow - who fully discloses his IBM connection - says Twitter needs a mainframe. Maybe two. I can't speak to the needs of Twitter's backbone with any real degree of expertise. But I do know that whatever is back there powering Twitter today obviously is not beefy enough to handle the growth in traffic. Sure, we're in tough economic times, and an investment in a mainframe might be tough to swallow - especially since Twitter still hasn't announced a solid plan for making money.

Also see: Twitter: A fine 'pre-business' but un-monetizable and a deadly acquisition target

But Twitter can't just keep throwing up error messages and Fail Whale screens. How will the company grow membership - or even hang on to the members it has now - if the experience starts becoming more of a hassle than it's worth? It has to make an investment of some sort - and Perlow goes on to explain why different approaches won't work:

Twitter’s performance problems cannot be solved in a massive scale out distributed systems manner. Twitter is doing all those simultaneous row inserts while it is doing reads, which is murder on CPU and on IOPS and you would eventually run into scalability problems, and the build out costs using [mid-range] systems would be cost prohibitive.
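
To get a feel for the arithmetic behind that claim, here is a rough back-of-envelope sketch in Python. Every figure in it is a placeholder I made up for illustration - none of them come from Twitter or from Perlow - but it shows how quickly simultaneous inserts and fan-out reads add up to more disk I/O than a pile of mid-range boxes can comfortably serve.

# Back-of-envelope sketch with hypothetical numbers (tweet rate, fan-out,
# read rate and per-disk IOPS are all assumptions, not real Twitter figures).
tweets_per_sec  = 600          # assumed insert rate
followers_avg   = 100          # assumed average fan-out per tweet
timeline_reads  = 20_000       # assumed timeline fetches per second
rows_per_read   = 20           # rows pulled per timeline fetch
cache_miss_rate = 0.30         # writes keep pushing hot rows out of cache

write_iops = tweets_per_sec * followers_avg   # assumes one row insert per follower
read_iops  = timeline_reads * rows_per_read * cache_miss_rate

iops_per_disk = 180            # ballpark for a single 15K-rpm spindle in 2009
disks_needed  = (write_iops + read_iops) / iops_per_disk

print(f"write IOPS: {write_iops:,.0f}   read IOPS: {read_iops:,.0f}")
print(f"disks needed at {iops_per_disk} IOPS each: {disks_needed:,.0f}")

On those invented numbers you are already looking at something like a thousand spindles just to keep up with the I/O, before adding any redundancy or headroom - which is the kind of cost problem Perlow says mid-range scale-out runs into.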
