
Google's three rules

They roll out new applications for millions of users with surprising speed, especially compared to corporate IT. They build data centers with hundreds of thousands of servers - and millions of disk drives - and run it all on free software.
Written by Robin Harris, Contributor

Costly corporate kit, like RAID arrays and 15k FC drives, isn't used. Yet Google does more work in an hour than most companies do in a year.

Google's IT capabilities are a modern wonder of the world. Underneath the complexity, though, are just three simple rules. Rules that no enterprise data center (EDC) would ever think of following.

I'm attending the Google scalability conference (see a short version of the agenda here) in Bellevue, WA tomorrow, which got me thinking about the Google rules of IT.

This is not your father's data center

How does a Google data center differ from an EDC? Other than using electricity, in just about every way that matters.

Cheap

The key to Google's competitive strategy is that they have the cheapest compute, network and storage (CNS) in the industry. Free or home-made software. Mass-produced - by Intel, these days - servers-on-a-board with network, storage and energy-efficient dual-core processors. SATA drives. Unmanaged 48-port switches.

EDCs don't care about cost. They focus on uptime. The low-volume hardware they buy is reliable and very expensive. As a result, EDC service growth lags far behind Moore's Law. EDCs are nursing homes for aging apps, not hotbeds of innovation.

Embrace failure

Cheap also means things break. And when you've got hundreds of thousands of servers, lots of things break every day. Get over it. Google expects failure and builds recovery into the software layer that connects the cheap kit.

The EDC buys low-volume kit that tries to engineer out failure. Google gets uptime by building failover on top of the hardware, not into it. Today's data center guys break out in hives just thinking about it. Twenty years from now everyone will do it that way, but not today.
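
What does "failover in software" look like? Here's a minimal sketch of the idea: treat any single node as expendable and retry against the next replica. This is illustrative Python, not Google's actual code; the names (fetch_block, NodeFailure, the node labels) are made up for the example.

```python
import random

random.seed(1)  # deterministic demo output

class NodeFailure(Exception):
    """A single cheap node failed mid-request - expected, not exceptional."""

def fetch_block(replica: str, block_id: int) -> bytes:
    # Stand-in for a real RPC; fails randomly to mimic flaky commodity kit.
    if random.random() < 0.2:
        raise NodeFailure(replica)
    return f"block {block_id} from {replica}".encode()

def reliable_fetch(replicas: list[str], block_id: int) -> bytes:
    # Failure is routine: walk the replica list until one node answers.
    last_error = None
    for replica in replicas:
        try:
            return fetch_block(replica, block_id)
        except NodeFailure as err:
            last_error = err  # note it and move on; dead nodes are normal
    raise RuntimeError(f"all replicas failed, last: {last_error}")

print(reliable_fetch(["node-17", "node-203", "node-9001"], block_id=42))
```

The hardware stays dumb and cheap; the reliability lives entirely in those few lines of retry logic.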

Architect for scale

This is the flip side of cheap. Google hired some of the best minds in the business to architect for scale. They have multiple 8,000-node clusters that they've talked about, and I wouldn't be surprised if they've got some up to 16,000 nodes.
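
One building block of that kind of architecture is sharding: hash each key to one of thousands of interchangeable nodes, so capacity grows by adding boxes instead of buying bigger ones. A rough sketch, assuming an 8,000-node cluster; the node count and naming are illustrative, not Google's design.

```python
import hashlib

NUM_NODES = 8000  # matches the cluster size mentioned above; illustrative

def node_for_key(key: str) -> str:
    # Deterministic hash -> node index, so every client routes the same way.
    digest = hashlib.md5(key.encode()).hexdigest()
    return f"node-{int(digest, 16) % NUM_NODES}"

for key in ["user:alice", "user:bob", "crawl:example.com"]:
    print(key, "->", node_for_key(key))
```

A production system would use consistent hashing so that adding nodes doesn't reshuffle every key, but the principle - spread the load, grow by adding cheap nodes - is the same.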

Architecting for scale leverages cheap CNS to give Google the lowest-cost growth as well. Competitors such as Yahoo, who rely more on standard EDC products, can do the same things as Google, but it costs them about 10x in capital expense and several times the operations expense.
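
To make that 10x concrete, here's a back-of-envelope model. The per-node prices are my illustrative assumptions about commodity versus enterprise kit, not figures from Google or Yahoo.

```python
COMMODITY_NODE = 1_500    # assumed: server-on-a-board, SATA disks, cheap switch port
ENTERPRISE_NODE = 15_000  # assumed: branded server, FC drives, RAID array, SAN port

def cluster_capex(nodes: int, cost_per_node: int) -> int:
    # Total hardware capital expense for a cluster of identical nodes.
    return nodes * cost_per_node

commodity = cluster_capex(8_000, COMMODITY_NODE)
enterprise = cluster_capex(8_000, ENTERPRISE_NODE)
print(f"commodity: ${commodity:,}  enterprise: ${enterprise:,}  "
      f"ratio: {enterprise / commodity:.0f}x")
```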

Fast growing applications play to Google's strength.

The Storage Bits take

If Google has it all figured out, why the scalability conference? Good question. I think they'd like more scalability: 40,000-node clusters; 4-million-processor data centers; exabyte storage. This isn't just about gluing the bits together to get work done, either. They want lower power consumption, cheaper hardware, faster protocols and better software.

This is more than first-mover advantage. The faster they can grow, the greater their cost advantage over smaller, less nimble competitors. Their ROI brings them cheap capital, which increases their ability to invest in new businesses and more capacity. The higher their volumes, the cheaper growth becomes. A perfect storm.

Google is not invincible, by any means. Their marketing is pathetic. The concentration of power in the hands of three largely untried individuals means a major cock-up is only a question of when, not if. The stagnant share price puts pressure on management to increase returns by cutting back on costly perks. Google's purpose-built infrastructure is also relatively inflexible: they can't just paste on ACID transaction processing.

But that is all in the future. Tomorrow I'm looking forward to hearing about the latest in scalable systems from the industry's leading innovators.

Comments welcome.
