Facebook is offering its users another glimpse under the hood of its back-end operations -- this time with a focus on how all of its data is backed up.
The Menlo Park, Calif.-based company said it employs "a highly automated, extremely effective backup system" designed to support the ever-growing amount of data generated by its global user base of more than one billion members and counting.
More pointedly, Facebook's engineering team boasted that it has "one of the largest MySQL installations in the world," with thousands of databases scattered across multiple regions worldwide.
Eric Barrett, a data plumber and member of Facebook's engineering team, explained further in a blog post on Monday that the infrastructure moves around "many petabytes" each week. (For reference, a petabyte is equal to one thousand terabytes or one million gigabytes.)
"Rather than extensive front-loaded testing, we emphasize rapid detection of failures and quick, automated correction," Barrett wrote. "Deploying hundreds of new database servers requires very little human effort and lets us grow at the pace and flexibility required to support more than a billion active users."
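The "detect fast, correct automatically" approach Barrett describes can be illustrated with a minimal sketch. This is not Facebook's actual tooling; the fleet, the health probe, and the replacement logic below are all hypothetical stand-ins for the idea of a sweep that finds failed servers and swaps them out with no human in the loop.

```python
# Hypothetical fleet state: server name -> healthy flag.
# In a real system, health would come from live probes (e.g. a TCP
# connect or a replication-lag check), not an in-memory dict.
fleet = {"db-001": True, "db-002": True, "db-003": False}

def health_check(server: str) -> bool:
    """Stand-in for a real health probe."""
    return fleet[server]

def auto_correct(server: str) -> str:
    """Replace a failed server automatically -- no human in the loop."""
    replacement = server + "-replacement"
    fleet.pop(server)
    fleet[replacement] = True
    return replacement

def sweep() -> list:
    """One detection pass: find failures and correct them immediately."""
    replaced = []
    for server in list(fleet):
        if not health_check(server):
            replaced.append(auto_correct(server))
    return replaced

print(sweep())  # -> ['db-003-replacement']
```

The point of the sketch is the inversion Barrett describes: instead of trying to prevent every failure up front, the loop assumes failures will happen and makes recovery cheap and automatic.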
More details about Hadoop clusters and long-term storage plans are available on the Facebook Engineering blog.
But one more area of potential interest for everyday users is what happens in a worst-case scenario, when that backed-up data needs to be restored.
Beyond catching backup-infrastructure and MySQL errors, Facebook has built its own tool for "self-service restores," which lets engineers go back and restore older versions of data, much as a PC user might restore personal files and settings.
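The idea behind a self-service restore can be sketched in a few lines. This toy example is not Facebook's implementation; the snapshot store, version numbers, and record shape are all assumptions chosen to show the core mechanic of rolling a record back to a known-good version.

```python
import copy

# Toy snapshot store: version number -> immutable copy of the record.
snapshots = {}

def back_up(version: int, record: dict) -> None:
    """Store a deep copy so later edits can't corrupt the snapshot."""
    snapshots[version] = copy.deepcopy(record)

def restore(version: int) -> dict:
    """Self-service restore: hand back a fresh copy of any old version."""
    return copy.deepcopy(snapshots[version])

profile = {"name": "Alice", "city": "Menlo Park"}
back_up(1, profile)

profile["city"] = "Oops"      # a bad write we later want to undo
back_up(2, profile)

profile = restore(1)          # roll back to the known-good version
print(profile["city"])        # -> Menlo Park
```

The "self-service" part is that the restore path is an ordinary function call: any engineer can invoke it without filing a ticket with a backup team.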
Barrett admitted that "backups are not the most glamorous type of engineering," but they should work so well that no one even thinks to notice them.