Think cloud outages can't be terrifying? You haven't been paying attention. Happy Halloween, everybody ...
In fall 2009, a server failure at Microsoft caused big problems for T-Mobile Sidekick phone owners: They were unable to access their email, calendar info, contacts, and personal data stored in the cloud for a week.
Lessons Learned: If you have important data in the cloud you can't afford to lose, it's worth your time to make a local backup for safekeeping.
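If your provider supports IMAP, that backup can be a short script run on a schedule. Here is a minimal sketch using Python's standard imaplib; the server address and credentials are placeholders you'd swap for your own (ideally an app password, not your main one).

```python
import email
import imaplib
from pathlib import Path

# Placeholders -- substitute your provider's IMAP server and credentials.
IMAP_HOST = "imap.example.com"
USER = "you@example.com"
PASSWORD = "app-password"

backup_dir = Path("mail_backup")
backup_dir.mkdir(exist_ok=True)

# Download every INBOX message to a local .eml file.
with imaplib.IMAP4_SSL(IMAP_HOST) as conn:
    conn.login(USER, PASSWORD)
    conn.select("INBOX", readonly=True)      # read-only: don't alter flags
    _, data = conn.search(None, "ALL")
    for num in data[0].split():
        _, msg_data = conn.fetch(num, "(RFC822)")
        raw = msg_data[0][1]                 # raw RFC 822 message bytes
        subject = email.message_from_bytes(raw)["Subject"] or "(no subject)"
        # .eml files can be reopened by most desktop mail clients.
        (backup_dir / f"{num.decode()}.eml").write_bytes(raw)
        print(f"saved message {num.decode()}: {subject[:40]}")
```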
When Yahoo! Mail underwent a major redesign in October 2013, some users reported that mail items appeared to be missing from their accounts. Ultimately, in December 2013, Yahoo! admitted that there was a failure in 1 percent of its email accounts -- affecting approximately 1,000,000 people -- with some emails going undelivered for weeks or months.
Lessons Learned: A major frustration in this outage was the lack of communication. Yahoo! attempted to minimize the appearance of the disaster when discussing it publicly, while CEO Marissa Mayer was criticized for staying silent. When there's an outage, companies and executives need to get in front of it.
In October 2012, Hurricane Sandy caused lengthy power outages and extensive flooding in the New York City area, taking a number of servers offline in the region. This caused major websites to fail, including The Huffington Post, BuzzFeed, and Gawker.
Lessons Learned: Many backup generators failed during Hurricane Sandy because they were submerged in water; others failed when liquid fuel reserves ran out. Having a robust plan for continued operations through natural disasters is critical.
In September 2015, Amazon Web Services servers failed when they were overloaded with metadata requests from a new DynamoDB feature. As a result, many popular apps and websites (Reddit, Tinder, Netflix, IMDb) were offline for seven hours.
Lessons Learned: Most Amazon clients were caught unprepared for the outage, but not Netflix. That's because the streaming site uses "chaos engineering" to simulate disruptions, preparing itself for the worst.
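Chaos engineering means deliberately injecting failures so fallback paths get exercised before a real outage does it for you. As a toy illustration (this is not Netflix's actual tooling), here's a Python sketch of a fault-injection decorator; fetch_recommendations and the 20 percent failure rate are hypothetical.

```python
import functools
import random

def chaos(failure_rate=0.2, exception=ConnectionError):
    """Randomly raise `exception` to simulate a flaky dependency."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            if random.random() < failure_rate:
                raise exception(f"chaos: injected failure in {func.__name__}")
            return func(*args, **kwargs)
        return wrapper
    return decorator

# Hypothetical downstream service call; in staging, 20% of calls now fail.
@chaos(failure_rate=0.2)
def fetch_recommendations(user_id):
    return ["title-1", "title-2"]            # stand-in for a real service

def recommendations_with_fallback(user_id):
    try:
        return fetch_recommendations(user_id)
    except ConnectionError:
        return ["popular-title"]             # degraded but usable default

if __name__ == "__main__":
    for uid in range(5):
        print(uid, recommendations_with_fallback(uid))
```

Run it a few times and the fallback branch fires on its own, which is the point: you find out now whether the degraded path works, not during the next outage.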
In 2009, PayPal experienced a worldwide system outage lasting approximately five hours. The company handled about $2,000 in online commerce every second at the time, so five hours of downtime (18,000 seconds) suggests the event interfered with roughly $36 million worth of personal and business transactions.
Lessons Learned: Online payment systems can and do fail. Businesses should support multiple payment systems where possible to give customers options during an outage.
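In code, that option can be as simple as a failover wrapper that tries one processor and falls back to another. The sketch below is illustrative only: charge_with_paypal and charge_with_backup are hypothetical stand-ins for real gateway SDK calls.

```python
class PaymentError(Exception):
    """Raised when a processor declines or is unreachable."""

def charge_with_paypal(amount_cents, token):
    # Placeholder for a real PayPal SDK call; here it simulates an outage.
    raise PaymentError("primary gateway unavailable")

def charge_with_backup(amount_cents, token):
    # Placeholder for a second processor's SDK call.
    return {"status": "ok", "provider": "backup", "amount": amount_cents}

def charge(amount_cents, token):
    """Try each payment processor in order until one succeeds."""
    for processor in (charge_with_paypal, charge_with_backup):
        try:
            return processor(amount_cents, token)
        except PaymentError:
            continue                         # try the next processor
    raise PaymentError("all payment processors failed")

print(charge(1999, "tok_demo"))  # served by the backup provider
```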
A sizable outage struck web services provider GoDaddy in 2012. Initially thought to be the work of an Anonymous-connected hacker, it was ultimately traced to corrupted router table data. Service was down for six hours, putting countless websites and email inboxes out of commission.
Lessons Learned: GoDaddy was criticized for having poor infrastructure, poor support, and poor communications throughout the crisis. If you have business-critical data in the cloud, it is imperative that you choose your provider wisely.
In October 2013, a power outage at a Xerox Corp. data center wreaked havoc on America's food stamp program in 17 states for more than 11 hours, removing the spending limits from SNAP EBT cards.
This led to supermarket shelves being "decimated" as shoppers piled their carts full of groceries. Other supermarkets stopped accepting SNAP cards entirely.
Lessons Learned: While relatively rare, long cloud outages can happen in crucial areas. It's important to have a game plan in place for dealing with these outages -- many stores simply turned away hungry customers, inciting panic.
On Aug. 22, 2013, a software bug in the NASDAQ backup servers caused a total failure, taking the stock exchange offline for more than three hours. When it came back online, traders rushed to sell shares of NASDAQ itself, pushing it 5 percent lower.
Lessons Learned: Shortly after the outage, NASDAQ proposed design changes to its Securities Information Processor, "including architectural improvements, information security, disaster recovery plans and capacity parameters."
In June 2010, power failures took down both the main and backup servers at financial services company Intuit (QuickBooks, TurboTax). Businesses lost access to their books for 36 hours. Making matters worse, a second failure occurred at Intuit less than a month later.
Lessons Learned: If you need guaranteed, round-the-clock access to your important financial data, keep an offline copy.
At the end of 2010, Microsoft ran a script to delete a number of dummy Hotmail accounts the company had used for testing purposes. An error in the script instead deleted 17,000 legitimate accounts. Account recovery and service restoration took three days in most cases; longer in others.
Lessons Learned: We all have our preferred email provider. To play it safe, however, it's worth keeping an account with a second provider so you can stay in touch in case of emergency.