How to avoid the amateur cloud


Summary: When people take outdated data center management practices and label them as cloud computing, that's amateur cloud. There's a lot of it about and it won't be easy to spot, making the transition to cloud computing lengthy, troubled and painful.


Someone this week asked me, what's the cloud equivalent of SoSaaS? What do we call it when people take outdated data center management practices and label them as cloud computing, even when they fall far short of what's required? We already have the name, I replied, thanks to the events of the past week: amateur cloud.

There's going to be a lot of amateur cloud in the market for the next few years, and businesses of all sizes will have to be intensely wary of the pitfalls when they go shopping for cloud services. Amateur cloud won't be easy to spot, and often it'll be operated by huge, reputable companies with long, honorable track records in computing and data center operations. In many cases, businesses will knowingly choose amateur-cloud providers for reasons of cost or habit. As a result, the transition to the cloud computing era is going to be lengthy, troubled and painful.

The past week's Sidekick debacle has been an object lesson in the full perils of amateur cloud. The hit to the reputations and brand image of Microsoft and T-Mobile has been massive. To its credit, Microsoft has pulled out all the stops and seems well on the way to recovering the lost user data, which will go a long way towards restoring its cloud credibility. But at what cost? — not only in direct resource costs but also the unseen cost of top-level crisis management that has had to be devoted to the rescue exercise. One silver lining (though scant comfort for those who suffered directly) is that every such failure has the welcome side-effect of driving home to all cloud providers the risk exposure that amateur cloud represents. Many will now be re-examining their vulnerability and tightening up procedures or strengthening their infrastructure, all of which helps raise expected operating norms a few notches higher.

One can't help feeling sorry for venerable, established players like IBM, berated by Air New Zealand's CEO for last week's data center outage, and Hitachi Data Systems, caught up in the incident that caused the Sidekick data loss. As several of the Talkback commenters to my previous post have argued, it's not as if they've done anything different from what they've always done in the past. They weren't even attempting to operate as cloud computing facilities (although Sidekick's users certainly regarded it as a cloud service and trusted it as such).

Yet somehow, in the space of a few short months, the world has changed. Suddenly, every online service is being measured against cloud standards. What was once state-of-the-art in data center operations has become second-rate almost overnight, as though someone flicked a switch. Not so long ago, the IBM team would have won plaudits for bringing an unexpectedly powered-off mainframe transaction system back in less than six hours, especially on a Sunday morning. But the old standard operating procedures are no longer acceptable for today's always-on, ultra-connected, high-throughput cloud computing environments. Today's higher standards mean any failure is treated as exceptional, and top management had better be on the phone from the get-go, apologising profusely for any disruption.

So what are the missing features to look out for when assessing whether you're being offered amateur cloud in place of the real thing? The key attributes that tend to get overlooked by conventional providers fall into three main categories:

  • Cloud-scale operation. A cloud data center has to serve hundreds or thousands of separate businesses, and that requires an infrastructure far beyond what most individual enterprise data centers aspire to. We're talking resilient, high-performance operation that's always on, ready to flex up or down on demand and able to sustain high-volume peak loads, supporting users in their millions across all hours, business cycles, geographies and timezones. Even planned downtime is kept to a minimum, using redundant capacity where possible to apply patches and upgrades. Naturally, there's the highest level of hardened security, and a fully tested disaster recovery plan.
  • As-a-service model. As well as scaling up and down on demand on a pay-for-usage basis, the as-a-service model emphasizes a customer's need to have:

    • visibility into operational performance;
    • enough choice to ensure proper governance of the environment;
    • a fine degree of delegated control over matters such as provisioning, configuration and service level commitments.

    This depends on a highly instrumented and automated service delivery infrastructure that few conventional data center environments support.

  • Web-scale infrastructure. A true cloud operates a pooled, multi-tenant infrastructure, sharing not just the compute platform but every aspect of the infrastructure, including connectivity, service delivery components and integration services. This is a very different environment from the one you'll find in a data center designed to support individual enterprises using single-tenant architectures. It requires that all infrastructure resources be rigorously componentized and flexibly accessible via a web API. Pooling the infrastructure produces huge savings in aggregate cost of ownership and operation, while exposure to the varied demands of many different customers ensures it is tuned and strengthened in the most cost-effective way to deliver optimum security, reliability and performance to all.
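To make the "accessible via a web API" point in the list above concrete, here's a minimal sketch of what programmatic provisioning against such an infrastructure might look like. Everything here is hypothetical and invented for illustration: the endpoint, the payload fields and the tenant identifier. Real providers each publish their own APIs.

```python
import json

# Hypothetical API base URL -- each real cloud provider defines its own.
API_BASE = "https://api.example-cloud.test/v1"

def provision_request(tenant_id, cpus, memory_gb, region):
    """Build (but don't send) the HTTP request a tenant's automation
    might use to provision compute capacity on demand."""
    return {
        "method": "POST",
        "url": f"{API_BASE}/tenants/{tenant_id}/instances",
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({
            "cpus": cpus,
            "memory_gb": memory_gb,
            "region": region,
        }),
    }

# A tenant flexing capacity up: one API call, no ticket, no ops queue.
req = provision_request("acme-corp", cpus=4, memory_gb=16, region="eu-west")
print(req["url"])
# -> https://api.example-cloud.test/v1/tenants/acme-corp/instances
```

The specific fields don't matter; the shape of the interaction does. When every resource in the pool is addressable and controllable over HTTP like this, provisioning, configuration and service-level control can be delegated to the customer rather than handled by a human operations queue.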

One of the biggest problems facing the industry today is that customers expect all this extra availability and customer service at no extra cost. Real-time failover to a fully functional alternative data center doesn't come cheap. Many providers are meeting market price pressures with their fingers crossed behind their backs, hoping to avoid catastrophe until they can afford to invest in higher resilience. Few are prepared to come straight out and tell customers the true cost of adding full redundancy, and even if they did, many customers would still forgo it in favor of cheaper options. Just like the subprime mortgage market, there's a conspiracy of silence around the true long-term risks, as providers and customers alike close their eyes to the underlying fundamentals.

Meanwhile, many of the best-known and most trusted operators of enterprise data centers are running infrastructure that dates from as much as a decade ago. Old-school data center operators, especially the likes of IBM, EDS/HP and Accenture, are lumbered with acres of costly, outdated plant that is rapidly becoming unfit for purpose in the cloud computing era. This leaves them in an unenviable position: outclassed as amateur cloud, yet facing a huge hit to profitability, or write-offs, if they're to upgrade their facilities to meet today's expectations.

That's why I'm warning that we'll see many more failures of so-called cloud services in the next couple of years. Not because of fundamental flaws in the cloud model itself, but because few providers and customers are economically or culturally ready to properly embrace cloud computing. That will perpetuate confusion about what the cloud really is and what standards and architectures are required to implement it. It'll take a succession of amateur cloud failures before we arrive at mainstream acceptance of what constitutes an acceptable standard and best practice for cloud infrastructure. Until that time arrives, many cohorts of unwary or ill-advised enterprises and individuals will find themselves unwitting victims of the amateurism of would-be cloud computing providers.

[Disclosure: over the past few months I've explored the concepts outlined above in several white papers or analyst reports funded by consulting clients, all of whom operate cloud services. I'd like to acknowledge their funding and in particular cite The Acid Test for the On-Demand Data Center (NetSuite), Enterprise Meet Cloud (OpSource) and Redefining Software Platforms (Intuit). These companies contract my services not to influence my opinion but because they know I'm already on their wavelength. Regular readers of this blog will be familiar with my incipient bias in favor of cloud services.]



About Phil Wainewright

Since 1998, Phil Wainewright has been a thought leader in cloud computing as a blogger, analyst and consultant.



  • You are clueless

    Maybe a large company would do enough research to weed out a bad cloud provider from a good one, but the fact is cloud services are being pitched at **individuals**.

    No one will do the required research, and you know it. But as a nice cloud evangelist that's your escape clause: Cloud services are great, and if you have a bad experience, it's because you didn't pick the right provider.

    Forgive me if that is a prime example of self-serving logic.
    • Really? You mean the cloud ends at Google Docs and Facebook?

      Then what is NetSuite? What is Salesforce?

      I think the backs of your ears need to dry up a bit more before you start commenting here, sonny.
      • Right back at you

        Quoting ERP software to me means you obviously didn't take the extra 5 seconds to comprehend that my point was that cloud services are being aimed at individual users, and those users don't have a clue, the time, or the resources to research a good cloud from a bad one.

        So go back to your Salesforce and take a nap. I'm sure the UPS alarm will wake you when the time comes.
  • New Technologies Breed Confusion

    Certainly, the best cloud providers should offer all the niceties enumerated in your blog.

    But companies who bought hosting rather than cloud computing presumably had those choices (high availability is always a matter of how much you're willing to pay), and chose to save money, hopefully with full knowledge of what they were choosing.

    The problem is, once something better is available, people want that -- or think they should immediately have it -- without thinking about what they signed up for.

    IBM is scarcely an amateur anything. It provides public and private clouds galore. But customers who earlier or recently signed up for a different kind of on-line service are inevitably confused about what they've got versus their expectations of what is now possible.

    What we've got on our hands is a huge educational effort, trying to tell all types of customers to check into what they've already got to make sure they will be satisfied with that -- and changing it if necessary, as well as explaining to new buyers what their range of choices (and downsides) now looks like.

    I expect one way we'll tell the difference between amateurs and professionals in the cloud game is what they offer and how they execute. I'm thinking a good proof point for IBM will be whether its iNotes online email offering can provide the high-availability, graceful-failure and rapid-recovery model we expect.
  • Beware of cloud adjectives

    I think you're quite right in pointing out that there is considerable hype around the cloud, and that service providers often slap the cloud moniker onto services that don't offer the basics of cloud technology or service levels.

    However, it's also dangerous to blithely use the terms "fool's cloud" and "amateur cloud" since they can just as easily become epithets that cloud providers hurl at each other in a marketing war. "My cloud's the only real cloud" "No, mine!" "No, mine!"

    For example, I came to this blog from a link in an email I got from OpSource regarding their cloud. Yet, from what I know about their cloud, it is missing technology and characteristics that other vendors - including my company - consider essential to cloud computing. And there are major cloud customers and technology suppliers that would disagree with your assertions as well.

    Which brings us back to the usual discussion, which is "what is cloud computing?" As you can see from the responses to your blog, the readers are still confused - or at least not in agreement.

    Instead of labeling clouds as "real," "fool's" or "amateur" - which are hardly adequate to describe suitability to task - it would be more productive to consider the various types of clouds and the best applications for the technology, since it's unlikely that the "one-size-fits-all" definition of Cloud will persist for long. Especially since enterprises measure cost of operations in many ways other than dollars per CPU-hour.

    Best regards,

    -Eric Novikoff