Exploding three 'myths' about cloud-based development

Summary: Stuff happens, no matter how many 9s are promised. Developers are advised not to get too enamored with the cloud.

Ted Dziuba is not one to pull punches or sugarcoat his analysis of the crises in today's development shops. In a post following the recent Amazon outage, he points to assumptions about cloud computing that may be leading developers astray -- here are three myths he relates:

Service-level agreements are meaningful: Stuff happens, no matter how many 9s are promised, Dziuba says. Don't worry too much about the percentages, and think about what should happen after a downtime incident -- does the service provider scramble to move heaven and earth to fix the problem and let you in on their progress? "What I'm really looking for is communication. I logged a ticket with support, and in six minutes they updated me about the situation, how widespread it was, and an ETA on the fix."

Architecture will save you from cloud failures: As Dave Linthicum says, enterprise architecture is key to having cloud services make sense to the business. But architecture does not protect service consumers from systems downtime, Dziuba reminds us. "No amount of architecture is going to save you from lying virtual hardware."

A virtual machine is an appropriate gift for all occasions: It's the "it" factor of IT these days, right? Dziuba speculates that too much is being loaded onto virtual machines, thus bringing them to their knees. "VMs have their place, sure, but they are by no means the solution to every hardware problem. In my experience, you should use VMs for Web application servers; offline data processing; squid/memcache servers; and one-off utility computing."

There's still a place for hardware in this world, Dziuba reminds us. "It's much more business-efficient to throw money at performance problems than it is to throw code at them, but I guess some of you guys just really like to type."

Talkback

2 comments
  • enterprise architecture

    The enterprise architecture crowd really has no clue about cloud computing.

    I was at an Enterprise Architecture conference a couple of months ago. Those EA gurus cannot agree with each other about basic EA strategies. Just fun to watch.
    FADS_z
  • RE: Exploding three 'myths' about cloud-based development

    I don't really agree with your post in many regards. We had a couple of application servers go down in the Amazon outage, but we were able to keep our application running because we weren't naive about what we did. We applied actual enterprise architecture lessons, rather than what usually passes for enterprise architecture, and had our app online again within an hour.

    The takeaway lesson here is that the cloud is not magical; it won't give your application enterprise architecture juice just by using it. And like any managed hosting environment, the entire SLA matters, not just the 99.999% at the top.

    1. SLAs ARE in fact meaningful. Read them with care. The Amazon SLA didn't offer any kind of stipulations about how they would communicate about the issue. The Amazon SLA was designed to protect them; we knew that up front because we read the whole document and not just the number at the top. The entirety of the document is what's important, not the number. You can't rely solely on an SLA, but don't for a minute claim that it's not a meaningful document. It is.

    2. If you architect your application to be resistant to failures of one particular node, and such that you can just start up more instances in another availability zone or even another cloud service, you have effectively used enterprise architecture to protect yourself. This means making as much as possible stateless, running multiple database servers, using Amazon's snapshotting capabilities, using a standard configuration for your application servers, and keeping hot spares of everything! (A sketch of this failover idea follows below.)

    3. I agree with this. Sometimes you need physical hardware.
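
    A minimal sketch of that failover idea, assuming Python with boto3; the AMI ID, subnet ID, region, and instance type are placeholders, and a real setup would also need health checks and replicated data:

        # Hypothetical sketch: launch replacement app servers from a pre-baked
        # image into a subnet that lives in a different availability zone.
        # AMI_ID and FALLBACK_SUBNET are placeholder values, not real IDs.
        import boto3

        ec2 = boto3.client("ec2", region_name="us-east-1")

        AMI_ID = "ami-12345678"          # standard app-server image (placeholder)
        FALLBACK_SUBNET = "subnet-0abc"  # subnet in another availability zone

        def launch_replacements(count):
            # Stateless, identically configured servers can come up wherever
            # capacity exists, which is what makes this recovery cheap.
            return ec2.run_instances(
                ImageId=AMI_ID,
                SubnetId=FALLBACK_SUBNET,
                MinCount=count,
                MaxCount=count,
                InstanceType="m5.large",
            )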

    "There?s still a place for hardware in this world, Dzuiba reminds us. ?It?s much more business-efficient to throw money at performance problems than it is to throw code at them, but I guess some of you guys just really like to type.?"

    This is not always true. It's the job of a project manager to evaluate the difference in cost, and making a blanket assumption like that is stupid. Let's say you have an algorithm that operates in O(N^3) time and is your application's bottleneck; you currently have 10,000 records as input to that algorithm, but over the next year expect to have 10,000,000 records as input. Let's say you can solve the same problem with an algorithm that operates in O(N log N) time. The manager would then calculate how many hours it would cost to code the new solution and what the difference in infrastructure cost is to support the two (there's a reason for algorithm analysis and profiling after all, we don't do it for fun!). Here is how that works:

    10,000^3 = 1,000,000,000,000 steps (currently)
    10,000,000^3 = 10^21 steps (if you meet pro forma projections)

    If you replace that previous algorithm with the new one:

    10,000,000 * log10(10,000,000) = 10,000,000 * 7 = 70,000,000 steps

    Switching out the algorithm will let you keep running for a year or more on the same or less hardware than you currently have. I know you probably understand this kind of example, but some people don't, which is the problem with that blanket statement.
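
    For anyone who wants to verify the arithmetic, a few lines of Python reproduce the figures above (base-10 logs, as used here; any other base only changes the constant factor):

        # Compare step counts for the two hypothetical algorithms at the
        # current and projected input sizes from the example above.
        from math import log10

        for n in (10_000, 10_000_000):
            cubic = n ** 3          # O(N^3) algorithm
            nlogn = n * log10(n)    # O(N log N) replacement
            print(f"n={n:>12,}  n^3={cubic:.3e}  n*log10(n)={nlogn:.3e}")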
    snoop0x7b