Stratus asks: Is continuous availability possible in the cloud?

Summary: Stratus' Dave LeClair talks about the planning needed to take cloud computing all the way up to continuous availability.

Dave LeClair, Senior Director of Cloud Strategies at Stratus Technologies, dropped by a while ago to discuss how the company was doing and share some thoughts about cloud computing and availability. As always, it was interesting to think about a topic not always mentioned in cloud computing implementations — workload availability.

Quick Stratus Technologies update

LeClair started the conversation with a quick review of how Stratus Technologies has fared since our last conversation. The following bullets summarize that portion of the discussion:

  • Stratus' hardware business is doing well. The company recently released the 6th generation of its Intel-based ftServer line. The ftServer 2700, 4700 and 6400 systems deliver up to 4 times the performance of previous-generation systems while maintaining the capability to deliver 99.999% uptime. That works out to an average of no more than roughly 0.9 seconds of downtime per day, 26 seconds of downtime per month and 5 minutes, 15 seconds of downtime per year (see the sketch after this list). LeClair pointed out that these new systems feature Intel Xeon® E5 processors and integrated networking technology.
  • The company's software business is evolving as the capabilities of its Avance and everRun (formerly a Marathon Technologies product) offerings are merged into a single future product.
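
For readers who want to check the five-nines arithmetic, here is a minimal sketch. The 99.999% figure is Stratus' claim; the 30-day month and 365-day year are my own simplifying assumptions.

```python
# Back-of-the-envelope downtime budget for a stated availability level.
# Assumes a 30-day month and a 365-day year for the conversions.

def downtime_budget(availability_pct: float) -> dict:
    """Return the allowed downtime, in seconds, per day, month and year."""
    down_fraction = 1.0 - availability_pct / 100.0
    seconds_per_day = 24 * 60 * 60  # 86,400
    return {
        "per_day": down_fraction * seconds_per_day,
        "per_month": down_fraction * seconds_per_day * 30,
        "per_year": down_fraction * seconds_per_day * 365,
    }

if __name__ == "__main__":
    budget = downtime_budget(99.999)
    print(f"Per day:   {budget['per_day']:.2f} s")          # ~0.86 s
    print(f"Per month: {budget['per_month']:.1f} s")         # ~25.9 s
    minutes, seconds = divmod(budget["per_year"], 60)
    print(f"Per year:  {minutes:.0f} min {seconds:.0f} s")   # ~5 min 15 s
```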

Stratus suggests cloud computing users design for outages

The conversation then turned to Stratus' current discussion points, which center on how cloud computing users are best served if they consider outages in processing, networking and storage during the design phase of their cloud workloads.

As expected, he pointed to recent high-profile outages experienced by the major suppliers of cloud infrastructure, platform and software services. Some customers were hurt because they hadn't considered where and how their workloads would fail over to other resources when a failure occurs. The key point LeClair wanted to get across is that customers should really be thinking about the business requirements for each of their cloud workloads and about where redundant hardware and software must be deployed to address potential outages.

To that end, LeClair discussed Stratus' work with the OpenStack community to help implement what he described as "software defined availability."

Snapshot analysis

Stratus is one of the few computer companies still standing that offer hardware- and software-based continuous processing/non-stop/fault-tolerant computing environments. Most suppliers have chosen to implement availability in layers of software using various types of processing, storage and network virtualization technology. While Stratus makes it possible to use those types of technology, it is one of only two remaining suppliers of fault-tolerant hardware: systems that have redundant components built in and firmware-level failover control.

While software redundancy is fine for stateless, Web-based application architectures, it isn't always the best choice for traditional commercial workloads that simply cannot be allowed to fail. Processes can take too long to restart, or transactions can be lost during the failover period. In those cases, Stratus would point out, fault-tolerant hardware is the best choice.
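
To make that distinction concrete, here is a minimal sketch, not Stratus' implementation, of why stateless requests ride out software failover more gracefully than in-flight transactions: a stateless read can simply be retried against whichever replica answers, while a non-idempotent write retried blindly after a failover can be applied twice or lost. The REST endpoints and idempotency-key scheme below are hypothetical illustrations.

```python
# Illustration only (not Stratus' method): stateless reads tolerate failover
# because retries are harmless; stateful writes need extra care because the
# client cannot know whether an interrupted request was applied.
import uuid

import requests  # hypothetical service endpoints are used below


def fetch_catalog(base_url: str) -> dict:
    """Stateless read: safe to retry as many times as a failover requires."""
    for _ in range(5):
        try:
            return requests.get(f"{base_url}/catalog", timeout=2).json()
        except requests.RequestException:
            continue  # another replica should eventually answer
    raise RuntimeError("catalog unavailable after retries")


def transfer_funds(base_url: str, account: str, amount: float) -> None:
    """Stateful write: a blind retry after failover risks posting the debit
    twice. Reusing one idempotency key across retries lets the surviving
    replica deduplicate -- but only if the backend was built to honor it."""
    key = str(uuid.uuid4())  # one key for every retry of this transfer
    for _ in range(5):
        try:
            requests.post(
                f"{base_url}/transfer",
                json={"account": account, "amount": amount},
                headers={"Idempotency-Key": key},
                timeout=2,
            ).raise_for_status()
            return
        except requests.RequestException:
            continue
    raise RuntimeError("transfer outcome unknown; manual reconciliation needed")
```

Stratus' argument, as described above, is that fault-tolerant hardware sidesteps the problem: the redundant components keep the same process running through a failure, so neither the retry logic nor the reconciliation path gets exercised in the first place.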

I've always thought Stratus had a number of very good points when they talk about planning for availability along with planning for needed capacity, performance, management and security. I've been known to suggest Stratus to clients when it appeared that they were just accepting the availability claims made by processing virtualization suppliers, such as Microsoft, VMware and Citrix, when workload migration was discussed.

Stratus really needs to make more noise about this topic as organizations increasingly look to the cloud. Unfortunately, the company's messages are being drowned out by those offered by bigger competitors.

Topic: Cloud

About

Daniel Kusnetzky, a reformed software engineer and product manager, founded Kusnetzky Group LLC in 2006. He's literally written the book on virtualization and often comments on cloud computing, mobility and systems software. In his spare time, he's also the managing partner of Lux Sonus LLC, an investment firm.


Talkback

2 comments
  • Wonderful. Too bad the infrastructure isn't as good.

    So their service is 99.99995% reliable. Which I suppose qualifies for 6 sigma reliability, and is probably better than what you can achieve in-house. Can they do it cheaper than you can? Can you dump them and go with a different service or did you get locked in? And if you're locked in, are you at their mercy (non-existent, this is a business environment) for any price increases?
    You still need power. Here in southern NH, that's about 98.0822% reliable.
    You still need connectivity. Good luck trying to find out what your ISP reliability rate is for any time or location. Let's just assume an average reliability rate of 96%. By the way, if your power is out, it's probably out for your ISP too.
    Dr_Zinj
  • Now for a comment ABOUT the topic.

    Unless someone develops a system of "automated clairvoyance", any system that is designed to work with input data, software to process the data, temporary working storage, and output data at a remote location and NOT in the device in question CANNOT OPERATE during an outage of the physical, software, or politico-economic (i.e. government-ordered disconnection or failure of Company A to pay bill to Company B) linkage between those two points or any intermediate points (unless an alternate route exists).

    If the client device has a copy of at least a recent version of the data required, and at least a "crippled" version of the software, some partially useful work can be done, but that work's output may have to be revised ("synced") when connections are restored to normal. But all the hoopla about "cloud computing" ignores that reality. Keep nothing on your client device! Link to the software on a server, don't install any (except a browser) on your machine! Keep the most up-to-date, and ONLY copy, of your data on a server! The internet and all the servers you need will ALWAYS be available, so what's the problem?

    That goes back to pre-PC and pre-Mac days (might I suggest the abbreviation "POC", or "personally operated computer," for any device resembling a desktop or laptop PC or a desktop or laptop Apple device? If we adopt this generic term, we can save a lot of space) when the ONLY place to put large amounts of storage (often less than what a typical POC contains today) and programming of a useful complexity was in a mainframe, or at least a minicomputer (the latter often being used as a room-sized POC). All interfacing was in text (punch cards, teleprinter on phone line, or CRT on phone line) or audio response on a phone line (tones in, synthesized voice out). Even the predecessors of email were stored on the server (called a "host" back then) and only viewed or printed at a remote terminal.

    Do we really want to go back to those days? Sure, the internet is available at rates that are connect-time-insensitive, compared to long distance phone dialing, but it is NOT always available everywhere you want to go; not even always in your home or office (cables get cut, servers go out, ISP bills go past due), and not always to your partner in a particular transaction (the above could happen to them also). That advanced processor and hard drive or SSD was put in your machine for a reason, and it is a waste to use it only as input and output for a program running on someone else's machine! That is like buying a Ferrari and towing it with a team of horses!
    jallan32
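
The first comment above makes a point worth quantifying: end-to-end availability is roughly the product of the availability of every link in the chain. A quick sketch using the commenter's own figures follows; the multiplication assumes independent failures, which, as the commenter notes, power and ISP outages often are not.

```python
# End-to-end availability is (roughly) the product of each component's
# availability, assuming independent failures -- optimistic, since a power
# outage often takes the local ISP connection down with it.
cloud_service = 0.9999995  # the "99.99995%" figure cited in the comment
local_power   = 0.980822   # the commenter's estimate for southern NH
isp_link      = 0.96       # the commenter's assumed ISP availability

end_to_end = cloud_service * local_power * isp_link
print(f"End-to-end availability: {end_to_end:.2%}")                # ~94.16%

hours_down_per_year = (1 - end_to_end) * 365 * 24
print(f"Expected downtime: about {hours_down_per_year:.0f} hours per year")  # ~512
```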