Neverfail 6.6 targets affordable continuous availability

Summary: Neverfail Group announced a product update that adds important capabilities to its product portfolio. The company must now help the market understand why Neverfail and not some other approach.

Neverfail just released the newest version of its high availability product, cleverly called "Neverfail 6.6". The focus of the release is to help organizations build continuous availability into their overall IT infrastructure. I've spoken with the company quite a number of times over the years and am always impressed with its ability to offer software that implements a non-stop, doesn't-fail computing environment.

Here's what Neverfail has to say about version 6.6:

Neverfail®, a leading global software company specializing in affordable continuous availability and disaster recovery solutions, today announced Neverfail version 6.6. Neverfail provides a single software-based solution for any Windows application that enables end users to stay continuously connected to business critical applications. This latest release extends the Neverfail Continuous Availability Suite to support the following features:

  • Virtual Availability Director: Enables vSphere administrators to manage their Neverfail servers from inside the vSphere management console. This advancement provides companies with a single interface for managing virtualization and availability together, simplifying management in virtualized environments.
  • SRMXtender: Extends VMware Site Recovery Manager (SRM) to provide support for tier 1 servers requiring near zero data loss and absolute minimum downtime. This includes support for applications running in physical, virtual and heterogeneous environments, as well as environments running different hypervisors. This allows site recovery plans to include more than just VMware guest machines and cover the broader infrastructure.
  • Site-level Disaster Recovery: Adds the ability to failover or switchover all the applications running at one site with a single click. Site-level controls make it possible to quickly and easily failover to a secondary site should a disaster occur, as well as efficiently perform a switchover and switchback to support routine maintenance or scheduled site level downtime. By providing site level control of planned and unplanned downtime, customers can benefit from much greater flexibility to manage their complex IT infrastructures.
  • Business Applications: Simplifies the setup and management of business applications. Business applications are a group of servers that together provide a single business process, such as messaging or a SharePoint web farm. The relationships between servers, which control the order in which they fail over, are now even easier to configure and use.
  • Systems Management: Supports monitoring of Neverfail servers and protected applications from the Microsoft System Center Operations Manager (SCOM) console. Also includes support for Simple Network Management Protocol (SNMP), which means administrators can now integrate Neverfail with their systems management tool of choice, such as HP OpenView, IBM Tivoli or CA Unicenter.
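The "Business Applications" feature described above amounts to failing over a group of related servers in a configured order. The sketch below illustrates that idea in miniature; all names and the callback interface are illustrative assumptions for this article, not Neverfail's actual API.

```python
# Hypothetical sketch: failing over a "business application" group of
# servers in dependency order. Names are illustrative only.

def failover_group(servers, failover_one):
    """Fail over each server in the group in its configured order.

    servers: list of (name, order) tuples; lower order fails over first.
    failover_one: callable that performs the failover for one server.
    """
    for name, _ in sorted(servers, key=lambda s: s[1]):
        failover_one(name)

# Example: a SharePoint farm where the database must come up before the
# application tier, which in turn must precede the web front end.
farm = [("web-frontend", 3), ("app-tier", 2), ("sql-backend", 1)]
order = []
failover_group(farm, order.append)
print(order)  # ['sql-backend', 'app-tier', 'web-frontend']
```

The point of ordering is that downstream tiers (the web front end) are useless until their dependencies (the database) are available, so the failover sequence must respect those relationships.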

Neverfail version 6.6 is a free upgrade for current Neverfail customers using previous releases of version 6.x software. For customers who upgrade from previous versions, the new Site Level Management, Virtual Availability Director, System Center Operations Manager support and SNMP support are all free of charge. For new customers, Neverfail is priced per server on the application being protected for physical environments; for virtual environments it is priced per host socket. SRMXtender is priced at $2,995 per protected site for both existing and new customers. Neverfail version 6.6 is available now. Existing customers, prospective customers and partners interested in more information should contact Neverfail at sales@neverfailgroup.com.

Snapshot analysis

High availability is a rather tricky business. It can be accomplished in a number of ways and at a number of levels of the Kusnetzky Group Model of virtualization technology (see Sorting out the different layers of virtualization or Virtualization: A Manager's Guide (O'Reilly Media) for more information on the model). Let's look at each of them in turn, starting at the top layer of the model:

  1. Multiple instances of an application could reside on different systems (physical, virtual or cloud) that are accessed using access virtualization technology, such as that offered by Citrix or Microsoft. If one instance fails, staff can simply connect to a surviving instance. Since this approach is not automatic, it can't really be thought of as "continuous."
  2. A properly designed workload could integrate server-side application virtualization's workload management function. As with the first approach, multiple instances of the application logic could be executing on different systems (physical, virtual or cloud). If one instance fails, the workload management functions could resubmit task requests to another instance. While this approach appears continuous to staff, it does require IT developers to have used the application virtualization features properly or failover logic might not execute properly.
  3. Several forms of processing virtualization could be utilized to create a single system environment based upon a number of independent systems (physical, virtual or cloud).
    • Single system image clustering products, such as HP's OpenVMS Cluster or Oracle Solaris Cluster could be utilized, if the application was developed on the appropriate hardware/operating system combination. This approach would not be workable for Windows-based applications.
    • HA/Failover software, such as that offered by Symantec Veritas or Microsoft, would allow applications to be developed that fail over without losing staff-entered data. As with several of the other approaches, applications must be architected properly for continuous availability to really work.
    • System virtualization products such as Stratus' Avance or Neverfail Group's Neverfail 6.6, offer similar capabilities, but everything is totally automatic. Developers simply get continuous availability as a result of using the product.
    • A combination of virtual machine software, virtual machine movement software and orchestration/automation software could be used to prevent a virtual machine image from failing. Virtual machines could be moved to another machine if the current host starts to fail. This approach may respond too slowly to meet the requirements of some critical applications.
  4. Organizations having a requirement for continuous availability that executes at hardware speeds could install hardware, such as that offered by Stratus, that uses redundant components inside a single computer rather than separate computer systems to create an extremely reliable environment. This approach also offers a few other benefits: since these configurations are really a single computer rather than a cluster of computers, only a single operating system, application framework, database engine and other server-side software components need be purchased. Furthermore, only a single system need be managed by IT.
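Most of the software approaches above (HA/failover products, VM movement plus orchestration) rest on the same primitive: a standby node watches for heartbeats from the active node and triggers failover when they stop. Here is a minimal sketch of that detection logic; the class name, timeout, and interface are illustrative assumptions, not any vendor's implementation.

```python
# Minimal sketch of heartbeat-based failure detection, the mechanism
# underlying most HA/failover products. Names and thresholds are
# illustrative assumptions only.
import time

class HeartbeatMonitor:
    def __init__(self, timeout_seconds=5.0):
        self.timeout = timeout_seconds
        self.last_beat = time.monotonic()

    def beat(self):
        # Called whenever a heartbeat arrives from the active node.
        self.last_beat = time.monotonic()

    def peer_failed(self, now=None):
        # True once no heartbeat has arrived within the timeout window;
        # this is the condition that would trigger a failover.
        now = time.monotonic() if now is None else now
        return (now - self.last_beat) > self.timeout

monitor = HeartbeatMonitor(timeout_seconds=5.0)
monitor.beat()
print(monitor.peer_failed())                              # False
print(monitor.peer_failed(now=monitor.last_beat + 10.0))  # True
```

The hard part in real products is not this check but what follows it: avoiding split-brain (both nodes believing they are active) and replaying or replicating in-flight data so nothing entered by staff is lost, which is why the approaches above differ so much in complexity and cost.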

What's really confusing for most business decision makers is that similar results can be achieved through the use of so many different types of technology. All of the suppliers are using similar language to describe their products. IT has to really understand how each of these approaches works to select the best one. Unfortunately, most IT departments have neither the time nor the staff expertise, so an approach is typically selected based upon which vendor has the best marketing or has reached the decision maker first.

Neverfail Group offers a simple approach that would be useful to many organizations, if they knew it existed. The key to product selection is knowing what is available; the strengths and weaknesses of each approach; and which approach would be most cost effective in a given environment. Neverfail has its work cut out for it to succeed in the market. The technology is good and has been successfully deployed worldwide. Will a given IT decision-maker know anything about Neverfail and its products? That's another question.

---

Note: Several of the suppliers mentioned in this article subscribe to Kusnetzky Group services. Their products and services are mentioned based upon the product capabilities rather than any relationship Kusnetzky Group has with the supplier.

Topics: Outage, Cloud, Virtualization, Symantec, Software, Servers, Microsoft, Hewlett-Packard, Hardware, VMware

About

Daniel Kusnetzky, a reformed software engineer and product manager, founded Kusnetzky Group LLC in 2006. He's literally written the book on virtualization and often comments on cloud computing, mobility and systems software. In his spare time, he's also the managing partner of Lux Sonus LLC, an investment firm.


Talkback

  • Sounds like a crude implementation of what Tandem Non-Stop was doing 35 years ago, except the Tandem system was a true non-stop, fault-tolerant platform.
    adornoe
    • RE: Neverfail 6.6 targets affordable continuous availability

      @adornoe@... The thing with fault-tolerant platforms is, as the author points out, that "these configurations are really a single computer rather than a cluster of computers." If there is a problem within that computer, then the FT system provides no benefit... the broken app/OS on the running computer still needs to be repaired or recovered. Neverfail's approach is more akin to a clustered solution, with two similar computers being able to hand the active application role back and forth freely.
      JoshMaz
      • The Tandem system was a single computer, and it was also capable of being networked for more fault tolerance across a huge network.

        A single Tandem computer would have fault tolerance within the same box, extended to many boxes.

        Read up (from Wikipedia): http://en.wikipedia.org/wiki/Tandem_Computers

        "Tandem Computers, Inc. was the dominant manufacturer of fault-tolerant computer systems for ATM networks, banks, stock exchanges, telephone switching centers, and other similar commercial transaction processing applications requiring maximum uptime and zero data loss. The company was founded in 1974 and remained independent until 1997. It is now a server division within Hewlett Packard.

        Tandem's NonStop systems use a number of independent identical processors and redundant storage devices and controllers to provide automatic high-speed "failover" in the case of a hardware or software failure.

        To contain the scope of failures and of corrupted data, these multi-computer systems have no shared central components, not even main memory. Conventional multi-computer systems all use shared memories and work directly on shared data objects. Instead, NonStop processors cooperate by exchanging messages across a reliable fabric, and software takes periodic snapshots for possible rollback of program memory state.

        Besides handling failures well, this "shared-nothing" messaging system design also scales extremely well to the largest commercial workloads. Each doubling of the total number of processors would double system throughput, up to the maximum configuration of 4000 processors. In contrast, the performance of conventional multiprocessor systems is limited by the speed of some shared memory, bus, or switch. Adding more than 48 processors that way gives no further system speedup. NonStop systems have more often been bought to meet scaling requirements than for extreme fault tolerance. They compete well against IBM's largest mainframes, despite being built from simpler minicomputer technology.

        Besides fault tolerance and scaling, NonStop machines also featured an industry-leading implementation of a SQL relational database, and industry-leading support for networking and for geographically dispersed systems."

        So, it seems that the Neverfail system pales in comparison to what Tandem systems were able to do decades ago, and Neverfail still has a long way to go before it can be truly fault tolerant in the same scope as the Tandem systems of the past. I shouldn't be calling them systems of the past, since there are many Tandem systems still in operation today, and HP (which bought Compaq, which had bought Tandem) has continued with many of the same ideas, but with the technology of the present.
        adornoe