'Should we move our business processes into the cloud?' That is the big question hanging over the heads of many — if not most — CTOs, CIOs and IT managers. Further complicating the issue is the vague definition of the cloud — is it Software as a Service (SaaS), is it AJAX-based applications, is it hosted management solutions, is it hosted applications? The truth is that the cloud is all of that and much more. The cloud has transformed into an ecosystem of applications and services that could potentially address any business technology requirement.
The easy part is identifying what's available — after all, the number of service providers grows on a daily basis and most applications are already serviced by a number of providers. The hard part is deciding whether or not to make the move to the cloud — a decision that should not be taken lightly.
Make no mistake: moving to the cloud is primarily a business decision. As such, it should be judged by the same criteria as any new business process. Ultimately, the final decision will be based upon the metrics of ROI (Return On Investment), performance, sustainability and suitability to task. The best way to determine the overall suitability of cloud-based services is via a pilot project, where lessons learned can become part of the decision process and be used to build a template for future projects.
IT managers will need to do a little homework before launching that pilot project, as there are several tasks that should be performed before making the leap. Some of these fall under 'business best practices', while others fit more into the realm of technical analysis.
Managers should be prepared to do the following:
- Audit the target applications and business processes to create a cost-benefit-risk analysis that compares a traditional client/server solution to a cloud-based solution.
- Audit the cloud services provider, including an assessment of geographic redundancy, packet transport performance, latency and service guarantees.
- Audit the business's own ISPs, including performance at connecting points, failover capabilities and guaranteed throughput rates to and from the cloud services provider.
- Monitor and frequently evaluate service and performance elements.
Identifying a cloud migration strategy is no easy task. IT managers will have to take many things into consideration to determine what process or service to tackle first. Questions asked should include:
- How many users will be impacted?
- Will any customisation be needed?
- What infrastructure is in place to support a cloud service?
- Will any equipment upgrades or changes be needed?
- What level of availability is required?
- Will this be a single-site or multi-site pilot project?
- What level of integration is required?
- Will staffing levels be impacted?
Although this is a good start, there are likely to be other questions that depend on the intricacies of the specific business practice affected. Those questions will usually emerge from discussions with users, management and technical staff.
For most businesses, the next decision comes down to selecting an external host, or deciding to self-host. Self-hosting delivers the ultimate in control, but can also be expensive. Those wishing to self-host may have to invest in the complete infrastructure, development tools and staffing to build the solution. Self-hosting may be a good option for a business seeking to move a highly customised application into the cloud for access by multiple offices or a mobile workforce. But beyond that, self-hosting is generally too costly for traditional IT services.
For many businesses, the first services to move to the cloud are email, customer relationship management (CRM) and IT managed services. Although each of those cloud solutions has its own unique challenges, it's safe to say that many of the metrics and measurements that can be used are much the same.
Service Level Agreements
One of the first steps for choosing a cloud service provider is to evaluate the level of service offered and the guarantees behind that service. That information is contained in a Service Level Agreement, or SLA. Evaluating SLAs is a thankless task for many IT managers, as most are filled with legalese and contractual language that can make it difficult to quantify exactly what a vendor is offering.
A further complication is that most SLAs are written to protect the vendor rather than the customer. SLAs are often used to provide service providers with a defensive shield against litigation, while offering customers minimal assurances. That said, SLAs can still be a powerful tool for IT managers looking to choose a cloud provider and arrange for the best available services.
IT managers need to focus on three areas with SLAs: data protection, continuity and costs. Arguably, data protection is the most important element to understand. IT managers will need to make sure that it's clearly defined who has access to the data, and what protections are in place. At first sight, determining protection levels seems straightforward, but there are some hidden issues to be aware of and IT managers must perform due diligence by addressing those issues.
It all comes down to who ultimately has control of the customer's proprietary data, and how that data is accessed. SLAs should outline the vendor's infrastructure and what services are used to provide persistent access to the required applications and data sets. No vendor will guarantee access 100 per cent of the time, simply because there are issues beyond their control and some maintenance chores will require downtime. At best, most service providers offer an assurance of 99.5 per cent uptime. Even so, IT managers will need to ask 'what happens if service is interrupted?'
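The arithmetic behind an uptime guarantee is worth making concrete before signing anything. As a minimal sketch (the helper name here is ours, not from any SLA), a 99.5 per cent guarantee still leaves a surprisingly large downtime budget:

```python
# Convert an uptime guarantee into a concrete downtime budget.
# downtime_budget_hours is an illustrative helper, not a standard term.

def downtime_budget_hours(uptime_percent: float, period_hours: float = 30 * 24) -> float:
    """Hours of permitted downtime per billing period for a given uptime guarantee."""
    return period_hours * (1 - uptime_percent / 100)

# A 99.5 per cent guarantee allows roughly 3.6 hours of downtime in a
# 30-day month; even 99.9 per cent allows about 43 minutes.
print(round(downtime_budget_hours(99.5), 1))   # 3.6
print(round(downtime_budget_hours(99.9), 1))   # 0.7
```

Numbers like these make it easier to decide whether the compensation clauses in an SLA are proportionate to the disruption an outage would actually cause.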
When evaluating an SLA it's helpful to ask these questions:
For outages ask:
- How is a service outage defined?
- What tools are in place to determine the severity of the outage?
- How is the customer credited or compensated for an outage?
- What level of redundancy is in place to minimise outages?
- Will there be a need for scheduled downtime?
- What alternative methods of access are offered if there is an outage?
- Is there an incident-reporting system?
- Are access/usage reports available?
For security ask:
- How is the data encrypted?
- What level of account access is present and how is access controlled?
- Is the data always contained only on the vendor's systems?
- Does the vendor use any subcontractors or rely on any partnerships to process the data?
- Is the data backed up — and if so, where are the backups stored?
- Does the vendor use a secure datacentre?
- What happens to copies of the data if the relationship is terminated, or if the vendor fails?
- Will the vendor provide archival copies of the data to the customer?
- How will the vendor react to legal inquiries about a customer's data set?
- What types of auditing tools are available?
- How are compliance needs addressed?
When it comes to costs, ask:
- What is the fee structure?
- Are there hidden costs?
- Are there add-on costs or fees for support?
- Are charges based upon traffic, usage or storage limits?
- Are there taxes or other external fees?
- Is there any type of price protection?
- Are there licensing fees above and beyond the service fees?
With these questions answered, the selection of a cloud services provider should become easier, and the stage set for expectations and costs surrounding the chosen service.
Measurement & performance
Validating vendors' answers and demonstrating that goals have been achieved requires monitoring and performance measurement. Although determining the overall impact of a service on business processes may be difficult because of the human element, measuring raw performance is relatively straightforward thanks to the tools and services readily available to the modern enterprise.
IT managers can turn to the Keynote Internet Testing Environment (KITE) and Internet Health Report to measure performance. Keynote maintains more than 3,000 servers and PCs at 200 sites in 59 countries, which are used to monitor real-world internet performance for the Internet Health Report. The performance metrics are based upon actual traffic on the web. Administrators can use Keynote's services to monitor uptime, latency and packet loss.
More specific information is available thanks to KITE, a desktop application that can be used to monitor the performance of individual websites. Combining the information from the Internet Health Report with the metrics uncovered by KITE should give a good indication of the performance offered by a hosted service provider and should help to pinpoint any bottlenecks. Keynote offers most of its services for free, aiming to entice users to try the company's more advanced paid-for products.
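Teams that prefer to gather their own numbers can sketch a basic availability probe in a few lines of Python. This is a home-grown illustration, not Keynote's tooling; the function names are ours, and a real deployment would run the probe on a schedule (via cron, say) and persist the samples:

```python
import time
import urllib.request

def probe(url: str, timeout: float = 10.0) -> tuple[bool, float]:
    """One HTTP request: returns (reachable, elapsed_seconds)."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout):
            return True, time.monotonic() - start
    except OSError:
        # URLError (and its subclasses) derive from OSError.
        return False, time.monotonic() - start

def summarise(samples: list[tuple[bool, float]]) -> dict:
    """Roll probe results up into availability and mean latency."""
    ups = [latency for ok, latency in samples if ok]
    return {
        "availability_pct": 100.0 * len(ups) / len(samples),
        "mean_latency_s": sum(ups) / len(ups) if ups else float("inf"),
    }

# Synthetic history -- in practice, append probe(url) results on a schedule.
history = [(True, 0.21), (True, 0.35), (False, 10.0), (True, 0.28)]
print(summarise(history))
```

Comparing the availability figure against the SLA's uptime guarantee gives an independent check on whether the vendor is meeting its commitments.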
Of course, Keynote is not the only game in town, but finding other players will require narrowing down the performance metrics to monitor. For example, DynaTrace offers a suite of cloud performance monitoring tools, although the company's toolset is aimed squarely at the Java and .NET crowd and is used to measure application performance. Another testing option comes from CapCal, which offers tools that simulate user access to cloud-based applications to measure performance under varying loads. CapCal offers a 20-user stress test service for free, while more advanced tests come at a price.
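A minimal load test along the same user-simulation lines can be sketched with a thread pool. This is an illustration rather than anything CapCal ships; `request_fn` stands in for whatever call exercises the cloud application:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def load_test(request_fn, users: int, requests_per_user: int) -> list[float]:
    """Simulate concurrent users, each issuing sequential requests.
    Returns the elapsed time of every request."""
    def one_user(_) -> list[float]:
        times = []
        for _ in range(requests_per_user):
            start = time.monotonic()
            request_fn()
            times.append(time.monotonic() - start)
        return times

    with ThreadPoolExecutor(max_workers=users) as pool:
        per_user = pool.map(one_user, range(users))
    return [t for user_times in per_user for t in user_times]

# Example: 5 simulated users, 10 requests each, against a stand-in request.
# In practice request_fn might wrap an HTTP call to the application under test.
timings = load_test(lambda: time.sleep(0.01), users=5, requests_per_user=10)
print(f"{len(timings)} requests, worst {max(timings) * 1000:.0f} ms")
```

Varying `users` across runs shows how response times degrade under load, which is the core question a tool like CapCal answers at much larger scale.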
Of course, administrators can develop their own monitoring and testing tools, or rely on the tools provided by the cloud services vendor. For example, most administrators will find testing cloud-based storage providers relatively straightforward, simply by creating batch files that measure the speed of file uploads and downloads.
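The batch-file idea can be expressed as a small timing harness. In this sketch a local file write stands in for a real upload; in practice `transfer_fn` (our name) would wrap the provider's own upload or download command or API call:

```python
import os
import tempfile
import time

def timed_transfer(transfer_fn, payload_bytes: int) -> float:
    """Run a transfer callable and return its effective rate in MB/s."""
    start = time.monotonic()
    transfer_fn()
    elapsed = time.monotonic() - start
    return (payload_bytes / 1_000_000) / elapsed

payload = os.urandom(5_000_000)          # 5 MB of test data

def fake_upload() -> None:
    # Local stand-in: writing to a temp file instead of a cloud endpoint.
    with tempfile.NamedTemporaryFile() as f:
        f.write(payload)
        f.flush()

print(f"{timed_transfer(fake_upload, len(payload)):.1f} MB/s")
```

Running the same harness at different times of day, and from different offices, quickly exposes whether throughput to the provider is consistent.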
The trick to testing a cloud services provider comes down to knowing what to measure. For most, that can be defined as packet transmission speed, packet loss and response latency — all of which can be determined using tools that monitor traffic. In some cases, especially when voice or video data is involved, administrators may have to measure elements such as jitter, frame rates and throughput.
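Once the raw samples are in hand (ping round-trip times, for instance), loss and jitter reduce to simple arithmetic. A minimal sketch, with helper names of our own; this uses the mean absolute difference between consecutive samples as a basic jitter estimate:

```python
def jitter_ms(latencies_ms: list[float]) -> float:
    """Mean absolute difference between consecutive latency samples --
    a simple estimate of jitter."""
    diffs = [abs(b - a) for a, b in zip(latencies_ms, latencies_ms[1:])]
    return sum(diffs) / len(diffs)

def packet_loss_pct(sent: int, received: int) -> float:
    """Percentage of packets sent that never arrived."""
    return 100.0 * (sent - received) / sent

samples = [20.0, 22.0, 21.0, 35.0, 21.0]   # e.g. ping round-trip times in ms
print(jitter_ms(samples))                  # 7.75
print(packet_loss_pct(100, 97))            # 3.0
```

A single slow outlier (the 35 ms sample above) inflates jitter noticeably, which is exactly why voice and video services are so sensitive to it.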
Ideally, most administrators will turn to a combination of tools to measure performance for their specific cloud implementation. The important thing to remember is not only to measure, but also to compare those performance elements against the system that was replaced and to report on the results frequently.