Why the cloud-bursting outlook is unsettled

Extending a datacentre using the cloud is an appealing approach, but it's still in its infancy, says Lori MacVittie
Written by Lori MacVittie, Contributor

People like the idea of cloud-bursting — extending the capacity of a datacentre using external cloud resources. But extending resources is one thing; using them more effectively is quite another, says Lori MacVittie

The ability to perform on-demand cloud-bursting successfully has been attracting a lot of attention. The concept is not new. It amounts to using the resources of a secondary or tertiary datacentre to extend the capacity of an application deployed in the primary datacentre.

But performing such a feat in real-time has been problematic because of the challenges of rapidly provisioning applications onto external resources. Latency and other less-than-optimal conditions on wide-area networks previously made it difficult, if not impossible, to deploy a virtualised application in real-time.

Now that we've addressed that concern and understand what's necessary to perform such a task successfully, we need to look past simply extending datacentre resources into external environments to how we might more effectively use those resources.

The foundation of cloud-bursting provides us the real-time deployment capabilities necessary to build out more intelligent architectures. Such architectures would exploit compute power across multiple cloud-computing environments to capitalise on the benefits allegedly afforded by cloud computing.

Balancing business with operations
Ultimately the goal of cloud computing should be to enable a more flexible and cost-effective architecture that allows IT organisations to meet not only operational goals but also business requirements.

Those business requirements, such as cost per transaction and the implementation of specific functionality, often clash head-on with operational goals related to performance and availability. Indeed, the need to maintain user connectivity, and thus productivity, led to the extension of HTTP to carry state information in cookies. That change increased the amount of data traversing the network and required special attention to persistence-based networking in datacentre architectures.

The architecture needed to support this common scenario often conflicts with performance goals, because persistence-based networking carries a processing cost measured in milliseconds, and the milliseconds required to maintain application-layer connections to specific application instances add up.
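The mechanics of persistence-based networking can be sketched in a few lines. The backend names and the cookie name below are hypothetical, purely for illustration: the balancer inspects a persistence cookie and, if one is present, returns the request to the same application instance that served the session before.

```python
# Hypothetical pool of application instances; names are illustrative only.
BACKENDS = ["app-1.dc.example", "app-2.dc.example", "app-3.dc.example"]

def route_request(cookies: dict) -> str:
    """Pick a backend for a request, honouring a session-persistence cookie.

    If the client presents a valid persistence cookie, the request must
    return to the same instance; otherwise an instance is chosen and the
    cookie set so subsequent requests stay "sticky".
    """
    backend = cookies.get("lb_persist")
    if backend in BACKENDS:
        return backend  # sticky: same instance as before
    # No (valid) cookie yet: pick an instance. Here we simply take the
    # first; a real balancer would use a load-aware algorithm.
    chosen = BACKENDS[0]
    cookies["lb_persist"] = chosen  # the balancer would set this cookie
    return chosen
```

The per-request cookie lookup is exactly the overhead the column describes: small in isolation, but paid on every request that traverses the balancer.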

Balancing business requirements with operational goals in the future will require more intelligent architectures, which in turn require more intelligent datacentre components.

To balance capacity and performance needs with business requirements across all applications, we will need the ability to evaluate every request in the context of the user, the networks and applications involved, and the business requirements that must be met.

To meet those business requirements it may be necessary to send one request to the local datacentre while another is directed to a cloud-computing deployment. In some cases it may be necessary not only to direct a request to a cloud-computing deployment but to a specific cloud-computing deployment based on its physical location.

We will need to be able to distribute requests across multiple application deployments in real-time based on the context of each request, such that we can comply with jurisdictional restrictions and performance-related requirements while maintaining as low a cost per transaction as possible.
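The kind of per-request evaluation described above can be illustrated with a minimal sketch. All of the deployment names, regions, latency figures and costs below are assumptions invented for the example; the point is the ordering of concerns: jurisdiction first, then performance, then cost per transaction.

```python
# Illustrative deployments; names, regions, latencies and costs are
# hypothetical assumptions, not real data.
DEPLOYMENTS = [
    {"name": "primary-dc", "region": "EU", "latency_ms": 20, "cost": 1.0},
    {"name": "cloud-eu",   "region": "EU", "latency_ms": 45, "cost": 0.4},
    {"name": "cloud-us",   "region": "US", "latency_ms": 90, "cost": 0.3},
]

def select_deployment(request: dict) -> str:
    """Choose a deployment for one request based on its context."""
    # 1. Jurisdiction: keep only deployments the data is allowed to reach.
    allowed = [d for d in DEPLOYMENTS
               if request.get("data_region") in (None, d["region"])]
    # 2. Performance: drop anything that cannot meet the latency requirement.
    fast_enough = [d for d in allowed
                   if d["latency_ms"] <= request.get("max_latency_ms", 100)]
    if not fast_enough:
        fast_enough = allowed  # degrade gracefully rather than fail outright
    # 3. Cost: of what remains, take the cheapest per-transaction option.
    return min(fast_enough, key=lambda d: d["cost"])["name"]
```

A request bound to EU data with a tight latency budget lands in the local datacentre; relax the budget and the cheaper EU cloud wins; with no constraints at all, the cheapest deployment anywhere is chosen.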

Strategic point of cloud control
To achieve such an intelligent, context-aware architecture comprising external and possibly internal cloud-computing models, the IT organisation will need to retain a strategic point of control through which such decisions can be made.

Simply deploying an application into an external cloud is not enough. There needs to be a gatekeeper — a director of traffic — that is capable of taking into consideration the full context of every request to make the right decision at the right time — on-demand, across clouds and across datacentres.

Cloud-bursting lays the foundation for such an architecture by providing the technological capabilities required to duplicate an application in real-time in an off-site cloud-computing environment and immediately integrate it into the datacentre architecture in a way that makes the increase in capacity appear seamless.

It provides IT organisations with the ability to increase capacity for a single application on-demand. By extending that concept laterally, across the application landscape, we can begin to implement a more holistic datacentre strategy that may take advantage of multiple clouds to extend datacentre capacity rather than just a single application's capacity.
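The capacity arithmetic behind bursting is simple to sketch. The per-instance throughput and local capacity figures below are hypothetical assumptions: demand is served locally up to the datacentre's limit, and only the overflow is provisioned into the cloud.

```python
def plan_capacity(current_load: int, per_instance: int, local_capacity: int):
    """Return (local, burst) instance counts for the current load.

    All parameters are illustrative: current_load is requests per second,
    per_instance is one instance's throughput, local_capacity is the number
    of instances the primary datacentre can host.
    """
    needed = -(-current_load // per_instance)  # ceiling division
    local = min(needed, local_capacity)        # fill the datacentre first
    burst = max(0, needed - local_capacity)    # overflow goes to the cloud
    return local, burst
```

For example, 2,500 requests per second against instances that each handle 200 needs 13 instances; a 10-instance datacentre runs full and bursts three instances into the cloud.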

Provision extended datacentre resources
Once the datacentre can be extended, it becomes necessary to determine how best to provision those extended resources in real-time to meet operational and business goals.

That determination, that control, will remain the purview of IT operations. It will require context-aware datacentre components that can be instructed how to interpret context to meet business and operational goals, integrated into a holistic control plane capable of managing a broad set of resources located across the globe.

Cloud-balancing will allow the organisation to exploit capacity and resources in multiple cloud-computing environments while maintaining control over their use. The same principles applied to achieve cloud-bursting can and will be extended to take context (costs, performance, availability and regulatory compliance) into consideration as a means of using multiple cloud-computing environments in real-time.
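One way such a cloud-balancing decision could be expressed is as a weighted score across environments. The weights, environment figures and compliance flags below are invented assumptions; compliance acts as a hard filter, while cost, latency and availability trade off against each other.

```python
# Hypothetical weights and environment figures, purely for illustration.
WEIGHTS = {"cost": 0.4, "latency": 0.4, "availability": 0.2}

ENVIRONMENTS = {
    "datacentre": {"cost": 1.0, "latency": 0.2, "availability": 0.999, "compliant": True},
    "cloud-a":    {"cost": 0.3, "latency": 0.6, "availability": 0.995, "compliant": True},
    "cloud-b":    {"cost": 0.2, "latency": 0.5, "availability": 0.990, "compliant": False},
}

def balance(require_compliance: bool = True) -> str:
    """Return the environment with the best weighted score.

    Lower cost and latency are better; higher availability is better.
    Non-compliant environments are excluded when compliance is required.
    """
    candidates = {name: e for name, e in ENVIRONMENTS.items()
                  if e["compliant"] or not require_compliance}

    def score(e):
        return (WEIGHTS["cost"] * e["cost"]
                + WEIGHTS["latency"] * e["latency"]
                - WEIGHTS["availability"] * e["availability"])

    return min(candidates, key=lambda name: score(candidates[name]))
```

With compliance required, the cheapest non-compliant cloud is excluded and the compliant cloud wins on cost; drop the compliance requirement and the decision shifts, which is exactly why such knobs belong at the IT organisation's strategic point of control.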

Cloud-bursting is undoubtedly appealing, but it is only the first step on a much longer, more exciting journey towards virtually extending the datacentre with external cloud-computing resources that can be used intelligently to meet the ever-increasing demands of business stakeholders and application end-users.

Lori MacVittie is responsible for application services education and evangelism at application delivery firm F5 Networks. Her role includes producing technical materials and participating in community-based forums and industry standards organisations. MacVittie has extensive programming experience as an application architect, as well as in network and systems development and administration.
