People like the idea of cloud-bursting — extending the capacity of a datacentre using external cloud resources. But extending resources is one thing; using them more effectively is quite another, says Lori MacVittie
The ability to perform on-demand cloud-bursting successfully has been attracting a lot of attention. The concept is not new. It amounts to using the resources of a secondary or tertiary datacentre to extend the capacity of an application deployed in the primary datacentre.
But performing such a feat in real-time has been problematic because of the challenges of rapidly provisioning applications onto external resources. The latency and less-than-optimal conditions inherent in wide-area networks previously made it difficult, if not impossible, to deploy a virtualised application in real-time.
Now that we've addressed that concern and understand what's necessary to perform such a task successfully, we need to look past simply extending datacentre resources into external environments to how we might more effectively use those resources.
The foundation of cloud-bursting provides us the real-time deployment capabilities necessary to build out more intelligent architectures. Such architectures would exploit compute power across multiple cloud-computing environments to capitalise on the benefits allegedly afforded by cloud computing.
Balancing business with operations
Ultimately the goal of cloud computing should be to enable a more flexible and cost-effective architecture in a way that enables IT organisations to meet not only operational goals but business requirements.
Those business requirements — such as cost per transaction and the implementation of specific functionality — often clash head on with operational goals related to performance and availability. Indeed, the need to maintain user connectivity — and thus productivity — led to the extension of the HTTP protocol to include the ability to carry state information in cookies, which increased the amount of data traversing the network and required special attention to persistence-based networking in datacentre architectures.
The architecture needed to support such a common scenario often conflicts with performance goals, because persistence-based networking carries a computational cost measured in milliseconds. The milliseconds required to maintain application-layer connections to specific application instances eventually add up.
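To make the persistence mechanism concrete, here is a minimal sketch of cookie-based persistence as a load balancer might apply it. The cookie name `SERVERID`, the instance pool, and the selection logic are all invented for illustration, not drawn from the article or any particular product.

```python
import hashlib

# Hypothetical instance pool behind a load balancer (names are illustrative).
POOL = ["app-01", "app-02", "app-03"]

def pick_instance(cookies: dict) -> tuple[str, dict]:
    """Return (instance, cookies_to_set).

    A returning client whose SERVERID cookie names a live instance stays
    pinned to it; otherwise an instance is chosen and the cookie issued.
    """
    pinned = cookies.get("SERVERID")
    if pinned in POOL:
        return pinned, {}  # persist: route to the same instance, no new cookie
    # No valid cookie: pick an instance deterministically from what we have.
    digest = hashlib.md5(repr(sorted(cookies.items())).encode()).hexdigest()
    chosen = POOL[int(digest, 16) % len(POOL)]
    return chosen, {"SERVERID": chosen}  # tell the client to persist

# First request: no cookie, so an instance is chosen and a cookie set.
instance, to_set = pick_instance({})
# A follow-up request carrying that cookie is pinned to the same instance.
again, _ = pick_instance(to_set)
assert again == instance
```

The cost the article describes is visible here: every request pays for cookie inspection and pool validation before any application work begins.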
What will be necessary in the future to balance business requirements with operational goals are more intelligent architectures which, of course, require more intelligent datacentre components.
To balance capacity and performance needs with business requirements across all applications, we will need the ability to evaluate every request in the context of the user, the networks and applications involved, and the business requirements that must be met.
To meet those business requirements it may be necessary to send one request to the local datacentre while another is directed to a cloud-computing deployment. In some cases it may be necessary not only to direct a request to a cloud-computing deployment but to a specific cloud-computing deployment based on its physical location.
We will need to be able to distribute requests across multiple application deployments in real-time based on the context of each request, such that we can comply with jurisdictional restrictions and performance-related requirements while maintaining as low a cost per transaction as possible.
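The distribution logic described above can be sketched as a routing decision that filters on jurisdiction and performance, then optimises for cost per transaction. The deployment names, regions, latencies and costs below are invented for the example; a real implementation would draw them from live monitoring and policy.

```python
from dataclasses import dataclass

@dataclass
class Deployment:
    name: str
    region: str          # physical location, i.e. jurisdiction
    latency_ms: float    # observed latency for this user and network
    cost_per_txn: float  # business metric: cost per transaction

# Illustrative set of deployments: the primary datacentre plus two clouds.
DEPLOYMENTS = [
    Deployment("primary-dc", "EU", 20.0, 0.05),
    Deployment("cloud-eu",   "EU", 35.0, 0.02),
    Deployment("cloud-us",   "US", 120.0, 0.01),
]

def route(user_region: str, max_latency_ms: float) -> Deployment:
    """Keep the request within the user's jurisdiction, discard
    deployments that would miss the performance target, then choose
    the cheapest of what remains."""
    eligible = [d for d in DEPLOYMENTS
                if d.region == user_region and d.latency_ms <= max_latency_ms]
    if not eligible:
        raise RuntimeError("no deployment satisfies the request's context")
    return min(eligible, key=lambda d: d.cost_per_txn)

# An EU user with a 50 ms budget goes to the cheaper EU cloud rather than
# the US cloud, despite the latter's lower cost per transaction.
print(route("EU", 50.0).name)  # cloud-eu
```

The ordering of the constraints is the point: jurisdiction and performance are hard requirements, while cost per transaction is minimised only among deployments that satisfy both.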
Strategic point of cloud control
To achieve such an intelligent, context-aware architecture spanning external and possibly internal cloud-computing models, the IT organisation must retain a strategic point of control through which such decisions can be made.
Simply deploying an application into an external cloud is not enough. There needs to be a gatekeeper — a director of traffic — that is capable of taking into consideration the full context of every request to make the right decision at the right time — on-demand, across clouds and across datacentres.
Cloud-bursting lays the foundation for such architecture by...