
Optimizing existing facilities for cloud computing

The most significant opportunities can be divided into three key areas: employing a high-density system configuration, optimizing the power architecture for high availability and managing complexities through infrastructure management.
Written by Ryan Huang, Contributor

Recognizing vulnerabilities in existing facilities before embarking on a cloud computing deployment offers an opportunity for potential cloud computing adopters to fortify critical systems.

This is done primarily by optimizing the end-to-end infrastructure with fault-tolerant systems – including power, precision cooling and infrastructure management – to sustain the highest possible levels of availability and service while reducing operating costs and management complexity, according to Emerson Network Power.

The most significant opportunities for facility optimization can be divided into three key areas:

• Employing a High-Density System Configuration

• Optimizing the Power Architecture for High Availability

• Managing Complexities through Infrastructure Management

Employing a High-Density System Configuration

As the complexity of cloud computing continues to grow in line with demand, the use of blade servers allows enterprises to dramatically increase compute power in the rack, said Wesley Lim, director of DCIM (Asia) at Emerson Network Power. "As blade server usage increases, high-density rack configurations have become a best practice for enterprise data centers considering a cloud computing architecture."

A number of factors make high-density configurations an attractive option for enterprises, particularly when paired with a cloud computing architecture. Implementing a high-density configuration increases both the compute capacity and energy efficiency of an existing facility, particularly now that average rack densities reach well over 10 kW per rack, Lim added.

Ultimately, high-density computing enables facilities to grow upward rather than outward.

However, increased rack densities paired with high server-level compute loads can result in "hot spots" which, if not managed properly, can severely impact the availability of virtualized servers, said Arunangshu Chattopadhyay, director of Power Product Marketing at Emerson Network Power. "Therefore, enterprises must take steps to ensure that critical systems are backed with adequate cooling support optimized for virtualized, high-density environments."

Rack- and row-based precision cooling solutions are the ideal choice for cooling high-density architectures thoroughly and efficiently, noted Chattopadhyay, who is also the company's head of Central Technical Support (Asia).

"The implementation of scalable precision cooling solutions facilitates the rapid deployment of high-density computing environments for most facilities, including both raised and non-raised floor data centers. By placing the cooling element close to heat sources – typically in the rack or rack row – enterprises can expect to achieve up to 50 percent energy savings over traditional perimeter cooling architectures," he explained.

Optimizing the Power Architecture for High Availability

Implementing a high-density system configuration to increase efficiency and performance is important, but it is also important for data center managers to ensure those energy savings and performance gains do not come at the cost of reduced availability.

According to Chattopadhyay, while "five nines" of availability is becoming increasingly attainable in cloud deployments – particularly IaaS architectures – many clouds still suffer unacceptable levels of downtime. To achieve the highest possible levels of availability, data center managers should examine their existing power infrastructures closely to identify and eliminate single points of failure. When evaluating external cloud providers, the vendor's data center power infrastructure and availability strategy should also be examined and understood. A common best practice is to establish redundancy within the UPS architecture.
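As a rough illustration of what an availability target like "five nines" means in practice, the permitted downtime per year can be computed directly from the availability figure (the calculation below assumes a 365-day year):

```python
# Downtime budget implied by an availability target.
# "Five nines" (99.999%) leaves only a few minutes of downtime per year.

def annual_downtime_minutes(availability: float) -> float:
    """Minutes of downtime per year implied by an availability fraction."""
    minutes_per_year = 365 * 24 * 60  # 525,600 minutes in a 365-day year
    return minutes_per_year * (1 - availability)

for nines, availability in [(3, 0.999), (4, 0.9999), (5, 0.99999)]:
    print(f"{nines} nines ({availability:.5f}): "
          f"{annual_downtime_minutes(availability):.2f} min/year")
```

At five nines, the annual budget works out to roughly 5.3 minutes of downtime – which is why eliminating single points of failure in the power chain matters so much.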

"For enterprises seeking to achieve scalability without impacting availability, N+1 redundancy remains the most cost-effective option for high availability data centers and is well-suited for high density cloud computing environments. In a parallel redundant (N+1) system, multiple UPS modules are sized so that there are enough modules to power connected equipment (N), plus one additional module for redundancy (+1). When executed correctly, redundant on-line UPS architecture enables the enterprise data center to achieve high levels of efficiency without compromising the availability needed for business-critical applications," he added.

Managing Complexities through Infrastructure Management

Many enterprises overlook the additional performance and availability benefits that can be achieved through the implementation of a comprehensive infrastructure management solution, said Lim.

By providing detailed analysis of a data center's performance over time, infrastructure management solutions enable cloud providers to develop accurate pricing models based on actual performance, maximizing profitability while remaining fully accountable to service-level agreements (SLAs), he explained.

To create an optimal cloud computing environment, data centers need to bridge the gap between the physical layer of the data center infrastructure (primarily comprised of power, cooling and facility resources) and the IT infrastructure (actual compute, storage and communications activity), Lim added.

"While infrastructure management solutions currently support the monitoring and management of the byproducts of IT infrastructure activity (increased power consumption, heat dispersal, etc.), the management of virtualized IT systems and the data center's physical infrastructure remain disjointed. This creates a critical vulnerability in enterprise data centers with cloud computing architectures," said the director of DCIM.

Intelligent capacity planning will ultimately enable enterprises to effectively aggregate and correlate real-time data across a data center's heterogeneous IT and facility equipment, according to Lim.
