Powering down excess server capacity may help businesses save on energy costs, but virtualization and cloud computing are giving enterprises even less of a reason to do so, according to an analyst.
Errol Rasit, principal analyst at Gartner, noted that virtualization technology has allowed "more dynamic movement of applications and workloads" where apps can be provisioned more quickly than on a physical server.
With this capability, organizations are increasingly balancing different workloads depending on the time of day to achieve better server utilization rates. For example, companies may choose to run batch processing applications overnight, he explained in a phone interview.
"During the night time around the world, this is when we get some of the lower-level applications working, so it's a more efficient utilization of equipment," said Rasit. "During the day, it may be your e-mail server, during the night it may be some sort of data warehousing…rather than have a dedicated server for both types [of workloads]."
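The day/night split Rasit describes is typically implemented with a scheduler. As a purely illustrative sketch, a crontab entry like the following could shift a data-warehousing batch run to overnight hours on the same server that handles daytime workloads (the script paths here are hypothetical):

```shell
# Hypothetical crontab illustrating the day/night workload split.
# Assumes batch scripts exist at these (illustrative) paths.

# Kick off the data-warehousing batch run at 11 p.m. on weeknights,
# after daytime workloads such as e-mail have quieted down.
0 23 * * 1-5  /opt/batch/run_warehouse_etl.sh

# Wind the batch job down at 6 a.m., before business hours begin.
0 6  * * 1-5  /opt/batch/stop_warehouse_etl.sh
```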
Cloud computing also reduces the need to power down servers that may not be in use, he added. This is because an organization will purchase only enough capacity to power day-to-day workloads--if more is required due to specific projects, it will tap an "overdraft facility" in the form of a cloud provider or traditional service provider.
Aman Neil Dokania, Asia-Pacific and Japan vice president and general manager for infrastructure software and blades sales at HP Enterprise Business, pointed out in an e-mail that many servers, such as HP's ProLiant and blade systems, are Energy Star-certified and therefore more energy-efficient.
Rather than turn servers off, putting them on standby will "return significant energy savings without compromising agility when business needs arise", he said.
"We believe some customers would put servers [on] standby mode instead of fully powering them off as it is much faster to reactivate when the need arises," said Dokania. "Ultimately, it all depends on the criticality of application workloads and service level requirements of the business."
Organizations may, however, still choose to power down their servers, he added, explaining that such scenarios may include hardware upgrade exercises.
Consider shutdown for non-virtualized workloads
Gartner's Rasit also acknowledged that there is room for organizations to power off their servers. According to him, only about 20 percent of organizations globally have adopted virtualization. On top of that, "only half" of applications in the enterprise environment are "good candidates for virtualization and consolidation".
As such, enterprises may consider powering down servers running apps that are not virtualized or are unlikely targets for virtualization, either overnight or over the weekend, particularly if they do not expect high application usage during those shutdown periods, he said.
Rasit, however, warned that "with those sorts of plans, there is an element of risk. That's what the conservative end-user organizations argue--if [the server is] up and running, then it's available. If …you power the server off, then obviously if an application request came through, you'd have to power the server up and make sure the application was running correctly, and so response time is slower".
When a server is powered on and off "multiple times throughout the day or over the course of a month or year, there's going to be additional strain that you're adding to the server, which could lead to early failure of hardware components", he pointed out.
In addition, savings from the reduction in power and cooling costs as a result of switching servers off may not be significant if the data centers are located in rural areas with comparatively lower energy costs to begin with, he noted.
Alex Tay, data center services executive at IBM Asia-Pacific, told ZDNet Asia in an e-mail interview that companies should focus on the workload demand rather than dedicate a certain time of day to shrink the server pool. This is to ensure that the enterprise infrastructure will be capable of handling sudden unplanned increases in load.
When it comes to a more dynamic infrastructure where companies have the ability to turn servers on or off, there is often added complexity in operation and management, he added.
"With new tools and control parameters, more skilled operators [are needed] to run the infrastructure," he said. "Clearly, such infrastructure will also cost more in capital expense to provide the added control and automation functionality."
Noting that the concept of turning on and off servers has existed for many years, Tay added that "many" IBM customers in the region have employed the server on-off capability in their virtualization projects with Big Blue.
Not an option for RWS
Yap Chee Yuen, senior vice president and head for innovation and technology at Resorts World Sentosa (RWS), said in an e-mail interview that "it does make sense" to power down or put hardware in standby mode if server loads can be dynamically managed. "This will save energy costs or free up resources for other applications [and] is very similar to grid computing or the sharing of computing power across banks of computers," he pointed out.
However, he noted that the option would not be realistic for RWS, as the integrated resort runs applications with similar characteristics and its workloads therefore follow the same peak and off-peak cycles.
"For RWS, most of our critical apps are 24x7 and the workload of the apps has very much the same characteristics," he explained. "In other words, we do not have the highs of a group of applications corresponding to the lows of another group of applications, and therefore it is not feasible for RWS to adopt such an approach."