How your inefficient data center hampers sustainability - and AI adoption

The power demands of AI workloads will only exacerbate the challenges facing enterprises, says HPE's chief technologist in sustainable transformation.
Written by Eileen Yu, Senior Contributing Editor

Organizations are still choosing to over-provision their data center and IT resources, which can get in the way of efforts to run these facilities more sustainably.

It is not uncommon, for instance, for enterprises to turn off features that make systems run efficiently, John Frey, chief technologist of sustainable transformation at Hewlett Packard Enterprise (HPE), told ZDNET. Frey explained that HPE ships its devices set to operate at their most efficient level, with power performance optimized. Unfortunately, customers often turn off the default setting as soon as they receive the new product, worried that it would deprive the system of the computing power it needs to run without lag.

Also: Apple is building a high-security OS to run its AI data centers - here's what we know so far

"So we can design our products to operate in the most efficient way and set it to do so as a default. The question is, how do we get customers to leave it [running] that way?" Frey explained to ZDNET on the sidelines of HPE Discover 2024.

Frey noted that, in most cases, businesses operate their data center and IT infrastructures at just 30% utilization, choosing to over-provision to ease their anxiety about whether these systems will continue to operate smoothly. These businesses end up with a hardware stack that is hyperefficient but used inefficiently, he said. He added that a huge part of HPE's efforts goes toward helping customers use its products more efficiently, which in turn reduces the energy needed to power these systems.
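The energy cost of that 30% utilization figure can be illustrated with the widely used linear server power model, in which a server draws substantial power even when idle. The wattage figures below are illustrative assumptions, not HPE numbers — a minimal sketch only:

```python
# Sketch: why low utilization wastes energy, using the common linear
# server power model: power = idle + (peak - idle) * utilization.
# The 250 W idle / 500 W peak figures are illustrative assumptions.

def server_power(utilization, idle_w=250.0, peak_w=500.0):
    """Estimated draw in watts at a given utilization (0.0 to 1.0)."""
    return idle_w + (peak_w - idle_w) * utilization

# Watts drawn per unit of useful work delivered:
low = server_power(0.30) / 0.30    # over-provisioned fleet at 30% utilization
high = server_power(0.80) / 0.80   # consolidated fleet at 80% utilization

print(f"at 30% utilization: ~{low:.0f} W per unit of work")   # ~1083 W
print(f"at 80% utilization: ~{high:.0f} W per unit of work")  # ~563 W
```

Under these assumptions, a fleet held at 30% utilization burns roughly twice the energy per unit of work as one run at 80% — the "hyperefficient hardware used inefficiently" problem Frey describes.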

Customer education and a change in mindset play a big part in driving overall sustainability efforts, Frey said. For its part, HPE provides white papers and case studies, including adoption frameworks that guide customers through the change.

Frey noted that metrics and analytics also have a role in quantifying the returns for businesses, be it in terms of dollars, risk reduction, enhanced resilience, reduced carbon emission, or cybersecurity benefits.

Also: Singapore keeping its eye on data centers and data models as AI adoption grows

While regulations and mandating some of these operating standards can help drive adoption, these should be rolled out in collaboration with the industry and user community, Frey said. This will ensure there are no unintended consequences, such as poor performance in other areas. Policies that lead to such unintended results may compel companies to move workloads out of the regulating country, defeating the government's purpose in setting out these mandates, he added.

Asked about key barriers to building more sustainable data centers, Frey noted that the one-size-fits-all strategy no longer works, particularly amid the anticipated spike in artificial intelligence (AI) workloads. Facilities that power AI applications will likely need liquid cooling to maintain or improve energy efficiency for these compute-intensive environments. On the other hand, tapping ambient air may be sufficient to cool the insides of data centers running more general-purpose applications, Frey explained. 

Also: Business sustainability ambitions are hindered by these four big obstacles

Efforts to address higher temperatures, such as Singapore's data center operating standards for tropical climates, are also better suited for traditional workloads, Frey said. As companies move toward higher rack power density with their adoption of AI, they will likely need to move to liquid cooling environments, he noted. 

Eventually, Frey believes, most data center operators will move in the same direction, as newer, more powerful processors capable of handling more tasks also generate more heat.

The average IT rack used to run at between 3 and 5 kilowatts (kW). Over the past decade, that figure has grown to more than 20kW for mainstream computing workloads, he noted. Power requirements climb further for racks running AI workloads or training models, exceeding 50kW per rack. Ambient air alone is not sufficient to cool such environments, driving the need for liquid cooling, Frey said.
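The rack figures above make the air-cooling limit concrete. Using the standard sensible-heat approximation for air cooling (CFM ≈ 3.16 × watts / ΔT°F), the airflow needed per rack can be estimated; the 20°F supply/return temperature delta below is an assumption for illustration:

```python
# Sketch: approximate airflow needed to air-cool a rack, using the
# sensible-heat rule of thumb CFM ≈ 3.16 * watts / delta_T(°F).
# The 20°F supply/return delta is an assumed, illustrative value.

def required_cfm(rack_watts, delta_t_f=20.0):
    """Approximate airflow (cubic feet per minute) to remove rack_watts of heat."""
    return 3.16 * rack_watts / delta_t_f

for kw in (5, 20, 50):  # rack densities cited in the article
    print(f"{kw:>3} kW rack -> ~{required_cfm(kw * 1000):,.0f} CFM")
```

At 5kW a rack needs roughly 790 CFM, which conventional raised-floor delivery handles comfortably; at 50kW the same math demands nearly 8,000 CFM per rack, which is why AI-density racks push operators toward liquid cooling instead.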

Also: Global tech spending expected to keep climbing on AI demand

In fact, demand for liquid cooling has driven the data center thermal management market to $7.67 billion, according to tech research and advisory firm Omdia. The market is expected to climb at a compound annual growth rate of 18.4% to reach $16.8 billion in 2028, fueled by the adoption and development of AI.

In particular, liquid cooling has seen significant growth in China and North America, Omdia said. "The data center thermal management [market] is advancing due to AI's growing influence and sustainability requirements," the research firm noted. "Despite strong growth prospects, the industry faces challenges with supply chain constraints in liquid cooling and embracing sustainable practices."

Omdia added that the integration of AI-optimized cooling systems, strategic vendor partnerships, and an ongoing push for energy-efficient and environmentally friendly solutions will shape the industry's development.
