
Understand trends to cut datacenter costs, sprawl

IT administrators must recognize increasing operational costs, changing data types and a "perfect storm" brewing in the server world to better manage costs as they transform their datacenters, urges HDS exec.
Written by Kevin Kwang, Contributor

IT administrators need to identify key market trends and factor them into their data center strategies to better manage costs and resources while avoiding hardware sprawl, according to a Hitachi Data Systems (HDS) executive.

In an interview with ZDNet Asia Wednesday, Hu Yoshida, vice president and CTO at HDS, said key trends in this market include evolving IT costs and changing types of data being stored. He also described a "perfect storm" of factors he said was brewing in the server world.

Elaborating on IT costs in data centers, he noted that while hardware costs have remained flat for the past 10 years, overheads have been increasing at a rate of 78 percent each year. The spike can be attributed to operational costs, he said, pointing to the increased power supply needed to run and cool a data center as a key contributing factor.

And as business is conducted faster today, Yoshida noted that there is no room for any system downtime. For instance, for most companies, migrating applications or data from one storage box to another can be done only on weekends, resulting in an increase in manpower and energy expenditure, he explained.

"This is no longer a people problem but a scheduling problem," he said.

He suggested that this challenge can be alleviated through virtualization, which he said supports data mobility, automation and dynamic provisioning. With virtualization, applications and resources can be moved within the virtual environment without disrupting business operations, he elaborated.

Pointing to the need to manage changing types of data being channeled into the data center, Yoshida said data volume, in general, is increasing at 66 percent CAGR (compound annual growth rate).

He noted it is more efficient to identify the various types of data being stored than to buy cheap boxes to hold them. "Buying more hardware simply adds to operational costs and datacenter sprawl but does not address the challenge of efficiently using available resources," said Yoshida.

For instance, he explained that "primary data", which is rising at 32 percent each year, is currently over-provisioned and the unused storage space is left unclaimed. This over-provisioning is compounded when copies of the primary data are created and the same amount of unused storage space is replicated. This creates a multiplier effect of increasing unused storage space, he said.
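As a rough illustration of that multiplier effect, consider a hypothetical primary dataset that is only 40 percent utilized and copied three times; the figures and the short Python sketch below are illustrative assumptions, not HDS numbers.

```python
# Hypothetical illustration of how over-provisioned capacity compounds with copies.
# All figures are made-up examples, not HDS data.

allocated_tb = 100        # capacity provisioned for the primary dataset
utilisation = 0.4         # only 40 percent of it actually holds data
copies = 3                # e.g. backup, test and disaster-recovery copies

unused_primary = allocated_tb * (1 - utilisation)   # 60 TB sits empty
unused_total = unused_primary * (1 + copies)        # the empty space is replicated too

print(f"Unused capacity in the primary alone: {unused_primary:.0f} TB")
print(f"Unused capacity across primary plus {copies} copies: {unused_total:.0f} TB")
```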

Having better insight into the types of data would allow IT administrators to recognize which data belongs in "tier 1" of the company's storage space, while less important information such as "unstructured and replicated" content can either be channeled to "tier 2 or 3" storage boxes or archived in the backend to free up storage space, he noted.
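One way to picture that tiering decision is as a simple classification rule over a dataset's attributes; the data types, thresholds and tier names in the sketch below are hypothetical assumptions for illustration, not an HDS policy.

```python
# Illustrative tiering rule; data types, thresholds and tier names are assumptions.

def assign_tier(data_type: str, days_since_access: int, is_replica: bool) -> str:
    """Return a storage tier for a dataset under simple, hypothetical rules."""
    if is_replica or data_type == "unstructured":
        # Less important content goes to cheaper tiers or the archive.
        return "tier 3 / archive" if days_since_access > 90 else "tier 2"
    if days_since_access <= 30:
        return "tier 1"   # active, business-critical data stays on primary storage
    return "tier 2"

print(assign_tier("database", 5, False))         # -> tier 1
print(assign_tier("unstructured", 200, False))   # -> tier 3 / archive
```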

Yoshida also described a "perfect storm" of factors brewing in the server world, all of which affect the storage market in which the company plays. He outlined three factors responsible for the "storm": more powerful and efficient Intel architectures such as "multicores, simultaneous multi-threading and level three cache"; the introduction of hypervisors that allow multiple virtual machines to operate in one file system, which means I/Os have to scale rapidly; and the increased bandwidth between servers and storage needed to handle the data explosion.

As a result, storage vendors such as HDS will need to offer the ability to dynamically provision and support these changes in the server space, he said.

However, he noted that its competitors are looking to "scale out" rather than "scale up" to deal with the increase in data coming through the data center and changes in the server space.

Scaling out, he said, means cheap, modular storage boxes are loosely linked by switches comprising two controller nodes, which makes rapid scaling a problem. Even if one node is 90 percent utilized while the other node is only at 10 percent, it is not possible to balance out the workload between the nodes in these switches, he noted.

HDS is looking to solve the problem by scaling up, which entails "throwing in more cache, processors and disks" and tightly coupling these with a global cache system.

Yoshida explained: "What we are trying to do is create a virtual pool of resources that are easily accessible and can dynamically provision for today's datacenter environment." He added that users of such a system can still scale out by partitioning needed resources from the centralized pool.
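One way to picture the scale-up argument is that a shared global pool can satisfy a request from any free capacity, while two loosely coupled controller nodes strand whatever capacity sits on the wrong node. The short sketch below is a simplified model of that behaviour, with made-up numbers, not HDS's actual architecture.

```python
# Simplified sketch contrasting a shared pool with a loosely coupled node pair.
# This is an illustrative model with made-up numbers, not HDS's implementation.

# Scale-out pair: capacity is split across two controller nodes that
# cannot share load, so a request must fit within a single node.
scale_out_nodes = {"node_a": 10, "node_b": 90}   # free capacity units per node

def can_serve_scale_out(request: int) -> bool:
    return any(free >= request for free in scale_out_nodes.values())

# Scale-up pool: cache, processors and disks sit behind one global pool,
# so any free capacity can be partitioned out to a request.
pool_free = sum(scale_out_nodes.values())        # 100 units in a single pool

def can_serve_pool(request: int) -> bool:
    return pool_free >= request

print(can_serve_scale_out(95))   # False: neither node alone has 95 units free
print(can_serve_pool(95))        # True: the global pool does
```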

Lack of time, awareness a problem
According to the CTO, it is not enough for IT administrators to understand these datacenter trends. "The financial people also need to understand 'storage economics' so that they don't just throw money at the problem," he said, adding that buying more hardware is only a short-term measure.

IT managers should also be freed from the "backend grunt work" so they can be equipped with the skills needed to transform the data center and have the time to plan and better manage their projects, Yoshida said.

"This lack of time for IT professionals to be equipped is a vicious cycle as virtualization of the data center will still fail if they do not know how to implement the system in the long run," he cautioned. "I think enterprises are starting to realize they are headed for a train wreck if they don't address this problem."
