The impact of the changing datacenter

Summary: The confluence of many lines of technology is encouraging organizations to move from a physical to a virtual world. What are the likely impacts?


We all recognize that the tech industry is moving away from dedicated physical systems and the software tied to them. We're now looking at a world in which processing, storage, and networking are treated as pooled resources to be provisioned, orchestrated, and then re-provisioned for the next task automatically.

After speaking with a number of industry luminaries, such as Andrew Hillier, CTO of CiRBA, and Boris Renski, Vice President and Co-Founder of Mirantis, it's clear that the tech industry has begun to think about IT resources in a different and far more flexible way.

Moving from office buildings to hotels

In the past, datacenters were viewed like office buildings. Space would be leased or purchased for workloads and then that space would be populated with systems, storage, networking, power and cooling equipment. Once the workloads had moved in, they were likely to stay in that space for a very long time. In some cases, decades. While individual components in that space would be updated with newer technology from time to time, the workload would stay in residence.

Now we're starting to view datacenters as hotels. Workloads check in, stay for a time, check out, and then check in somewhere else (such as another organization-owned datacenter or a datacenter offered by a cloud service provider). Work that used to live on-premises in one place might now be spread across facilities in several cities, owned by many different organizations, and move around as needed.

The impact to organizations

A combination of many layers of virtualization (see Sorting out the different layers of virtualization for more information) and off-premises computing models is changing how organizations purchase and deploy processing, storage, networking, power and cooling assets. Let's consider some of the changes:

  • Systems/processing — Rather than purchasing systems that have features required by a single workload, organizations are moving to purchase inexpensive, easily extendable and replaceable systems. The combination of processing virtualization and orchestration software allows many workloads to use the available systems and move around as needed to optimize the use of those computing resources. This is likely to result in fewer systems being needed and reductions in both power consumption and heat production.

  • Storage — Storage virtualization technology makes it possible to use many different types of storage media, located on- or off-premises depending upon the performance and cost requirements of the workload. Data items will be automatically moved to the most appropriate storage media, then compressed and de-duplicated to reduce the storage capacity required. This is also likely to result in fewer storage systems being needed and reductions in both power consumption and heat production.

  • Management — Applications that used to reside on a single system are now likely to be constructed of many services, replicated on many systems for reliability and scalability, and spread across many datacenters. This means that management tools must be able to monitor processes on everything from smartphones and tablets near the user to many systems in different datacenters. These management tools must collect a great deal of data quickly and with minimal overhead. They must analyze this data and learn what is normal and what is an anomaly. They must notify staff of anomalies and recommend (or recommend and execute) configuration changes to prevent outages or service interruptions.

  • Power and cooling — If an organization takes virtualization and cloud computing concepts to their logical conclusion, IT infrastructure should require less power, produce less heat and, of course, require less cooling.
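To make the systems/processing point concrete, here is a minimal sketch of the kind of placement decision an orchestration layer makes when it packs many workloads onto a shared pool of hosts. All names, capacities, and the first-fit strategy are illustrative assumptions, not taken from any particular orchestrator:

```python
# Hypothetical sketch: first-fit placement of virtual workloads onto a
# shared pool of hosts, so fewer physical systems are left running.
# Host names, capacities, and the strategy are illustrative only.

from dataclasses import dataclass, field

@dataclass
class Host:
    name: str
    cpu_free: float                 # available vCPUs
    ram_free: float                 # available GB of RAM
    workloads: list = field(default_factory=list)

def place(workloads, hosts):
    """Assign each workload (name, cpu, ram) to the first host that fits.

    Returns a mapping of workload name -> host name; workloads that fit
    nowhere map to None. Sorting largest-first tends to pack tighter,
    which is what lets an operator consolidate and power down spare hosts.
    """
    placement = {}
    for name, cpu, ram in sorted(workloads, key=lambda w: -(w[1] + w[2])):
        for host in hosts:
            if host.cpu_free >= cpu and host.ram_free >= ram:
                host.cpu_free -= cpu        # reserve the capacity
                host.ram_free -= ram
                host.workloads.append(name)
                placement[name] = host.name
                break
        else:
            placement[name] = None          # no host had room
    return placement

hosts = [Host("host-a", cpu_free=8, ram_free=32),
         Host("host-b", cpu_free=4, ram_free=16)]
workloads = [("web", 2, 4), ("db", 4, 16), ("cache", 1, 8)]
print(place(workloads, hosts))
```

Real orchestrators re-run this kind of calculation continuously, migrating workloads as demand shifts, which is what makes the "hotel" model above possible.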

What to do with all of that datacenter space?

When touring datacenters, I've often noticed that facilities designed 10 or more years ago have quite a bit of empty space. Systems, storage, power and networking equipment have gotten far more powerful and smaller in the intervening years. This means that there is a great deal of unused real estate out there.

Should organizations add hosting, managed services and cloud computing to their offerings? Should they ask service providers to take over that space and offer services to other tenants to create an additional stream of rental revenue? Should they sell the building to a service provider and rent the space they're currently using?

These are complicated questions and the answers are likely to be different depending upon the focus of the company. What's your organization planning to do?

About

Daniel Kusnetzky, a reformed software engineer and product manager, founded Kusnetzky Group LLC in 2006. He's literally written the book on virtualization and often comments on cloud computing, mobility and systems software. In his spare time, he's also the managing partner of Lux Sonus LLC, an investment firm.
