Transforming the Datacenter

Start With What You Have



Last week I had a typical conversation with a very successful IT executive. I had just presented benefits of a cloud computing solution, and he was excited about it, but explained he wouldn’t be able to start for a couple of years.

He had just signed a lease for more datacenter space, and he had tons of equipment on his books. He was worried that some of his critical applications might break if he made any sudden changes. His predicament is common, and so was his conclusion.

But that kind of thinking squanders a golden opportunity to reap early benefits from the cloud. So what do you do? You must start with what you have and build from there.

Here’s the catch: you need to know where to begin. I have said that from an internal management perspective, public and on-premises environments have a lot in common. Nonetheless, at any given time you will be consuming your services from a specific location, which could be an internal datacenter, a public service, or some combination of both. The question is, which should you choose?

The answer? It depends. There are some trade-offs to consider in this selection. Public clouds address many enterprise pain points very well. There are good reasons why many enterprises subscribe to a public infrastructure and platform service, such as Windows Azure. Public clouds relieve the organization of a large part of the physical administration and reduce the need for capital expenditures. This can mean quicker time to value.

However, public services also present a radical change, which doesn’t come without risks. Some corporations are reluctant to run sensitive applications over publicly accessible networks or pool resources with external entities.

On-premises environments present a simple solution to this dilemma. Enterprises can benefit from scalability, efficiency, and availability without radically altering their risk profile. Especially in cases where sensitive data is involved, regulatory compliance is difficult to maintain, or data governance is strict, an internal implementation of cloud-scale technologies is often the only viable option to drive better efficiency and agility.

There are other factors to think about. Financial considerations may also favor an on-premises solution. Most enterprises already have significant capital invested in an existing datacenter, including hardware and software. The incremental cost of using these assets is low. Even when the time comes to procure new systems, a large corporation can often run these internally for a lower cost than the price of a public service.

Leveraging an existing datacenter is often simpler because it can be done with incremental changes in technology. Large companies are already familiar with the systems they have deployed throughout their infrastructure. Updating these systems with newer OS versions that include cloud capabilities like resource elasticity, greater scalability, and more efficient use of capacity injects cloud capabilities into the datacenter without introducing compatibility issues. This approach becomes an effective stepping stone to the future.

Cloud computing introduces new capabilities that foster business process changes that place even greater demands on the datacenter. It is unrealistic to expect to reach utopia overnight. One key to success is leveraging the assets you have in your existing datacenter. However, you can augment this strategy with other options as well.


John Rhoton

About John Rhoton

John Rhoton is a contributor to CBS Interactive's custom content group, which powers this Microsoft sponsored blog. He is a technology strategist who specializes in consulting to global enterprise customers with a focus on cloud computing. His tenure in the IT industry spans over twenty-five years at major technology companies, defining and implementing business strategy. He has recently led corporate technical strategy development, business development, and adoption of cloud services, datacenter transformation, mobility, security and next-generation networking, while also driving key corporate knowledge management and community-building programs. John is the author of six books.

John Rhoton's views are his alone and do not necessarily represent those of Microsoft or CBSi.



  • Sunk Cost

    The IT guy should have been given a brief tutorial on sunk cost. It doesn't matter what you spent in the past; what matters is your profitability into the future. If moving into the Cloud is going to be more profitable for the company into the future than maintaining their own systems, then the company needs to move into the cloud and liquidate the systems it has on site.

    There are the security risks of moving your data to an open system. There are also risks of corporate espionage. When moving to the cloud you may need to invest in more encryption technologies/software solutions to secure the data rather than just dismissing it for an onsite hardware solution. Onsite solutions will become increasingly expensive as more businesses switch to the cloud. The cloud is a disruption and thus you need to understand the economics of disruption to see the emerging trends.
  • The cloud sounds great until... you can't reach it because the 'internet' is down. At least with a local disaster you have something to do, like deal with the local disaster. It is very annoying when what little we have 'out there' isn't available and there is nothing we can do about it.
    • Re: The cloud sounds great until...

      Hi Net-Tech_z, you make a good point. I would say it is an important consideration that should be weighed as part of a more complete risk analysis.
      Public cloud does depend on a network connection, which is a reason many of my customers are pursuing private and hybrid models.

      Best regards,

      John (@johnrhoton)
  • Re: Sunk Cost

    Hi bcreeo8 - thanks for your comment! You make some excellent points.

    I would agree that sunk cost should not be the deciding factor in making financial decisions. However, I do believe it is relevant for two reasons. First, any disruptive change will require support from a wide range of stakeholders. If they feel that the change will expose poor past investments or cast them in a negative light, then they may be resistant to the whole process.
    A second consideration is that heavy past investment in infrastructure will skew the current financial parameters. In other words, the incremental cost for continuing with private assets will be lower if the whole infrastructure is current and scaled for future capacity needs. So it is more likely to be sufficient.

    Regarding security, my most recent book is on the topic of cloud security. So I definitely appreciate its importance. Obviously there are also some security benefits in using a public cloud and there are a variety of options to protect data in any delivery model but, as you say, it is important for the decision-makers to fully understand the issues and economics involved.

    Thanks again for your thoughts!

    John (@johnrhoton)
  • It'll become cyclical.

    Years ago I worked for a law firm that handled "insurance subrogation". (Basically, Smith causes a car accident with Jones but doesn't have insurance. Jones' uninsured motorist coverage pays Jones, then sues Smith to get their money back.)

    After being there about 5 years we quickly lost most of our clients. My boss explained:

    Subrogation goes in cycles. For 5 years the attitude of the accountants is, "Why the heck are we doing this in house? It would be cheaper to outsource it!"

    Once everything is outsourced, the attitude of the accountants changes to, "Why the heck are we paying someone else to do this? It would be cheaper to do it in house!"

    I had started with the law firm right at the start of "Let's outsource it," and 5 years later it switched.

    Chances are, the same thing will happen at least 2-3 cycles with in-house versus external. Since internal equipment can be stretched to at least a 5-year cycle, chances are good the cycle will last up to 15 years. Eventually it will be almost exclusively external, but probably not within a decade.
    • Re: It'll become cyclical.

      Hi Rick_R,

      Thanks for your thoughts. I have also observed that the pendulum often swings back and forth on some of these strategic decisions, including outsourcing and insourcing.

      What are your thoughts on the implications of these cycles? My view is that they help to justify optimizing the technology using the resources at hand rather than waiting to embark on a more ambitious journey. That said, they also reinforce the need to design a flexible architecture that will be able to evolve through future cycles.

      Does this sound reasonable to you?

      Best regards,