One of the reasons virtualization (the precursor to cloud computing) gained popularity in the early 2000s is that companies had too many servers running at low utilization. The prevailing wisdom was that every box needed a backup and under-utilization was better than maxing out compute capacity and risking overload.
The vast amounts of energy and money wasted on maintaining all this hardware finally led businesses to datacenter consolidation via virtual machines, and those virtual machines began migrating off-premises to various clouds.
The problem is, old habits die hard. And the same kinds of server sprawl that plagued physical datacenters 15 years ago are now appearing in cloud deployments, too.
According to a recent survey from RightScale, 35 percent of cloud spending is wasted on VM instances that are over-provisioned and not optimized. The report found that most enterprises run their virtual instances 24/7, that many VMs run at less than 40 percent of CPU and memory capacity, and that old backup snapshots and other unattached data repositories are clogging up cloud storage resources.
It turns out that the ease and elasticity of the cloud are a double-edged sword. When spinning up new instances is effortless, who has the discipline to keep track of and sunset resources when they're no longer needed?
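That tracking discipline can be partly automated. As a minimal illustrative sketch (not any vendor's actual tooling — the instance data, the 40 percent threshold echoing the RightScale figure above, and the 30-day idle cutoff are all assumptions), a script could flag instances that look over-provisioned or abandoned:

```python
# Illustrative sketch: flag cloud instances that look over-provisioned
# or abandoned. Thresholds and sample data are assumptions for demonstration.
from dataclasses import dataclass

@dataclass
class Instance:
    name: str
    avg_cpu_pct: float      # average CPU utilization over the review window
    avg_mem_pct: float      # average memory utilization over the window
    days_since_access: int  # days since the instance was last used

def flag_waste(instances, util_threshold=40.0, idle_days=30):
    """Return two review lists: under-utilized and apparently idle instances."""
    under = [i for i in instances
             if i.avg_cpu_pct < util_threshold and i.avg_mem_pct < util_threshold]
    idle = [i for i in instances if i.days_since_access > idle_days]
    return under, idle

fleet = [
    Instance("web-01", 72.0, 65.0, 1),
    Instance("batch-07", 12.0, 18.0, 3),
    Instance("test-legacy", 2.0, 5.0, 90),
]

under, idle = flag_waste(fleet)
print([i.name for i in under])  # ['batch-07', 'test-legacy'] — below 40% CPU and memory
print([i.name for i in idle])   # ['test-legacy'] — untouched for 90 days
```

In a real deployment the utilization numbers would come from a monitoring service such as Azure Monitor rather than hard-coded samples, and flagged instances would be reviewed before being deallocated.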
This was one of the lessons learned at Ecolab, a global provider of water, hygiene, and energy technologies and services. Ecolab works with large-scale facilities around the world, monitoring and managing water systems using a vast network of sensors and probes. A team of some 60 developers works on mining this data for performance insights and trendspotting.
Craig Senese, Director of Analytics and Development at Ecolab, says the transition from on-premises datacenter to cloud was critical, as physical resources were reaching their limits. Ecolab was already using Microsoft technologies to manage its infrastructure and analytics, so the Microsoft Azure Cloud was a logical fit.
Once Azure was deployed, however, developers began to leverage resources without focusing on optimization and cost-efficiency.
"I think the biggest lesson that we've learned to this point is that it's a different model," Senese said. "You've gone from having our own servers, having our own datacenter working through IT to get the resources you need, to basically carte blanche for our developers where they can add and remove resources as needed. The lesson learned there has been that we really need to make sure that everyone is educated on our plan as an architecture, our plan as a resource model, because it's very easy to spend. We need to make sure that we control that and we're not spinning up resources uncontrollably.
"We have a large team, and making sure everyone is on the same page with the strategy of how we want to deploy in the cloud is important."
Being new to cloud computing, Senese and his team weren't sure where and how tweaks could be made to optimize Ecolab's cloud usage and efficiency. Fortunately, Microsoft reps helped assess the environment and workloads, then build out a plan.
"We started by working with Microsoft to see where we could optimize, and they were great in helping us understand where we could optimize our spend," Senese said. "We do a lot of compute. We do a lot of data analytics, and we wanted to see whether we can optimize spending, because we were new to this space."
Once the team found out more about Microsoft's strategies and created a resource model that could support existing workloads and scale as needed, they were able to spread the word among other areas of the business.
To find out more about Ecolab's setup and how Microsoft's experts can help you guide the discussion forward, please visit zdnet.com/Microsoft-cloud.