Cloud operating models represent a step up from the traditional and sometimes time-consuming "request resources from IT" process, explains Craig Senese, Director of Analytics and Application Development at Ecolab. Developers will need some time to get comfortable with new tools as they spin up their own compute and storage environments.
To avoid overspending in this newly elastic and agile environment, Senese and his team consulted with Microsoft on allocation guidelines and on creating comprehensive policies to keep costs in line and maximize value.
Bhavik Shah, Application Development Manager at Ecolab, says the team kept its on-premises data center environment running while phasing in Microsoft Cloud Services. Replicating databases and data across both systems, Ecolab was able to transition gradually by moving select analytics workloads to the cloud, then expanding that selection, and finally ingesting data directly from sensors to the cloud.
One critical element of Ecolab's process is the "Cooker," a data normalization app that takes information in disparate formats (from Ecolab sensors, customer equipment, and third parties, for instance) and converts it all into a standard format for analysis. This approach enables Ecolab to bridge multiple sources and even multiple generations of its own data collection software.
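The article doesn't describe the Cooker's internals, but the pattern it names is a common one: per-source parsers that all emit one canonical record type. A minimal sketch in Python, where the field names, source formats, and unit conversion are all hypothetical illustrations rather than Ecolab's actual schema:

```python
from dataclasses import dataclass

@dataclass
class Reading:
    """Hypothetical standard record every source is 'cooked' into."""
    source: str
    sensor_id: str
    value: float
    unit: str

def from_ecolab_sensor(raw: dict) -> Reading:
    # Assume in-house sensors already report metric values.
    return Reading("ecolab", raw["id"], float(raw["val"]), raw["unit"])

def from_customer_equipment(raw: dict) -> Reading:
    # Assume customer equipment reports temperature in Fahrenheit;
    # normalize to Celsius so all sources share one unit.
    celsius = (float(raw["reading"]) - 32) * 5 / 9
    return Reading("customer", raw["tag"], round(celsius, 2), "C")

# Dispatch table: adding a new source (or a new generation of
# collection software) means registering one more parser.
PARSERS = {
    "ecolab": from_ecolab_sensor,
    "customer": from_customer_equipment,
}

def cook(raw: dict) -> Reading:
    """Normalize one raw record into the standard format."""
    return PARSERS[raw["format"]](raw)
```

Downstream analytics then only ever see `Reading` objects, regardless of which source or software generation produced the raw data.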
When data-driven optimization is the product, it pays to have IT experts collaborate closely with sales and business representatives when presenting to customers. "We work directly with our business folks," Senese says. "We don't rely on the business to bring [ideas] to the customer alone, and they don't rely on us to bring it alone. It's definitely a team effort, because our customers are as interested in digital as we are. So we're educating them while we're presenting these models, because they have to understand where the data came from. We do that in this collaborative approach with business and our digital organization."
Thanks to the near-infinite scale of Microsoft Cloud Services, Ecolab can analyze decades of sensor data and performance metrics, helping developers build and train algorithmic models that can predict service alarms and component failures before they happen.
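The article doesn't say which models Ecolab trains, but the core idea of predicting failures before they happen can be illustrated with a toy stand-in: flag a sensor reading that drifts well above its recent rolling average, the kind of signal a trained model would surface earlier than a hard alarm threshold. The window size and drift factor below are arbitrary illustration values:

```python
from collections import deque
from statistics import mean

def drift_alerts(values, window=5, factor=1.5):
    """Return indices where a reading exceeds `factor` times the
    rolling mean of the previous `window` readings -- a simple
    proxy for an early-warning predictive model."""
    history = deque(maxlen=window)
    alerts = []
    for i, value in enumerate(values):
        if len(history) == window and value > factor * mean(history):
            alerts.append(i)
        history.append(value)
    return alerts
```

A real deployment would replace this heuristic with models trained on those decades of labeled sensor and failure data, but the shape of the problem is the same: compare live readings against learned normal behavior and raise a flag before the component actually fails.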