Cloud platforms and open standards
For most businesses, the path to the cloud begins in an on-premise data center, which typically runs a mixture of traditional, siloed, enterprise applications and virtualised 'private cloud' workloads. The latter are often based on VMware's proprietary vCloud technology.
When it comes to exploiting public cloud services, either for 'cloudbursting' (offloading load spikes) in a hybrid cloud architecture or for a brand-new deployment, the issue of vendor lock-in rears its head. No one wants to entrust large parts of their IT infrastructure to one vendor's cloud platform, only to find it impossible to migrate to another vendor's cloud should the need arise. This is the impetus behind open-source cloud platforms.
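To make the cloudbursting idea concrete, here is a minimal sketch of the routing policy involved: when on-premise utilisation crosses a threshold, overflow work is sent to a public cloud. The function name and the 80 per cent threshold are hypothetical illustrations, not part of any real platform's API.

```python
# Sketch of a cloudbursting policy. The threshold and names are
# illustrative assumptions, not taken from any actual cloud platform.

BURST_THRESHOLD = 0.80  # burst once private capacity is 80% utilised


def route_request(private_load: float, private_capacity: float) -> str:
    """Return which environment should serve the next request."""
    utilisation = private_load / private_capacity
    if utilisation < BURST_THRESHOLD:
        return "private-cloud"
    # Load spike: overflow to the public provider
    return "public-cloud"


print(route_request(40, 100))  # private-cloud
print(route_request(95, 100))  # public-cloud
```

In practice the same decision is taken by orchestration software against live telemetry, but the principle is the same: the private cloud handles the baseline, and the public cloud absorbs the peaks.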
The best known is OpenStack, a Linux-based open-source software stack with a supporting Foundation that looks after promotion and governance. OpenStack was initiated in 2010 by hosting provider Rackspace and NASA, and now boasts wide industry support — including giants like Cisco, Dell, HP and IBM.
OpenStack in particular attracts a great deal of interest, and is widely positioned as the main rival to VMware (which powers public and hybrid clouds via its service provider partners). Despite these ongoing 'cloud wars', it's worth noting that the biggest public cloud platforms — Amazon Web Services, Google Compute Engine and Microsoft Windows Azure — are all largely proprietary in nature.
An open, fully interoperable set of cloud software platforms and APIs will remove a potential barrier to the wider adoption of cloud technology, allowing more businesses to reap its associated benefits — on-demand self service, rapid scalability and transparent pricing.
Data centers are complex and expensive facilities, and running them efficiently requires multiple skills in the areas of IT infrastructure management, energy management and building management. Little wonder that, as we've seen, businesses are increasingly outsourcing these tasks.
Whether you maintain a traditional on-premise data center, outsource your IT to the public cloud or adopt a hybrid strategy depends on the mix of workloads involved. If you're migrating an existing on-premise workload to the cloud, or looking to handle load spikes, a hybrid cloud solution might be appropriate. If you're deploying a brand-new workload, on the other hand, it could be better to go all-in with the cloud from the start. However, if you're nervous about entrusting sensitive or mission-critical business processes and data to the cloud (for security, compliance or reliability reasons, for example), you may want to keep them under your control in your on-premise data center.
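The decision logic above can be sketched as a simple function. This is purely illustrative — the flags and return values are assumptions chosen to mirror the paragraph, not an established framework.

```python
# Illustrative placement logic: sensitive workloads stay on-premise,
# existing or spiky workloads suit hybrid, and brand-new workloads
# can go all-in on the public cloud. Names are hypothetical.

def place_workload(is_sensitive: bool, is_existing: bool,
                   has_load_spikes: bool) -> str:
    """Suggest where a workload should run."""
    if is_sensitive:
        # Security, compliance or reliability concerns: keep control
        return "on-premise"
    if is_existing or has_load_spikes:
        # Migration path or cloudbursting: mix private and public
        return "hybrid"
    # Brand-new workload with no such constraints
    return "public cloud"


print(place_workload(is_sensitive=True, is_existing=True,
                     has_load_spikes=False))   # on-premise
print(place_workload(is_sensitive=False, is_existing=True,
                     has_load_spikes=False))   # hybrid
print(place_workload(is_sensitive=False, is_existing=False,
                     has_load_spikes=False))   # public cloud
```

Real placement decisions weigh many more factors (cost, latency, data gravity), but the ordering here — sensitivity first, then migration status — reflects the priorities described above.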
The 21st century data center, be it on-premise or outsourced, will increase its efficiency and flexibility through several complementary measures: virtualisation and consolidation throughout the IT stack (server, storage and networking); the increasing use of low-power hardware such as microservers and solid-state storage; modular data center construction; green power supply and cooling technologies; and DCIM software that orchestrates data center management and models future capacity expansion scenarios.
If you make use of any form of outsourcing, remember that the cloud isn't some magical location where everything works perfectly all of the time. Due diligence is required: check service providers' SLAs, find out where your data will reside and how easy it is to move around, discover what security and backup provisions are available, and remain on the lookout for any hidden costs.
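The due-diligence points above can be captured as a simple checklist. The wording of each item below is an illustrative expansion of the article's list, not a formal audit standard.

```python
# A simple due-diligence checklist for evaluating a cloud provider,
# based on the points above. Item wording is illustrative only.

CHECKLIST = [
    "SLA reviewed (uptime guarantees, penalties)",
    "Data residency confirmed (which jurisdiction?)",
    "Data portability tested (export formats, migration effort)",
    "Security provisions documented (encryption, access control)",
    "Backup and recovery provisions documented",
    "Pricing examined for hidden costs (bandwidth, support, egress)",
]


def outstanding_items(completed: set) -> list:
    """Return checklist items not yet signed off."""
    return [item for item in CHECKLIST if item not in completed]


done = {CHECKLIST[0], CHECKLIST[1]}
print(len(outstanding_items(done)))  # 4
```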