When you think of the cloud, whether private or public, one of the key advantages that comes to mind is speed of deployment.
All businesses crave the ability to simply go to a service portal, define their infrastructure requirements and immediately have a platform ready for their new application.
Coupled with that, you instantly have service level agreements that centre on uptime and availability. So the cloud provides an undeniable opportunity for businesses to procure infrastructure as a service and focus on delivering their key applications.
But while the industry's understanding of cloud computing and its benefits has matured, so too has the realisation that what's currently on offer still isn't good enough for mission critical applications.
There is still a need for a more focused and refined understanding of what the service level agreements should be, and ultimately a more concerted approach towards the applications themselves.
So while concepts such as speed, agility and flexibility remain synonymous with cloud computing, its success and maturity ultimately depend upon a new focal point — namely velocity.
Velocity is distinct from speed in that it's not just a measure of how fast an object travels, but also in which direction that object moves.
In a public cloud no one can dispute the speed. With just a few clicks you have a ready-made server that can immediately be used for testing and development purposes.
But while it may be quick to deploy, how optimised is it for your particular environment, business or application requirements? With only generic order forms, the specific customisation needed for a particular workload may be sacrificed for the sake of speed.
Service levels based on uptime and availability are not an adequate measure or guarantee for the successful deployment of an application. It would be ludicrous to purchase a laptop on a contract that merely stipulates it will remain powered on even though it performs atrociously.
In the private cloud or traditional IT, while the speed of deployment is not as quick as that of a public cloud, there are other ways in which speed is failing to produce the results required by a maturing cloud market.
Multiple infrastructure silos can constantly be seen hurrying around, busily firefighting and maintaining a "keeping the lights on" culture, all at rapid speed.
Yet while the focus should be on the applications that need to be delivered, the quagmire of the underlying infrastructure persistently takes precedence, with IT admins constantly having to deal with interoperability issues, firmware upgrades, patches and the multiple management panes of numerous components.
Moreover, service offerings such as Gold, Silver, Bronze or Platinum are more often than not centred around infrastructure metrics such as the number of vCPUs, storage RAID type and memory, instead of application response times that are predictable and scalable to the end user's stipulated demands.
For cloud to embrace the concept of velocity, the consequence would be a focused and rigorous approach directed solely at the successful deployment of applications, which in turn enables the business to quickly generate revenue.
This approach would also entail a focused methodology for application optimisation and, consequently, a service level that measures and targets success based on application performance as opposed to just uptime and availability.
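To make the distinction concrete, a service level of this kind might be judged on a response-time percentile rather than on uptime. The sketch below is purely illustrative — the threshold, percentile and function names are hypothetical assumptions, not drawn from any vendor's actual offering:

```python
# Illustrative sketch: evaluating an SLA on application response time
# rather than uptime. All thresholds and sample data are hypothetical.

def percentile(samples, pct):
    """Return the pct-th percentile of a list of samples (nearest-rank)."""
    ordered = sorted(samples)
    index = max(0, int(round(pct / 100.0 * len(ordered))) - 1)
    return ordered[index]

def sla_met(response_times_ms, target_ms=200, pct=95):
    """True if the pct-th percentile response time is within the target."""
    return percentile(response_times_ms, pct) <= target_ms

# Example: most requests are fast, but a few slow outliers
# push the 95th percentile well past the 200 ms target.
samples = [120, 130, 140, 150, 160, 170, 180, 190, 450, 900]
print(sla_met(samples))  # → False
```

A server that is "up" the entire month would pass a traditional availability SLA while still failing this check — which is exactly the gap between uptime-based and application-performance-based service levels described above.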
With more businesses looking to place more critical and demanding applications into their private cloud, they need an assurance of application response times that is almost impossible to guarantee on a mixed-workload infrastructure.
As the cloud market matures, so do its expectations with regard to application delivery and performance, and such generic procedures and practices will only remain suitable for certain markets and workloads.
So for velocity to take precedence within the private cloud, the public cloud or even the Infrastructure as a Service model, and to fill this cloud maturity void, infrastructure needs to be delivered with applications as its focal point.
That consequently means a pre-integrated, pre-validated, pre-installed and application-certified appliance that is standardised as a product and optimised to meet scalable demands and performance requirements.
This is why the industry will soon start to see the emergence of a new breed of specialised systems, designed and built from inception to optimise the performance of specific application workloads.
By having applications pre-installed, certified and configured, with the application and infrastructure vendors working in concert, it becomes far more feasible for private clouds or service providers to predict, meet and propose application-performance-based service levels.
This entails a converged infrastructure that rolls out as a single product and consequently has a single upgrade matrix for all of its component patches and firmware upgrades, one that also corresponds with the application. Additionally, it encompasses a single support model that covers not only the infrastructure but also the application.
This in turn not only eliminates vendor finger-pointing and prolonged troubleshooting but also acts as an assurance that responsibility for the application's performance is paramount, regardless of the potential cause of a problem.
The demand for key applications to be monitored, optimised and rolled out with speed and velocity will confront not only service providers and private cloud deployments but also internal IT departments struggling with their day-to-day firefighting exercises.
To ensure success, IT admins will need a new breed of infrastructure, or specialised systems, that enables them to focus on delivering, optimising and managing the applications, without needing to worry about the infrastructure that supports them.