Hyper-convergence is the topic of the year in the datacentre - but how can it save you money?
IT teams are often over-stretched and under-budgeted, yet the business demands that they deliver on the technology industry's promise of agile, flexible infrastructure that allows the business to pivot quickly to meet market demands.
The price of this agility has traditionally been a large up-front investment in infrastructure sized to handle peak loads. The costs don't stop there. The three key datacentre systems - servers, storage and networking equipment - need separate specialist teams to manage them, adding not just cost but complexity. Essentially, too much time and effort is expended on day-to-day activities, just keeping the lights on rather than adding real value.
In addition, specialist systems such as mission-critical database servers, tuned and optimised for a single application, jostle for attention with the myriad other devices in the datacentre, each of which requires configuration and management attention.
Keep it simple
The consequence of inflexible architectures and over-specialisation is a struggle to add value. Key to untangling this conundrum is hyper-converged infrastructure, which can provide a fundamental building block for the enterprise cloud. Instead of three separate systems, a hyper-converged system includes all the components in a single, rack-ready box. The system typically presents a hypervisor that can be loaded up with applications, with little or no configuration required.
Running under an OS designed to manage all those systems as a single entity, the 'one box does it all' approach reduces management overheads, removes the need to fine-tune each server, network or storage system to work seamlessly with other elements in the datacentre, and enables scalability in a more cost-effective manner. And as a single system, a hyper-converged box can be managed more easily.
Scalability and resilience
The benefits of scalability derive from the hyper-converged system's stackability. Need more resources? Just buy another box and connect it up. This lets you install only the compute power you need, when you need it, rather than having to anticipate demand up to three years in advance - a forecast that is near-impossible to get right, and both costly and inflexible.
A hyper-converged architecture built of multiple near-identical systems also lends itself to extreme resilience. This is akin to the principle that the web giants use: they deploy commodity components to reduce costs, while accepting that hardware will inevitably fail and designing the system to cope with failure seamlessly.
That level of fault tolerance is likely to include redundant copies of both data and hardware, along with reduced reliance on RAID, which, with today's multi-terabyte drives, creates windows of vulnerability during lengthy rebuild operations. Hyper-converged systems work on a similar principle through stacking, so the failure of any single system is automatically worked around.
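The replication principle can be illustrated with a minimal sketch - not any vendor's implementation, just the general idea that each block of data is placed on more than one node, so losing a node leaves every block readable from a surviving replica. The node names, block names and round-robin placement below are illustrative assumptions.

```python
REPLICAS = 2  # assumed replication factor: each block lives on two distinct nodes


def place_blocks(blocks, nodes, replicas=REPLICAS):
    """Assign each block to `replicas` distinct nodes, round-robin style."""
    return {
        block: [nodes[(i + r) % len(nodes)] for r in range(replicas)]
        for i, block in enumerate(blocks)
    }


def surviving_blocks(placement, failed_nodes):
    """A block survives if at least one of its replicas sits on a healthy node."""
    return [
        block
        for block, replica_nodes in placement.items()
        if any(node not in failed_nodes for node in replica_nodes)
    ]


nodes = ["n0", "n1", "n2", "n3"]            # a four-node hyper-converged stack
blocks = [f"b{i}" for i in range(8)]        # eight data blocks to protect
placement = place_blocks(blocks, nodes)

# Fail a single node: every block still has a replica elsewhere.
print(len(surviving_blocks(placement, {"n1"})))          # → 8

# Fail two nodes and some blocks may have lost both replicas.
print(len(surviving_blocks(placement, {"n1", "n2"})))    # → 6
```

With a replication factor of two, any single-node failure is worked around automatically; surviving a second simultaneous failure would require a higher replication factor, which is exactly the cost/resilience trade-off these systems expose.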
This tight integration of datacentre elements makes for easier management, improved performance from an optimised OS, and a turnkey solution to the problem created by systems diversity.
The business benefits are clear: less time spent operating servers and more time delivering applications to enable an agile business; the ability to provide easy-to-use infrastructure for new projects quickly; and a bridge between the enterprise infrastructure and the public cloud.