Cloud customers are still paying for twice as much as they need

Summary: "Over-provisioning" isn't dead; it has just moved to the cloud. Large customers are using only half of the capacity they've paid for, according to an independent survey undertaken for ElasticHosts.


An independent survey of 200 UK-based CIOs has revealed that they are using only about half of the cloud capacity they've bought and paid for, and that 90 percent of them see over-provisioning as a necessary evil.


Cloud provider ElasticHosts, which commissioned the survey, says: "Essentially, bad habits like over-provisioning and sacrificing peak performance are being carried from the on-premise world into the cloud, partly because people are willing to accept these limitations."

In other words, CIOs who have used over-provisioning to handle peak demand in their server rooms are doing the same thing in the cloud, where it shouldn't be necessary — assuming, that is, the cloud provider can handle the peak demand from multiple customers, even if these peaks coincide at lunchtime.
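
To see why this matters financially, here is a toy calculation comparing paying for peak capacity around the clock against paying only for what is used. All numbers are illustrative assumptions, not figures from the survey:

```python
# Toy comparison of capacity-based vs consumption-based billing.
# The rate and demand profile below are made up for illustration.

PRICE_PER_SERVER_HOUR = 0.10  # hypothetical hourly rate per server


def fixed_capacity_cost(peak_servers, hours):
    """Pay for peak capacity all day (the over-provisioning model)."""
    return peak_servers * hours * PRICE_PER_SERVER_HOUR


def usage_based_cost(demand_profile):
    """Pay only for the servers actually used in each hour."""
    return sum(demand_profile) * PRICE_PER_SERVER_HOUR


# A 24-hour demand profile: quiet overnight, a lunchtime peak of 20 servers.
demand = [5] * 8 + [12] * 4 + [20] * 2 + [12] * 6 + [5] * 4

fixed = fixed_capacity_cost(max(demand), len(demand))
usage = usage_based_cost(demand)
print(f"fixed: ${fixed:.2f}, usage-based: ${usage:.2f}")
print(f"utilisation: {usage / fixed:.0%}")
```

With this made-up profile, provisioning for the lunchtime peak costs $48.00 while actual usage is worth $22.00, a utilisation of about 46 percent: roughly the "half the capacity they've paid for" the survey describes.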

Further, 88 percent of these CIOs admitted they sacrificed some performance at peak times in order to control costs.

The survey was conducted online by Vanson Bourne. It surveyed 100 CIOs from organisations with 1,000-3,000 employees and 100 from organisations with more than 3,000 employees. Of the respondents, 50 were from financial services, 50 from manufacturing, 50 from retail, distribution and transport, and 50 from other commercial sectors.

A major part of the problem is that most companies are managing their cloud services manually, which adds to the cost. According to the survey, only 14 percent have automated the process. Automation could reduce the amount of over-provisioning required and thus cut costs.
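
The kind of automation meant here can be sketched in a few lines: resize capacity to track measured load rather than hand-provisioning for peak. Everything below is a hypothetical illustration; `FakeProvider`, the `HEADROOM` factor and the sampling loop are assumptions, not any real provider's API:

```python
# Sketch of automated capacity management: track measured load with a
# headroom margin instead of permanently provisioning for peak demand.
import math

HEADROOM = 1.25   # keep 25% spare capacity above current load (assumed policy)
MIN_SERVERS = 2   # never scale below a safe floor


def target_capacity(current_load):
    """Servers needed for the load plus headroom, never below the floor."""
    return max(MIN_SERVERS, math.ceil(current_load * HEADROOM))


class FakeProvider:
    """Stand-in for a cloud provider's capacity API (hypothetical)."""

    def __init__(self):
        self.capacity = MIN_SERVERS

    def set_capacity(self, n):
        self.capacity = n


def autoscale(provider, load_samples):
    """Adjust provisioned capacity after each load sample."""
    history = []
    for load in load_samples:
        provider.set_capacity(target_capacity(load))
        history.append(provider.capacity)
    return history


print(autoscale(FakeProvider(), [4, 10, 20, 6]))  # [5, 13, 25, 8]
```

Instead of paying for 25 servers all day, the sketch holds capacity just above whatever the current load is, which is exactly the saving the survey says 86 percent of organisations are leaving on the table.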

Richard Davies. Photo: ElasticHosts

Richard Davies, ElasticHosts' CEO, says: "This research really highlights the shortcomings of the pay-by-capacity billing model."

ElasticHosts sells Linux IaaS (Infrastructure as a Service) in which customers can be billed separately for on-demand memory, processor and disk use, as well as the bandwidth they consume. It can scale these very quickly because it provides Linux containers rather than virtual machines (VMs). Several customers may share a VM, though each company can only see its own partition.

This could be dismissed as ElasticHosts marketing. However, as I pointed out in an earlier story, the increasing support for LXC containers in Linux distros and the growing popularity of Docker suggest that containerization will become widespread. (See: Next-gen cloud services could save users almost $2 billion a year)

Davies says: "As the research shows, and as half of respondents recognised, cloud as we have it today really isn’t truly elastic — it does not expand and retract automatically to meet demands, and it is not paid for like a utility, based on consumption. However, with next-generation cloud and containerisation technology, change is afoot.

"Our question to organisations is: why would you ever choose to pay for capacity you aren’t using, sacrifice performance, and eat up your systems administrator’s time, when you don’t need to? Soon companies will be asking themselves the same thing, which will mark the end of capacity-based billing."

Bar chart: 90 percent of CIOs see over-provisioning as a necessary evil (green bars), though some industry sectors are better than others. Credit: ElasticHosts


Topics: Cloud, Linux, Enterprise 2.0

About Jack Schofield

Jack Schofield spent the 1970s editing photography magazines before becoming editor of an early UK computer magazine, Practical Computing. In 1983, he started writing a weekly computer column for the Guardian, and joined the staff to launch the newspaper's weekly computer supplement in 1985. This section launched the Guardian’s first website and, in 2001, its first real blog. When the printed section was dropped after 25 years and a couple of reincarnations, he felt it was time for a change....



  • Yep

    As much as I like Public Cloud, over-provisioning is only one little nasty area that CIOs need to deal with. In my experience, ALL of the cost calculators are error-prone, sometimes to a great degree, which makes it very hard to predict the real cost savings, if any, from using Public Cloud. Price/performance and productivity are other areas of concern. Ever waited 20 minutes to spin up a virtual machine? Not a lot of productivity there, when I could use something like a CriKit and have a virtual machine in seconds - on my desk. I have used AWS, Azure, Rackspace, HP, Digital Ocean, Profit Bricks, Linode, Century Link and IBM Blue Mix. Blue Mix seems to smoke them all in startup speed, and some of them are abysmal in comparison. No need to detail security issues, because that is on fire at the moment. To net this out, Public Cloud has a long way to go to be truly considered secure, reliable and predictably cost-effective. Don't shoot the messenger.
    Cloud Guy
  • The Cloud is being pushed

    Because if they can make it work, it's like printing money!
  • Interesting

    The real question is this: are large customers finally meeting their SLAs (service level agreements), or are they, as the article says, "only using half of the capacities they've paid for"?

    To some extent it is a question of the value of the transaction. Say, for example, it is mission-critical, or hinges on getting the right answer fast: I want to know the drug interactions for a patient at noon.

    Like it or not, the industry is still evolving. Given the circumstance where a CEO asks the CIO, "Hey, we're bringing this new hospital into our system; what is the quality-of-care impact?", I would imagine that is a case where the SLA is warranted, and the tweak-and-peak analysis, though a cost concern, is not the only factor in the decision.

    Finally, I bet there are IT departments that wish at Christmas they had 2x the amount of rack space and clean power to scale, but they don't.

    Give it another generation or so, and once there is a ubiquitous solution, this discussion boils down to the cost discussions of, say, live data versus tape data.