Paid Content : This paid content was written and produced by RV Studios of Red Ventures' marketing unit in collaboration with the sponsor and is not part of ZDNET's Editorial Content.

Slashing storage costs with cloud

A look at how to arrange the best storage deals, and manage 'hot' and 'cold' data
Time was when you could calculate storage costs on the back of an envelope, using the simple equation of price times GB. Storage systems have evolved considerably since then, however, sprouting new features and technologies such as tiering, flash caching, thin provisioning, and deduplication. The list goes on, but all of them make the calculation of value more complex.

Making that same calculation for cloud storage providers can and should be considerably less complex. When the IT department charges internal clients, a simple metric makes costs easy to understand, and reduces the likelihood that over-complexity will drive the growth of shadow IT operations in other parts of the business.
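The simple chargeback metric described above can be sketched as a flat price per GB per month, per tier. The tier names and rates below are purely illustrative assumptions, not any provider's actual pricing:

```python
# A minimal sketch of a flat internal chargeback metric: one price per GB per
# month per tier, so internal clients can predict their bill at a glance.
# The rates are hypothetical, for illustration only.

RATE_PER_GB_MONTH = {"hot": 0.10, "warm": 0.03, "cold": 0.004}  # USD, assumed

def monthly_charge(gb: float, tier: str) -> float:
    """Return the monthly charge for `gb` gigabytes stored on `tier`."""
    return gb * RATE_PER_GB_MONTH[tier]

# Example: 500 GB of production data plus 5,000 GB of archive.
bill = monthly_charge(500, "hot") + monthly_charge(5000, "cold")
print(f"Monthly charge: ${bill:.2f}")  # 500*0.10 + 5000*0.004 = $70.00
```

Because the metric is a single multiplication per tier, an internal client can verify their own bill without understanding the provider's underlying technology.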

One key to procuring storage is ensuring that your desired outcomes, in terms of features and storage system design, are clearly articulated at the start of the buying process.

When evaluating storage services, you need to ensure that you're comparing apples with apples, and measuring capacities using the same metrics: for example, raw GB without deduplication, compression, or other data manipulations. By the same token, it's essential to start with the metrics that are important to you. For example, if either performance or capacity is your overriding concern, try to compare benchmarks or capacities across different technologies, such as spinning disks vs SSDs.
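Normalising quotes to a common metric, such as price per raw GB, makes the comparison concrete. The sketch below assumes a vendor quoting "effective" capacity inflated by a claimed data-reduction ratio; all figures are invented for illustration:

```python
# A sketch of normalising vendor quotes to price per raw GB, so capacities
# inflated by deduplication/compression claims can be compared fairly.
# Vendors, prices, and ratios below are hypothetical.

def price_per_raw_gb(price: float, quoted_gb: float,
                     data_reduction_ratio: float = 1.0) -> float:
    """quoted_gb may be 'effective' capacity; divide by the claimed
    data-reduction ratio to recover the raw capacity actually supplied."""
    raw_gb = quoted_gb / data_reduction_ratio
    return price / raw_gb

# Vendor A quotes raw capacity; Vendor B quotes effective capacity at 4:1.
vendor_a = price_per_raw_gb(price=10_000, quoted_gb=100_000)
vendor_b = price_per_raw_gb(price=8_000, quoted_gb=200_000,
                            data_reduction_ratio=4.0)
print(f"A: ${vendor_a:.3f}/raw GB, B: ${vendor_b:.3f}/raw GB")
# A: $0.100/raw GB; B: 8,000 / 50,000 = $0.160/raw GB
```

On effective capacity Vendor B looks half the price of Vendor A, but on raw GB it is more expensive; its quote only wins if the claimed 4:1 reduction actually holds for your data.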

You also need to consider what types of data you plan to store, weighing the trade-off of cost versus access times. Cold data, which is rarely accessed and considered archival, is best stored on lower-cost media such as tape or inactive disks. Access times might be minutes or even hours, but for very occasional access this is not usually a problem.

Warmer data, for example data backed up in the last one to six months, would be a candidate for low-cost SATA storage, with access times measured in seconds. Live data from production processes is considered hot, and needs to live on the medium with the fastest access times available. In this scenario, a storage provider's charges will naturally scale with access speed.
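The hot/warm/cold classification above can be sketched by keying each dataset on its last access time. The thresholds, and the simplification of using last access alone, are assumptions for illustration:

```python
# A sketch of the hot/warm/cold tiering described above, assigning data to a
# tier by how recently it was accessed. The 30-day and 180-day thresholds
# (and keying purely on last access) are simplifying assumptions.

from datetime import datetime, timedelta
from typing import Optional

def classify(last_accessed: datetime, now: Optional[datetime] = None) -> str:
    """Hot: accessed within a month; warm: one to six months; cold: older."""
    now = now or datetime.now()
    age = now - last_accessed
    if age <= timedelta(days=30):
        return "hot"    # live/production data: fastest media (flash/SSD)
    if age <= timedelta(days=180):
        return "warm"   # recent backups: low-cost SATA, seconds to access
    return "cold"       # archive: tape or inactive disk, minutes to hours

now = datetime(2024, 6, 1)
print(classify(datetime(2024, 5, 20), now))  # hot
print(classify(datetime(2024, 2, 1), now))   # warm
print(classify(datetime(2023, 1, 1), now))   # cold
```

A real lifecycle policy would also weigh retrieval charges and compliance retention periods, but the core decision is this age-versus-cost rule.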

Over time, however, because of economies of scale and the provider's ability to spread costs across multiple customers, cloud storage costs are highly likely to fall compared with the cost of building and maintaining your own storage infrastructure. Given the rates of data growth now being experienced, it is probable that in the near future maintaining your own SAN will, except in unusual circumstances, be the exception rather than the norm.
