Many businesses are experimenting with cloud-based services that offer plenty of storage. But scratch the surface of the cloud and you find enterprise computing and enterprise storage. It may be architected differently, and it is a different expenditure model, but as far as Amazon, Google and Microsoft are concerned, their customers are renting enterprise facilities at a distance.
Despite the attractions of today's cloud services, most enterprises, faced with limited network bandwidth, no strong security model and ill-defined storage performance, are going to keep their mission-critical data in-house rather than in-cloud for the moment.
So what will it take to build a credible enterprise storage cloud, and what should enterprises be doing in the meantime?
Several storage-related issues have received a lot of coverage recently, but for various reasons should prove less troublesome than expected.
The threat of power-induced meltdown on enterprise storage strategies has been widely broadcast for years now. But before you recast your storage architecture on an ultra-low-power model, consider the fact that, according to research company IDC, power requirements are going to level off by 2013 or 2014.
The reason, says IDC, is that recent hard times persuaded everyone to take cost savings seriously — particularly in using storage more efficiently.
Meanwhile, 2.5in. hard disks and solid-state drives (SSDs) use a lot less power, and virtualisation continues to push power requirements down as ever more capable mainstream chips run more tasks for the same or less wattage.
Eventually, the analysts say, the pause in power consumption growth will end and the numbers will edge up again. By then, towards the end of the decade, faster and cheaper networking will make it easier to move data swiftly around the world to wherever power is cheapest, following the sun or the wind, and advances in materials science promise to kick in and cool things down.
There have been dire forecasts that hard disk storage densities have peaked. The last big breakthrough was the discovery of giant magnetoresistance in the late 1980s, which pushed hard disk storage densities up dramatically. But drives are now approaching the limit of around 1 terabit per square inch, which could stall further density growth by 2013.
Not so, according to the recently formed Advanced Storage Technology Consortium, which includes Hitachi GST, Marvell, Seagate, Western Digital, Xyratex, LSI, Texas Instruments and Fuji Electric. Three emerging technologies — Shingled Write Recording (SWR), Heat-Assisted Magnetic Recording (HAMR) and Bit-Patterned Media (BPM) — are looking good to get past that barrier. All need work, but the combination of these three technologies should, the makers say, produce a 40TB hard disk by 2014 or 2015.
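To put the 1 terabit per square inch figure in context, a rough calculation shows why a 40TB drive needs these new recording techniques. The platter geometry below (radii of the usable recording band, platter count) is an illustrative assumption, not a vendor specification:

```python
# Back-of-envelope: capacity of a 3.5in drive at 1 Tbit/sq in, and the
# areal density a 40TB drive would imply. Geometry figures are assumed
# for illustration, not taken from any manufacturer's datasheet.
import math

TB_BITS = 8e12                 # bits in a decimal terabyte
outer_r, inner_r = 1.8, 0.6    # usable recording band radii, inches (assumed)
surfaces = 10                  # five platters, two sides each (assumed)

area = math.pi * (outer_r**2 - inner_r**2) * surfaces  # total sq inches
capacity_at_1tbit_tb = area * 1e12 / TB_BITS           # TB at 1 Tbit/sq in
density_needed = 40 * TB_BITS / area / 1e12            # Tbit/sq in for 40TB
```

Under these assumptions, 1 Tbit/sq in yields only around 11TB per drive, and a 40TB drive would need roughly 3.5 Tbit/sq in — well past the barrier, which is why the consortium is betting on SWR, HAMR and BPM rather than incremental refinement.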
The continued existence of magnetic tape seems to annoy those people who see it as a 60s throwback and unworthy of the modern world. Yet nothing can touch it for cost-effective, power-friendly, scalable and secure secondary data storage.
Tape has error levels orders of magnitude lower than the disks it backs up. It also has a newer open standard, LTO (Linear Tape-Open), to go up against Quantum's DLT and Sony's AIT.
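The "orders of magnitude" point is easy to quantify. The bit error rates below are commonly quoted orders of magnitude for each class of device, used here as illustrative assumptions rather than figures from this article or any specific product:

```python
# Expected unrecoverable read errors when reading back 1PB, for
# commonly quoted (assumed, order-of-magnitude) bit error rates.
PB_BITS = 8e15  # bits in a decimal petabyte

ber = {
    "desktop disk": 1e-14,     # ~1 unrecoverable error per 10^14 bits
    "enterprise disk": 1e-15,  # ~1 per 10^15 bits
    "LTO tape": 1e-17,         # ~1 per 10^17 bits
}
expected_errors = {name: PB_BITS * rate for name, rate in ber.items()}
```

On those assumptions, reading a petabyte off desktop-class disk invites tens of unrecoverable errors, while the same read from LTO tape expects well under one.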
Tape's role is changing. With deduplication reducing stored data by ratios of up to 20:1, backing up data to disk has become more popular, especially as restores can be a lot swifter. However, tape continues to be particularly well suited to off-site archives, and is reaping the benefit of an open standard helping to drive down prices.
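Where those dedup ratios come from is simple in principle: split the stream into chunks and store each distinct chunk only once, keyed by its hash. The sketch below uses fixed-size chunks for clarity; real products use smarter, variable-size chunking and this is illustrative only:

```python
# Minimal sketch of content-based deduplication with fixed-size chunks.
# Identical chunks (e.g. unchanged files in repeated backups) are stored
# once, so highly repetitive backup streams shrink dramatically.
import hashlib

def dedup_ratio(data: bytes, chunk_size: int = 4096) -> float:
    """Ratio of raw size to deduplicated size under fixed-size chunking."""
    store = {}  # chunk hash -> chunk contents
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        store[hashlib.sha256(chunk).hexdigest()] = chunk
    stored = sum(len(c) for c in store.values())
    return len(data) / stored if stored else 1.0

# Twenty identical 4KB chunks deduplicate to a single stored chunk:
backups = b"A" * 4096 * 20
```

Here `dedup_ratio(backups)` comes out at exactly 20:1, which is the sort of figure quoted for backup workloads where most data is unchanged between runs; unique data, by contrast, dedupes barely at all.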
A combination disk and tape backup regime will work for a lot of people for some time yet. You'll need a hierarchical storage manager, but the maths remains true: tape is the best way to keep a lot of data very safe.
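At its core, a hierarchical storage manager applies a placement policy of the kind sketched below. The 90-day threshold and the two-tier model are invented for illustration; real HSM products weigh access patterns, cost and service levels far more carefully:

```python
# Toy hierarchical-storage-management policy: recently accessed data
# stays on disk for fast restore; cold data migrates to tape, the
# cheaper and more power-friendly tier. Thresholds are assumptions.
from datetime import datetime, timedelta

def assign_tier(last_access: datetime, now: datetime,
                disk_window_days: int = 90) -> str:
    """Return the tier a file belongs on under a simple age-based policy."""
    if now - last_access <= timedelta(days=disk_window_days):
        return "disk"   # hot/warm data: quick to restore
    return "tape"       # cold data: cheap, scalable archive
```

A periodic sweep applying such a rule, plus a catalogue mapping files to tape cartridges, is the essence of the disk-plus-tape regime described above.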