Mammoth growth in storage volumes is a fact of life, but even so it's helpful to pause occasionally and try to work out whether our information strategies have fallen hopelessly out of step with the pace of technological growth and shifting costs.
I was reminded of this during the week while hanging out at the Hitachi Data Systems SAN Technology Centre in Odawara and talking to Hubert Yoshida, HDS' chief technology officer. Yoshida predicts that storage volumes are about to cross an important threshold.
"We expect that within the next three to five years we will have a customer with an exabyte of data," Yoshida said. "Today we have customers with close to 100 petabytes."
What can you realistically do to manage a million terabytes or more of information (or 100 petabytes for that matter)? Whatever the answer, it's unlikely to be found in current systems.
"Even though in storage we have been doubling in capacity almost every year, the basic architecture of storage systems is in many cases 20 years old," Yoshida said. That approach won't be sustainable in the exabyte era, he suggests: "It requires a new approach and architecture. We have to have a fundamental change in the way we do architectures and implement technologies. It's not just a matter of getting bigger disks, we have to change the way they work together."
A critical area for improvement is interconnect speeds. "As we move towards exabytes of storage, we can't afford to move every byte across the network," Yoshida said. As well as faster pipes, intelligent processing (such as de-duplication and compression) can help shrink the volume of data that needs to move in the first place.
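To see why de-duplication cuts down on what crosses the network, here's a minimal sketch of the core idea in Python. This is an illustrative toy, not a description of HDS' actual technology: the chunk size, function names and fixed-size chunking scheme are all assumptions for the example.

```python
import hashlib

def dedup_chunks(data: bytes, chunk_size: int = 4096):
    """Split data into fixed-size chunks and keep one copy of each
    distinct chunk. Identical blocks are stored (or transmitted) once;
    a 'recipe' of hashes records how to rebuild the original."""
    store = {}    # chunk digest -> chunk bytes (each unique chunk kept once)
    recipe = []   # ordered digests needed to reconstruct the data
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)
        recipe.append(digest)
    return store, recipe

def reconstruct(store, recipe):
    """Rebuild the original byte stream from the chunk store and recipe."""
    return b"".join(store[d] for d in recipe)

# Highly redundant data: 100 copies of the same 4 KiB block
# collapse to a single stored chunk plus a list of references.
data = b"x" * 4096 * 100
store, recipe = dedup_chunks(data)
assert reconstruct(store, recipe) == data
```

On redundant data like backups or virtual machine images, only the unique chunks (plus the small recipe) ever need to traverse the network, which is exactly the kind of saving Yoshida is pointing at.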
In the short term, one useful strategy is to focus less on three-year storage capacity plans, Yoshida suggested. "Instead of buying all this big storage today, buy what you need as you need it — because capacity always gets cheaper."
Angus Kidman travelled to Japan as a guest of HDS.