A look at a 7,235 Exabyte world

Summary: IDC says byte density will surge and the data storage and business implications will be huge. Here are a few thoughts on the aftermath.


Raw storage capacity, known as byte density, will surge from 2,596 exabytes in 2012 to 7,235 exabytes in 2017, according to research firm IDC.

Aside from buying a bunch of big data and storage stocks, it's a bit unclear what this data surge is going to yield. The business world will either look like an episode of Hoarders or we'll glean some real insights.

IDC argues that if data is going to become insight, organizations will have to prioritize, store, and retrieve information easily. Data such as social data will need to feed new business models.

[Chart: exabyte breakdown. Source: IDC]

Tape and optical storage will be tossed as data moves to the cloud. IDC sees data consolidating into cloud repositories, with information increasingly viewed as a natural resource.

[Chart: exabytes by region. Source: IDC]

It's hard to doubt IDC's data directionally. Data is growing exponentially and has to be stored somewhere. Here are a few thoughts on what IDC's storage and byte density predictions mean:

  1. Data will have to be governed. We have generic privacy laws today, but if data is truly a natural resource like oil and gas it will have to be regulated as such.

  2. Industries will be reordered based on data and analytics. Every company will have broad big data plans, and many of these organizations will fail miserably. In many ways, analytics and data insight systems will become like enterprise resource planning applications a few decades ago: ERP promised to revamp businesses but also produced implementation disasters.

  3. New technologies like Hadoop will be the only way to navigate all of this data. The enterprise tech establishment will be rattled and a new pecking order will emerge.

  4. User experience will matter. The big data companies are fascinating and it's fun to watch queries come back with insight. The issue: the user interface on most big data systems keeps the technology in the hands of a few. Having exabytes of data is one thing. Giving the troops in the field insight is another issue entirely.

  5. Owning the data will be everything. The vendors that capture the most data win. Period.

  6. New business models will emerge. Data analysis and brokering will create entirely new models. Look for Facebook to be a player along with Google.
[Chart: exabytes by country]

Topics: Storage, Big Data, Data Management



  • Good analysis

    Interesting analysis. Just an opinion: analytic tools are overrated; not everybody actually needs them. It's very hard to find something useful in terabytes of data, so analytic tools won't be as popular as ERP systems... again, just an opinion...
    • Not so sure....

      There is some good stuff here but I think it begins to enter the realm of conjecture upon conjecture, and it seems to be somewhat static thinking, maybe not taking account of the fluidity that we are seeing entering system design and function.

      "Tapes... will be tossed.." - well generally yes but they are incredibly dense mediums and don't suffer electro-static decay in the way that offline harddrives can do.

      "...information will be viewed as a natural resource" - Ok are we talking "information" or "raw data"? For the latter, well as a basic example you can pipe a loop to /dev/null all day, there is no way that raw data is going to run out so it's not a natural resource. For the former, for information to be created we require CPU time memory and access to raw data (which we know can be infinite) and *maybe* storage if we want to keep the information we have created. Storage is the only one that is not transiant. So as long as we adhere to the formula which goes something along the lines of...

      "the cost of creating/buying storage dense enough to hold the information is less than or equal to the financial value of having the information"

      ... (which may be the formula the NSA uses to justify its datacentres), then we will always collect more information, and therefore information is not a natural resource. Possibly the only way these could be considered natural resources is that if you chain the dependent parts of their production all the way to the extreme, you end up with electricity, which may indeed rely on a natural resource. Don't get me wrong, though: there are plenty of big companies that would love you to think that data is a natural resource, such as ISPs; that way they could justify charging ridiculous amounts for bandwidth caps! Do ISPs see a yearly cycle of floods and droughts?
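The break-even rule quoted above can be sketched as a one-line comparison. The function name and the dollar figures below are purely illustrative, not from the article:

```python
def worth_storing(monthly_storage_cost: float, information_value: float) -> bool:
    """The comment's rule of thumb: keep data only while the cost of
    storing it is less than or equal to the value of having it."""
    return monthly_storage_cost <= information_value

# Illustrative numbers only: 1,000 GB of logs at $0.02 per GB per month.
cost = 1000 * 0.02                   # $20/month
print(worth_storing(cost, 50.0))     # True  - valued above the cost, keep it
print(worth_storing(cost, 5.0))      # False - not worth the storage bill
```

The interesting part is that storage cost and information value both change over time, so under this rule the same dataset can flip from "keep" to "discard" and back.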

      "Owning the data will be everything. The vendors that capture the most data win. Period." - No, definitely not "Period". That's a silo. And that's a problem. For me this is an example of thinking from my generation or even the one before mine. Do kids these days silo their music or stream it? And when they are older and designing the systems that we are designing today I suspect they will take a different mentality - stream when you need it. If your systems get so bloated that they can't move quick that's a problem. Web 1.0 was all about hypertext, basically just text for human consumption. Web 2.0 was about rich content, video, sound etc, for human consumption. Web 3.0 is said to be "the internet of things" and therefore a network where one object can talk to another object, data available via APIs in a machine readable form - data for machine consumption. So although we could create silos so long as the storage costs equal the benefit of the store, it will make our systems so bloated that we can't turn them and change that quickly, we can't be agile in an era in which we need to be agile. But along side the option to store everything we see more and more options to fetch things via more and more APIs. This brings an era of companies or databases storing one type of data and the option for our systems to subscribe to many sources and create the information we need when we need it. Store the fundamental elements, don't store the compound that is created from the elements and can be recreated from the elements.

      If we all silo up to the max, we will duplicate everything many times and collapse under our own bloat. If we keep the data or information that is key to our businesses and offer/fetch the rest via API, then we can stand on each other's shoulders and progress and prosper. The vendors that create the most understandable information from diverse raw data sources will win. (In my opinion.)
  • I thought I was a geek...

    I thought I was a geek, but I still had to look up the actual size of an exabyte. Were you curious, too? I'm surprised the author didn't make the point...

    "An exabyte is a unit of information equal to one quintillion (10 to the 18th power) bytes, or one billion gigabytes."

    That's all??? :-)
    • That's all, folks!

      Seven thousand billion billion bytes doesn't sound unreasonable. That's about 7,000,000,000,000 one-gigabyte hard drives, or roughly 1,000 such drives apiece for each and every human being currently breathing. (Up from about a third of that in 2012.) If anything, it sounds high.

      I'm sure that the numbers would be a lot lower if the data were de-duplicated.
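The arithmetic above checks out. A quick back-of-the-envelope run, using decimal units (1 GB = 10^9 bytes) and a rough 2013 world population of 7 billion:

```python
total_bytes = 7235 * 10**18          # IDC's 7,235 exabytes, decimal units
one_gb_drives = total_bytes // 10**9  # how many 1 GB drives that fills
people = 7 * 10**9                   # rough 2013 world population

print(one_gb_drives)                 # 7235000000000 - about 7.2 trillion drives
print(one_gb_drives / people)        # ~1033.6 - roughly 1,000 drives apiece
```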
      rocket ride
      • I regret..........

        that I have but a few bits to give to my country.
  • Lots of duplicate data

    Remove all the duplicate "adult" data, and we could probably store all the good data on a few floppy disks.
  • But what do the cloud storage providers use?

    The article says "Tape and optical storage will be tossed as data moves to the cloud." - okay, the end user (business or individual) stops making backups to tape or optical, but the cloud host has to put the data somewhere. Just from an ecological viewpoint, I really hope they aren't keeping backups that are never touched on active, always-powered online media...