
Tera-architecture. Gartner says bank on it.

Written by David Berlind

If you believe what Gartner vice president and research fellow Martin Reynolds has to say, then we're heading for a trainwreck unless we figure out how to get the rapidly growing number of connected devices coming online in the next few years to operate with practically no human intervention. By "operate," he's not only referring to their ability to run, but also to their ability to connect, to assimilate into the right network, to self-manage and self-configure, and to automatically report whatever data they have (that needs reporting) to whatever needs to collect it. Reynolds refers to this giant morass of infrastructure with which humans must have minimal contact as the tera-infrastructure. Said Reynolds of the tera-infrastructure today:

Connected devices are going to grow at an exponential rate and it's starting now. The wireless toll passes in your cars. Contactless payment systems in your wallets; even your employee badges that let you into buildings with a wireless technology. These are all tiny connected wireless computers. Your businesses are going to ask you to capture information from these networks and bring it back for business value. But there are two challenges. First, we have to be able to manage and support this infrastructure of billions of tiny devices. We aren't going to do that one at a time. We cannot deploy human resources to make this work. These devices have to be self-aware, self-managing, self-configuring and self-connected with no labor required. Without that, we'll never be able to afford to deploy them.

The problem, according to Reynolds, is that even if we get to that point, the world is ill-equipped to handle the load that follows. One issue is that investment in IT and investment in human resources are out of alignment, with the two delivering returns that move in opposite directions.

The second challenge is that now, our infrastructure must support this incredible number of transactions coming back. Moore's Law and multi-core processors tell us that we'll have enough equipment to handle this task. But there's a budget challenge. Economic data tells us that a dollar spent on IT infrastructure in 1996 gives us about $5 worth of stuff today. The message is that we can have more infrastructure without expanding our capital budgets. On the other hand, that same dollar spent on labor only gets 80 cents worth of output today. It's inflation. So, we have two trends. Our stuff is going up. And our people are going down. It says that even if you get in balance, in the following year, you'll have too much stuff, and too few people. We can't live with this. Many of you fight this challenge today... We need our IT infrastructure to be built from granular components that self-assemble into a functional network: that perform the tasks we ask of them without management.
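To make the arithmetic behind those figures concrete, here's a minimal sketch (in Python, which neither Reynolds nor the article uses) of the compound annual rates implied by his numbers. The ten-year horizon from 1996 to the time of the talk is an assumption; Reynolds only says "today."

```python
# Rough sketch of the compounding behind Reynolds' figures.
# Assumption: the comparison spans roughly ten years (1996 to "today").

def implied_annual_rate(start_value: float, end_value: float, years: int) -> float:
    """Compound annual growth rate that turns start_value into end_value."""
    return (end_value / start_value) ** (1 / years) - 1

years = 10
it_rate = implied_annual_rate(1.00, 5.00, years)     # $1 of IT buys ~$5 of capacity today
labor_rate = implied_annual_rate(1.00, 0.80, years)  # $1 of labor buys ~$0.80 of output today

print(f"IT price-performance: {it_rate:+.1%} per year")        # roughly +17% per year
print(f"Labor output per dollar: {labor_rate:+.1%} per year")  # roughly -2% per year

# Even a budget that starts perfectly balanced drifts every year:
# the capacity the same dollars buy grows while the labor they buy shrinks.
```

The point of the exercise is simply that the two curves compound in opposite directions, which is why Reynolds argues you can't rebalance your way out of the problem with headcount.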

Implying a commodity-like nature to these granular, self-managing components, Reynolds seemed to theorize that less time and money need be spent keeping some bigger, all-encompassing system both highly available (as in five-nines availability) and sufficiently powered, both of which are expensive propositions. If a granular, more focused component fails, you just replace it. This reminds me of the idea behind Appistry's hive computing model, in which extremely commodity-like PCs -- ones you can buy at Wal-Mart -- assimilate themselves into a hive of computers that collaborate on nearly supercomputer-sized tasks. If a computer fails, much the same way a beehive easily accommodates the death of a worker bee, the hive of computers accommodates the loss of a machine, or the addition of a new one, without human intervention. According to Reynolds, the reduction in associated cost can lead to big benefits:

When this happens, something else changes. Because now, like the Internet, failure doesn't matter. I can start pulling out the cost and power that today [are used] to maintain very high levels of uptime, and instead switch my investment to commodity-like hardware. That's going to give us the possibility to reduce the cost of computing by a factor of 10 and truly scale our infrastructure. This is a tera-architecture.
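To see why "failure doesn't matter" in a hive-style pool of cheap machines, here's a toy sketch. It is not Appistry's actual API and every name in it is invented; it just shows the shape of the idea: work is spread across interchangeable nodes, and when one dies its task is retried on whatever remains.

```python
import random

# Toy illustration (hypothetical names throughout) of fault tolerance through
# redundancy: dead nodes are shed from the hive and their work is re-queued.

def run_on_hive(tasks, nodes, failure_rate=0.2):
    results = {}
    pending = list(tasks)
    live = list(nodes)
    while pending and live:
        task = pending.pop()
        node = random.choice(live)
        if random.random() < failure_rate:
            live.remove(node)      # the hive sheds the dead node...
            pending.append(task)   # ...and the task goes back in the queue
        else:
            results[task] = node   # task completed on a surviving node
    return results

nodes = [f"node-{i}" for i in range(10)]   # cheap, interchangeable machines
tasks = [f"task-{i}" for i in range(25)]
done = run_on_hive(tasks, nodes)
print(f"{len(done)} of {len(tasks)} tasks completed despite node failures")
```

No operator steps in when a node dies; the pool simply keeps working with whatever is left, which is exactly the property that lets you trade five-nines hardware for disposable commodity boxes.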

In the same breath, however, Reynolds seemed to imply bundles of hardware and software that are more tightly bound to each other than is typical today. I've been hearing a lot about this concept lately -- the marriage of hardware to software. I can't help but wonder whether keeping hardware and software as distinctly separate layers in the overall solution stack is the better approach, because if you don't, you lose the commodity benefits that result from abstraction between stack layers (where standards often play a role). Reynolds' example was Google.

Google uses commodity hardware to build its amazing infrastructure of resources that deliver the Google applications. But those tasks -- the Google applications -- will only work on the Google infrastructure. This is going to change the way that we write software. The challenge for IT is: can your IT support a transactional load that's 10 times or 1,000 times what it is today? Because that is what the tera-architecture will do.
