The edge-centric Internet

Summary: Why isn't the edge of the network the center? More compute and storage gets shipped to consumers than to Internet data centers, so consumers should help power the Internet.

Why isn't the edge of the network the center? More compute and storage gets shipped to consumers than to Internet data centers, so consumers should help power the Internet. And get paid for it.

P2P vs Internet data centers

A new project, spearheaded by National ICT Australia, is now part of a €50 billion program to investigate innovative technologies. The project is called Nano Data Centers (NADA) and is part of the future Internet initiative.

The goal: learn how to deliver data from the edge of the network instead of from costly, power-gobbling Internet data centers.

Consumer driven to consumer powered

Google is famous for using consumer-grade technology to build the world's largest and most powerful data centers. This project asks "why don't we use the technology that consumers are already buying to build virtual data centers?"

We already do. The Folding@home project already uses the power of PS3s and PCs to perform computationally costly research. P2P file sharing already spreads bandwidth demands among many machines.

The technology is there.

The business model

ISPs could package up unused storage and CPU cycles from their subscribers and sell them to major content and service providers. The ISP could charge a third less than those resources would cost in a data center, keep half the money for itself and give the other half back to us.

At my house we currently have close to a terabyte of unused disk and 7 cores that are typically no busier than any other PC's. And that is without a PS3.

Amazon charges about $0.15 a gigabyte for storage space. If my ISP sold it for $0.10 per gigabyte and gave me a nickel of that, I'd be getting $50 a month just for storage. That would just about cover my broadband connection.
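
That arithmetic is easy to check. Here is a quick sketch using the rates assumed above; these are my illustrative numbers, not real prices:

    # Back-of-the-envelope payout for the ISP resale model sketched above.
    # All rates are this article's assumptions, not actual prices.
    AMAZON_RATE = 0.15   # $/GB, Amazon's approximate storage price
    ISP_RATE = 0.10      # $/GB, the hypothetical ISP resale price
    OWNER_CUT = 0.05     # $/GB, the "nickel" passed back to the subscriber

    unused_gb = 1000     # roughly the idle terabyte at my house

    savings = unused_gb * (AMAZON_RATE - ISP_RATE)
    isp_share = unused_gb * (ISP_RATE - OWNER_CUT)
    my_share = unused_gb * OWNER_CUT
    print(f"Content provider saves ${savings:.2f}/month vs. Amazon")      # $50.00
    print(f"ISP keeps ${isp_share:.2f}/month, I get ${my_share:.2f}/month")  # $50.00 each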

Is it green?

Google noted in the paper Power Provisioning for a Warehouse-Sized Computer that provisioning power costs as much as 10 years of power. The power distribution units, the diesel generators, the power monitoring and isolation equipment all cost a lot of money. And then there is the air-conditioning equipment.

Spread that across 150 million American homes and that power provisioning cost goes away. We'd still use the power but we wouldn't have to build and transport all that equipment to a data center.
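
To get a feel for the scale of that claim, here is a rough illustration; the electricity rate is my assumption, not a figure from the paper:

    # Rough scale of "provisioning costs as much as 10 years of power."
    # The electricity rate below is assumed for illustration only.
    PRICE_PER_KWH = 0.07         # $/kWh, assumed
    HOURS_PER_YEAR = 24 * 365    # 8,760

    # Ten years of electricity for one kilowatt of continuous load:
    ten_years_power = 1.0 * PRICE_PER_KWH * HOURS_PER_YEAR * 10
    print(f"~${ten_years_power:,.0f} of provisioning cost per kW of capacity")  # ~$6,132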

The Storage Bits take

America probably won't be able to take advantage of this concept due to the slow build-out of our broadband infrastructure. But the basic idea is a good one.

Why build what consumers already own?

Comments welcome, of course.

Talkback

  • RE: The edge-centric Internet

    Your idea sounds good from the standpoint of greenness. I would be curious how the ISP and I would go about securing data stored on a drive in my PC. Right now I have 5 PCs in my house and two laptops; the PCs connect through a wired router into the cable modem, while the laptops use a wireless router into the wired router for connectivity. How does the ISP assure its customers that the data flowing through a system such as mine remains secure? And how do I, as the owner of the equipment, ensure that no unacceptable content goes into the part of my system that I "lease" to the ISP? For example, I don't believe I want the storage in my house being used to store pictures for some weird porn site.
    radarop
  • What happens . . .

    . . . when I go on vacation and turn my systems off for two weeks? What about backups? How green is it if everyone who participated in this had to have a live backup system? Would everyone be required to have a UPS?
    I'm really curious how any sort of quality and reliability could be enforced in this kind of system.
    aep528
  • RE: The edge-centric Internet

    The reason to 'build what consumers already own' is simple: the storage that the consumers already own wouldn't be useful for many/most applications even if it were absolutely free (so you can kiss your fantasies about being paid $50/month for your available TB goodbye).

    To start with, Amazon doesn't charge "about $0.15 a gigabyte for storage space": it charges about $0.15 per gigabyte *per month* for *highly-reliable, highly-available, high-performance* storage space. By contrast, the TB that you'd like to rent out at similar rates is not particularly reliable (hey, even if you coddle it just as assiduously as any enterprise data center would, who's to *guarantee* this for the consumer of the space in a manner that they can reasonably depend upon?), nowhere nearly as available (even if you maintain it redundantly - as Amazon does - and never turn your machine off, once again who's to *guarantee* this in some credible manner? If 'the system' instead maintains the data redundantly across multiple homes, how does it ensure that the paths to the mirrored copies or parity groups are completely independent, thus matching Amazon's 'no single point of failure' promise?), and nowhere nearly as performant (even if you're not competing for the disk's and ISP's bandwidth with your own activities, you're in a very select group if you've got even 1 MB/sec of upload bandwidth available - only about 2% of the bandwidth of a single commodity disk in a data center).

    And then there's the management overhead - you know, the part that costs 5x - 10x as much as the storage itself does in typical environments. Whenever your local storage needs increase (or you simply power down your machine), some of that globally-shared data gets unceremoniously dumped - and 'the system' has to adjust accordingly. Among other things this means that it has to maintain considerably more redundancy than Amazon does to avoid data loss, because it doesn't just have to guard against hardware failure: it has to guard against continual random losses many orders of magnitude more frequent (because - just as with processor cycles used by things like the '@home' projects - users want the use of the storage that they own immediately, not after having waited a few minutes to allow the external system to make the transition at *its* convenience).

    And 'the system' that I've been talking about either has to exist in some centralized (though possibly distributed) location (which at least means that it's actively keeping track of where to find data) or has to use hash-based heuristics to search for data - which can require multiple probes when (as just described) data spontaneously evaporates at significant rates. So you're either back to a data center coordinating the whole mess or will have considerable difficulty matching Amazon in the area of service level agreements.

    After all is said and done, if Amazon's GB as provided is worth $0.15/month it's really difficult to imagine that your raw GB is worth even as much as $0.01/month after your ISP takes a cut. And for a great many users Amazon's GB is *not* worth $0.15/month (since for one or two months' rental fee they can *buy* that GB outright, with redundancy and all the performance benefits of locally-attached storage: even if you throw in their resulting management costs and they don't need the dramatically better performance, it should take them well under a year to come out ahead owning the storage).
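
    (To put numbers on that rent-vs-buy comparison - the purchase price below is an assumed commodity-disk cost, not a quoted figure:)

        # Rent vs. buy, per GB. The $0.25/GB purchase price is assumed.
        RENT = 0.15   # $/GB/month at Amazon
        BUY = 0.25    # $/GB to own the disk outright (assumed)
        print(f"Owning pays for itself in ~{BUY / RENT:.1f} months")  # ~1.7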

    The workloads that currently succeed in leveraging consumer PCs globally are those which are especially well-suited (and which have been explicitly designed) to do so: lengthy independent computations with no strict time limits (the '@home' applications) and bulk read-only data transfers (again with no strict time limits - though whether these latter mechanisms would have evolved without the impetus of illegal sharing could be debated). Additional workloads (some kinds of backup come to mind) that may be amenable to such distribution will likely share at least the 'no strict performance requirements' characteristic - because if performance (especially interactive response, given that we're talking about Web applications here) *does* matter there'll be no substitute for keeping the storage reasonably dedicated and in close proximity to any significant required processing.

    In that vein, it's worth remembering how many applications deliberately leave 70% - 90% of their storage completely unused in order to obtain the performance they need (from disk arms rather than GB). That's how inexpensive storage is, and the move toward 2.5" drives may make even that level of conspicuous consumption reasonably energy-efficient as well.

    Do you really think that you can rent storage successfully under such conditions? "Too cheap to meter" became a disastrous joke for atomic power, but could approach the truth when it comes to storage capacity (as distinct perhaps from storage performance, but you're not talking about performance here).

    - bill