
Will the electric utility be your server hoster? Or will Amazon be your electric utility?

Written by David Berlind

On the heels of last week's coverage, where I pointed out how the Uptime Institute's chief analyst Bruce Taylor had referred to salesforce.com as being unethical because, in his estimation, the CRM/SFA service provider experienced two avoidable datacenter outages in 2006, I've been e-mailing back and forth with salesforce.com APEX Developer Network blogger and evangelist Peter Coffee. Peter and I used to work together at PC Week Labs (now eWeek) and, even though he's working for a vendor now, I have a lot of respect for his insights into software development and hardware. He's a versatile expert and could have just as easily gone to work for AMD or Intel writing one of their blogs about chips.

Anyway, going back to the video we posted of Bruce Taylor deflowering salesforce.com's rosy reputation (Taylor has since written that "unethical" was probably an "ill-considered word" ... even though he said it that day, repeatedly), Coffee zeroed in on the part of that blog post that summarized Taylor's hypothesis that Moore's Law could be failing.

Before I go any further, I should note that Coffee and I are in agreement on the major issues. He felt that, if anything, the virtuous cycle that is Moore's Law is actually more profound (the opposite of failing) now that we're in multicore territory. So long as we're talking about the physics side of the equation, he's absolutely right. So much so that the chip vendors are indeed worried that software developers won't be able to keep up with the performance benefits of multiple cores. In AMD's case, the chipmaker is hoping its peers will join together to multilaterally address the problem.

But from an economic point of view, there is a problem with Moore's Law on the horizon. It assumes an infinite supply of energy and cooling, which is turning out not to be the case. As energy costs rise and cooling becomes an increasingly expensive function of both that rising energy cost and physical space, the so-called "halving" of the cost of compute power that's supposed to accompany Moore's Law's doubling of chip performance every 18 months isn't really a halving. On that point, not only did Coffee agree with me, he pointed me to a story he wrote in 2006, while still at eWeek, that will probably resonate with more people today than it did when he originally wrote it:

The cost of powering and cooling a server over a four-year lifetime will soon exceed the cost of the server hardware, projects Luiz André Barroso, Google platforms engineering group leader.

In a paper published in the Association for Computing Machinery's Queue journal last fall, Barroso wrote, "One could envision bizarre business models in which the power company will provide you with free [server] hardware if you sign a long-term power contract."

The cost considerations are significant, of course, but so are the implications for infrastructure burden and external effects such as climate change. Barroso's analysis shows a flat-line trend in server performance per unit of power consumed, meaning that cheerful Moore's Law forecasts of server throughput turn into ice-cap-melting projections of watt-hours used.
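To see how Barroso's claim pencils out, here's a rough back-of-envelope sketch. Every figure in it (server price, average wattage, electricity rate, cooling overhead) is an assumption chosen for illustration, not a number from his paper:

```python
# Back-of-envelope comparison: server hardware cost vs. four-year power and
# cooling cost. All figures below are illustrative assumptions only.

server_price_usd = 3000.0       # assumed purchase price of a commodity server
avg_power_watts = 500.0         # assumed average draw under load
electricity_usd_per_kwh = 0.10  # assumed utility rate
cooling_overhead = 1.0          # assumed 1 watt of cooling per watt of IT load
lifetime_years = 4

hours = lifetime_years * 365 * 24
energy_kwh = (avg_power_watts / 1000.0) * hours * (1 + cooling_overhead)
power_and_cooling_usd = energy_kwh * electricity_usd_per_kwh

print(f"Hardware cost:            ${server_price_usd:,.0f}")
print(f"Four-year power + cooling: ${power_and_cooling_usd:,.0f}")
```

With those assumed numbers, four years of power and cooling comes to roughly $3,500, already edging past the $3,000 box it keeps running. Nudge the electricity rate or the cooling overhead upward and the gap only widens, which is exactly the trend line Barroso was pointing at.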

Maybe that business model isn't so bizarre. Next question: instead of buying books from Amazon, might we be buying kilowatts (and getting free server/storage hosting on the company's Elastic Compute Cloud and S3 Net-based storage system)? Is it really that far-fetched?
