These are tough times for hard sells. HP targeted its top accounts with UDC, but the size, cost and complexity of the system limited its appeal. It was server-focused at a time when network management was the bigger problem: Cisco was brought in last year to help out, but that didn't reflect well on the original vision.
In the end, there aren't that many places installing 100 servers a day. Those that do are looking for systems they understand backwards and know they can manage: nobody trusts clever robots. Not yet.
The principles behind utility computing are laudable but hardly exciting. Utility computing offers efficiency, flexibility and manageability, none of which any sane IT manager would throw out of bed; but what modern server configuration doesn't claim to offer the same?
Utility computing does have some exciting characteristics. Over the next two to five years we will be moving into a world where virtualisation is in every processor, finally divorcing software from the hardware underneath. Grid computing will continue to get easier and cheaper. Both these technologies are central to utility computing, both are being heavily promoted, both have yet to prove useful outside specialist applications.
HP's failure does not necessarily mean the end of utility computing. It means that new technology, no matter how exciting and futuristic, will only succeed if it solves the problems the market is having today, at a price it can afford and with an acceptable level of risk. Hardly news, even if it came as a surprise to Carly.
Utility computing will revolutionise the data centre, but only from the ground up. Most people install one or two servers at a time: when it makes sense for those to have utility computing features, then the time will be right. Until then, if you want to spend millions on a single, easy-to-manage and uniquely efficient installation, you might as well get a mainframe.