Sun Services CTO says utility computing acceptance is slow going

Utility computing demands large and complex data centers, which increasingly rely on statisticians to keep them running
Written by Tom Foremski, Contributor

I recently met with Daniel Berg, Distinguished Engineer and VP & CTO of the Sun Services group, and we discussed some data center trends.

Mr Berg gets to see how IT applications are being used, how IT demands are changing in terms of systems design, and the different types of skills needed to run large grid computing systems.

First of all, utility computing is still a ways off. "Customers are very conservative--it will take a long time before they are comfortable with utility computing," says Mr Berg.

Oracle calls them "server huggers" and they are a formidable force in IT markets because many companies still want to run their own IT systems. However, as more organizations begin to use utility/grid computing for new projects, Mr Berg says that should help their comfort level with the new IT architectures.

As data centers grow in complexity, with tens of thousands of servers, storage systems, and networks, it is important to be able to predict IT failures.

"We can look at telemetry data and see patterns in the IT systems that are likely to lead to system downtime. We then alert customers and they can take measures to prevent problems."

The analysis of huge masses of telemetry data on equipment, applications, and a myriad of complex interactions requires strong statistical analysis skills. People with such skills are sometimes known as "Quants" in the financial sector, where they analyze the complexity of large trading systems.

The increasing complexity and size of data centers is now another area where Quants can apply their skills, so that proactive measures can be taken to avoid IT system downtime.
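As a rough illustration of the kind of statistical screening being described--a minimal sketch only, not Sun's actual telemetry pipeline--the snippet below flags readings of a single telemetry metric that deviate sharply from their recent baseline. The window size, z-score threshold, and sample data are illustrative assumptions.

```python
# Minimal sketch: flag readings of one telemetry metric that fall far
# outside the recent baseline. Not Sun's pipeline; window, threshold,
# and the sample data are assumptions for illustration only.
from collections import deque
from statistics import mean, stdev

def anomaly_alerts(samples, window=60, z_threshold=3.0):
    """Yield (index, value, z_score) for readings far outside the recent baseline."""
    history = deque(maxlen=window)
    for i, value in enumerate(samples):
        if len(history) == window:
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs((value - mu) / sigma) > z_threshold:
                yield i, value, (value - mu) / sigma
        history.append(value)

# Example: a mildly noisy error-rate series with one sudden spike.
readings = [1.0 + 0.05 * (i % 5) for i in range(200)] + [9.0] + [1.0] * 20
for idx, val, z in anomaly_alerts(readings):
    print(f"sample {idx}: value={val} z={z:.1f} -- flag for review")
```

In practice the patterns Sun looks for are far richer than a single-metric z-score, but the idea is the same: watch the data streams and raise an alert before the failure, not after.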

"I'd rather be proactive because it takes a long time to analyze why a failure happened. But customers still want a lot of control and they sometimes ignore our warnings. And we can't take any preventive measures unless customers authorize them," says Mr Berg.

Also, access to electric power is a huge issue, he adds. Some companies have half-full data centers because they cannot get enough electric power to run additional IT systems.

Mr Berg says that Sun is accumulating a lot of knowledge and experience in managing the complexity of data centers. Sun, however, wants to be a vendor to builders of data centers. But I wouldn't be surprised if Sun builds and operates more of its own commercial grid systems--because it knows how to keep them running. Mr Berg says that this could be one future scenario for Sun.

Operating data centers brings an additional benefit to Sun: it can feed what it learns back to its microprocessor design teams. For example, its latest SPARC microprocessors have far less on-chip cache memory because applications weren't using it as much.

That means more space for microprocessor cores, which speeds up scientific-type applications. Business IT applications, however, need to be optimized to take full advantage of multi-core architectures.
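To make that point concrete, here is a generic sketch--purely illustrative, in Python, and not tied to SPARC or any Sun product--of the difference between a batch job that runs serially on one core and the same independent work items fanned out across all available cores.

```python
# Generic illustration: the same CPU-bound batch work run serially on one core,
# then spread across worker processes (one per core by default).
# The workload function is a stand-in for real business logic.
from concurrent.futures import ProcessPoolExecutor

def process_record(record_id: int) -> int:
    """Placeholder for a CPU-bound unit of business work."""
    total = 0
    for i in range(100_000):
        total += (record_id * i) % 97
    return total

if __name__ == "__main__":
    records = list(range(1_000))

    # Serial version: uses a single core no matter how many are available.
    serial_results = [process_record(r) for r in records]

    # Parallel version: the same work fanned out across a pool of processes.
    with ProcessPoolExecutor() as pool:
        parallel_results = list(pool.map(process_record, records))

    assert serial_results == parallel_results
```

The point is that this kind of restructuring does not happen for free: unless business applications are reworked to expose independent units of work, the extra cores sit idle.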

Here is some related info on data centers and IT (thanks to Ari Entin!):

Addressing the expense problem in IT services: IDC's Tony Picardi estimates that in 2004, firms spent $750 billion repairing, maintaining and managing existing IT systems, absorbing up to 80% of IT budgets (draining funds away from user-facing innovation towards mere upkeep).

Gartner's Yefim Natis' counsel on managing IT complexity: "Complexity is a reflection of the functional richness of an IT environment. The objective of a well-run IT department is not to eliminate complexity (an unrealistic and potentially damaging undertaking), but to manage it. This means accepting complexity, but also channeling and organizing it in such a way that the costs of managing it are well-understood."
