Swapping out the data center

Wintel centralization has combined with Intel's commitment to gigahertz space heaters to drive data center space, cooling, and power costs up much faster than other costs
Written by Paul Murphy, Contributor
Regular readers will recall my provisional Sun T2000 pod configuration - basically an APC NetShelter VX 42U enclosure with an integrated APC Smart-UPS RT 3000VA, dual Linksys 24-port switches, dual T3 routers, one 16GB, 1.2GHz T2000 with four internal 73GB disks, and one 16-disk FC JBOD providing about 570GB of fully mirrored ZFS storage. As configured the pod should be extremely reliable - the T2000 itself has a wide range of RAS features, and everything else is duplicated.
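As a sanity check on that storage figure, mirroring halves raw capacity: sixteen 73GB drives pair off into eight mirrors. A quick back-of-the-envelope calculation (illustrative only - the gap down to the quoted 570GB would be formatting and metadata overhead):

```python
# Rough capacity arithmetic for the 16-disk FC JBOD described above.
drives = 16
drive_gb = 73                  # per-drive capacity in GB

raw_gb = drives * drive_gb     # total raw capacity across the JBOD
mirrored_gb = raw_gb // 2      # ZFS mirror pairs halve usable space

print(raw_gb)       # 1168
print(mirrored_gb)  # 584 - in the neighbourhood of the quoted 570GB
```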

There's both physical room and adequate protected power, as well as connectivity, in the pod to add another T2000 and/or a second JBOD - perhaps a slower but larger 4TB accumulation of sixteen 250GB SATA drives.

How effective the current generation UltraSPARC T1 processor is depends on what you have it doing. You don't want to use it for 3D surface modelling from limited data, but give it the jobs it's best at - heavily multi-threaded web services, database CRUD transactions, or Domino-based messaging - and benchmarks suggest it can compete head to head with an eight-way, 3.2GHz Xeon cluster.

For the purposes of today's blog there are two interesting features to this pod. First, while it isn't as energy efficient as it could be, it still needs little more than a 220-volt outlet and free airflow to fit quietly into an office environment.

(As an aside, the reason it isn't as quiet or energy efficient as it could be is that roughly one third of its input power is simply turned into waste heat by the transformers inside each piece of gear. Since there's a simple engineering fix for this - replace those transformers with DC feeds directly from the output filter on the UPS - there's an OEM opportunity here for somebody to provide fully preconfigured, ultra-high-reliability pods that burn less power and produce less noise and heat.)
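To see what that one-third figure means in practice, here's the arithmetic, using a hypothetical 1kW of useful load and treating the one-third loss fraction from the aside above as a given:

```python
# Illustrative arithmetic: if roughly one third of input power is lost
# in per-device AC transformers, how much does a given useful load
# actually draw from the outlet?
loss_fraction = 1.0 / 3.0           # assumed transformer loss, per the text

def input_power(useful_watts):
    """AC input needed to deliver useful_watts after transformer losses."""
    return useful_watts / (1.0 - loss_fraction)

load = 1000.0                       # hypothetical 1kW of useful load
draw = input_power(load)
waste = draw - load
print(round(draw))   # 1500 - 1.5kW pulled from the outlet
print(round(waste))  # 500  - 500W of heat (and fan noise) to get rid of
```

In other words, every watt of useful load costs half a watt again in heat - which is exactly the power, cooling, and noise overhead a DC-fed pod would avoid.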

Secondly, if your workload fits, the pod offers a great way to avoid the daily backup hassle: set up data synchronization and failover across two or more machines in widely separated locations, and use part of the RAID array to retain deleted files. With that in place you may still want to back up critical data to a non-production machine in a third location, but you don't need physical presence (or local media drives) where the pods are to do it.
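The replication itself would be handled by whatever fits your environment - ZFS snapshots, rsync, or application-level failover. The "retain deleted files" half of the idea, though, is simple enough to sketch: instead of unlinking a file, move it into a holding area on the array so it can be recovered without a restore. A toy illustration (paths and the function name are hypothetical, not any particular product's API):

```python
# Toy sketch of the "keep deleted files" idea: rather than removing a
# file outright, move it into a timestamped holding area on the array.
import os
import shutil
import time

def soft_delete(path, trash_dir):
    """Move path into trash_dir, timestamped, instead of unlinking it."""
    os.makedirs(trash_dir, exist_ok=True)
    stamp = time.strftime("%Y%m%d-%H%M%S")
    dest = os.path.join(trash_dir, stamp + "-" + os.path.basename(path))
    shutil.move(path, dest)
    return dest
```

A user who deletes the wrong spreadsheet then gets it back with a copy out of the holding area, no tape restore and no site visit required.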

Put those two things together, and what you should see is opportunity: the drive to Wintel centralization has combined with Intel's commitment to gigahertz space heaters to drive data center space, cooling, and power costs up much faster than other costs - and putting pods into user areas makes all of those issues go away.

Imagine, for example, that you need to provide messaging support for at least two geographical clusters of at least a thousand employees each. With Microsoft's Exchange Server and Outlook technologies, your lowest-cost option involves a rack or two of data center servers and a lot of connectivity, simply because moving packets costs a lot less than moving support people.

Switch to Domino on the T1 pod, however, and you're best off putting mutually redundant pods in your two biggest locations. Doing that reduces your software cost, simplifies management, pushes power costs to the user department, reduces data center space and power needs, reduces bandwidth costs, provides faster access for users, guarantees system redundancy, and improves your credibility with user management - after all, it's their machine, in their offices, serving them.

It's a cool idea, and less limited than you may think, because the next generation of CMT machines won't have the floating point limitations affecting the T1 processors, and there really is no reason you can't apply the same logic to a lot of other functions. Right now the pod is a pretty good Samba server - pop in a SATA JBOD and a 4GB T1000 and you may add about $20K to the cost, but you'll be able to handle Windows file and print for a thousand users without affecting your messaging operation - or paying for the power and space needed by the machine. Maybe next year you can throw your business intelligence processing on a second generation T series. Keep at it, and maybe two years from now you'll have distributed your entire processing inventory out to user areas: you'll still have the benefits of logically centralized processing and correspondingly reduced overall IT costs, but you won't need that expensive data center, your sysadmins will be working directly with and for users, and the overall corporate power and communications bills will be headed downward.
