There was an interesting story about Google building its own servers with customized power supplies. One issue raised was that Google's application has enough fault tolerance that one or more downed servers would not bring the entire service down. Therefore Google can get away with using "white boxes," which are perceived to be less reliable than name-brand servers. I say "perceived" because in my experience they really weren't any less reliable than a name-brand server; the only difference was that it could be a little more difficult to get an onsite support contract for them. But if you have enough well-designed redundancy in your systems, that shouldn't really be a problem, because you can always take your time getting the parts replaced under warranty.
The other interesting point, raised by Google's president of operations Urs Hölzle, was the inefficiency of the power supplies in most servers. Google uses custom power supplies to increase energy efficiency, which has a double effect on power usage: you're not drawing as much power to run the servers, and you're not putting out as much heat, which means lower cooling requirements for the data center. As Hölzle puts it:
“It’s not hard to do. That’s why to me it’s personally offensive [that standard power supplies aren’t as efficient]”
Since the article linked didn't elaborate further, I'll take a stab at the likely explanation. There are two main factors that determine the efficiency of a power supply. The first factor is the design of the power supply itself. More expensive power supplies use "Active PFC" (Power Factor Correction) designs that waste less energy, which in turn means they produce less heat. Most well-designed servers use Active PFC supplies to minimize waste. The other big factor is how heavily loaded a power supply is: the closer to 100% load you get, the more efficient the supply becomes. At full load, a well-designed power supply can be 95-99 percent efficient, meaning that for every 100 watts drawn, only 1-5 watts are wasted in the form of heat. But at half load, the exact same power supply might only be in the 60-70 percent efficiency range.
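To put rough numbers on that load/efficiency relationship, here's a quick back-of-the-envelope sketch in Python. The efficiency figures are just the ranges quoted above, not measurements of any particular supply:

```python
def wasted_watts(input_watts, efficiency):
    """Watts of wall power dissipated as heat inside the supply."""
    return input_watts * (1 - efficiency)

# The same 100 W of wall draw through a supply at full load
# (~95% efficient) vs. the same supply at half load (~65% efficient):
print(round(wasted_watts(100, 0.95), 1))  # 5.0 W lost as heat
print(round(wasted_watts(100, 0.65), 1))  # 35.0 W lost as heat
```

Seven times the waste for the identical hardware, purely because of where it sits on the load curve.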
This second factor is precisely where server manufacturers have their hands tied. A server manufacturer cannot sell you a server whose power supply operates at peak capacity: not only is there no room to expand, but an underpowered power supply carries some real risk. To be overly conservative about safety, server manufacturers will always sell you a server with a power supply that is probably 2-4 times bigger than what is actually needed. So a server that might only peak at 250 watts will ship with a 600-watt power supply, and any manufacturer that sold the same server with a 300-watt supply would probably be laughed out of business for the lack of a safety margin. The end result is that general-purpose servers with oversized power supplies will tend to waste a lot more energy than a custom server with a custom power supply that isn't nearly as oversized.
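Using the example figures above, the oversizing effect is easy to quantify. This is only an illustrative sketch with the numbers from the paragraph, not data from a real server:

```python
def load_fraction(peak_draw_watts, psu_rating_watts):
    """How heavily loaded the supply is at the server's peak draw."""
    return peak_draw_watts / psu_rating_watts

# The conservative configuration: a 250 W server on a 600 W supply
print(f"{load_fraction(250, 600):.0%}")  # 42%

# The "laughed out of business" configuration: the same server on
# a 300 W supply, which would run near the efficient end of the curve
print(f"{load_fraction(250, 300):.0%}")  # 83%
```

At roughly 42% load, the conservative configuration spends its whole life in the inefficient part of the curve; a custom supply sized closer to actual peak draw runs much nearer to full load, where efficiency is highest.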