Just two more keynotes till lunch, but these are larger ones (the previous speakers were 15 minutes apiece; these are 45). I'll try to take good notes; every conference says they'll make all the slides available afterwards, but at best they manage about a 50% success rate on that.
First, Luiz Barroso from Google speaks on energy-efficient operations. Servers account for only about 1% of total electricity consumption, but that figure doubled between 2000 and 2005. Measuring computing energy efficiency is harder than measuring a refrigerator's. In physics terms, efficiency is work done/energy used. For IT, it can be broken down into computing efficiency (work done/chip energy), server efficiency (chip energy/server energy), and server room efficiency (server energy/server room energy). Surveys show an average PUE (1/server room efficiency) of 1.83, and power supplies uselessly dissipate 25% of the power going to servers, more in PCs. Servers have poor (computing) energy efficiency in their most common usage range.
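To make the efficiency chain concrete, here's a quick sketch multiplying the three factors together. The PUE of 1.83 and the ~25% power-supply loss are the survey numbers from the talk; the computing-efficiency figure is an assumption I made up for illustration, and treating the supply loss as the entire server overhead is a simplification.

```python
# Efficiency chain: overall = computing_eff * server_eff * server_room_eff
# i.e. (work/chip energy) * (chip energy/server energy) * (server energy/room energy)

pue = 1.83                    # survey average: total facility power / IT power
server_room_eff = 1 / pue     # server energy / server room energy
server_eff = 1 - 0.25         # chip energy / server energy (~25% lost in the supply;
                              # simplification: supply loss treated as the only overhead)
computing_eff = 0.5           # work done / chip energy (illustrative assumption)

overall = computing_eff * server_eff * server_room_eff
print(f"server room efficiency: {server_room_eff:.2f}")
print(f"overall efficiency:     {overall:.2f}")
```

The takeaway is that the stages multiply: even decent per-stage numbers compound into a small fraction of the wall power actually doing work.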
How do we address this? First, the power provisioning problem in the data center. Energy isn't the largest cost: building the center itself runs $10-$22 per watt, while ten years of power is about $9/watt. Efficiency saves on both. According to the Uptime Institute, the average cost breakdown is datacenter - 28%, electricity - 22%, hardware - 50%. (Software dwarfs this in many shops, I'll note.)
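Back-of-envelope on those per-watt figures: the $10-$22/watt build cost and ~$9/watt ten-year power cost are from the talk; the 10 MW facility size is my own assumed example.

```python
# Construction vs. ten-year power cost for a hypothetical 10 MW facility
# (facility size is an assumption; $/watt figures are from the talk).

watts = 10_000_000
build_cost_low, build_cost_high = 10 * watts, 22 * watts  # $10-$22 per watt
power_cost_10yr = 9 * watts                               # ~$9 per watt over 10 years

print(f"build:       ${build_cost_low/1e6:.0f}M - ${build_cost_high/1e6:.0f}M")
print(f"10-yr power: ${power_cost_10yr/1e6:.0f}M")

# Uptime Institute average cost breakdown cited in the talk:
breakdown = {"datacenter": 0.28, "electricity": 0.22, "hardware": 0.50}
assert abs(sum(breakdown.values()) - 1.0) < 1e-9
```

At that scale the power bill is the same order of magnitude as the building itself, which is why efficiency gains pay off twice: in the electricity line and in the capacity you have to provision.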