Article: Google servers and power efficiency

blandoon
Posts: 35
Joined: Tue Jan 31, 2006 11:07 am
Location: Eugene, OR USA

Article: Google servers and power efficiency

Post by blandoon » Mon Jul 03, 2006 7:38 am

Someone may have posted this before, but I couldn't find it... anyway, this article has quotes from a Google representative who states that they build their own servers to get better thermal and power efficiency. Here's an excerpt:
Another reason that Google builds its own servers is equally simple: it can save costs on power consumption.

Energy efficiency is a subject Holzle speaks passionately about. About half of the energy that goes into a data center gets lost due to technology inefficiencies that are often easy to fix, he said.

The power supply to servers is one place that energy is unnecessarily lost. One-third of the electricity running through a typical power supply leaks out as heat, he said. That's a waste of energy and also creates additional costs in the cooling necessary because of the heat added to a building.

Rather than waste the electricity and incur the additional costs for cooling, Google has power supplies specially made that are 90% efficient. "It's not hard to do. That's why to me it's personally offensive" that standard power supplies aren't as efficient, he said.

While he admits that ordering specially made power supplies is more expensive than buying standard products, Google still saves money ultimately by conserving energy and cooling, he said.
So there are two take-away lessons here: First, power supply efficiency will only get more important for server builders, especially in gigantic datacenters like Google's. Second, someone somewhere is building 90% efficient server-class power supplies, and I want one. :)

Here's the link: http://www.networkworld.com/news/2006/0 ... s-for.html

DanW
Posts: 190
Joined: Fri May 19, 2006 3:20 am
Location: UK

Post by DanW » Mon Jul 03, 2006 9:14 am

hhmmm tasty, 90% efficient PSUs. Think about how much of a premium you'd have to pay though :( Hopefully we'll see a general increase in PSU efficiency in the coming years.

Less heat, less noise, more power, more money?

frostedflakes
Posts: 1608
Joined: Tue Jan 04, 2005 4:02 pm
Location: United States

Post by frostedflakes » Mon Jul 03, 2006 9:47 am

Google is so cool.

I also think I remember reading somewhere that Google uses Opterons in their datacenters because of their higher power efficiency compared to Intel's Xeon processors.

JazzJackRabbit
Posts: 1386
Joined: Fri Jun 18, 2004 6:53 pm

Post by JazzJackRabbit » Mon Jul 03, 2006 9:50 am

DanW wrote:hhmmm tasty, 90% efficient PSUs. Think about how much of a premium you'd have to pay though :( Hopefully we'll see a general increase in PSU efficiency in the coming years.

Less heat, less noise, more power, more money?
Well, if it's true that the electricity cost offsets the price premium on extra-efficient power supplies for Google, the same should hold true for the home user too.

Although I doubt that those are really 90% efficient PSUs.

peteamer
*Lifetime Patron*
Posts: 1740
Joined: Sun Dec 21, 2003 11:24 am
Location: 'Sunny' Cornwall U.K.

Post by peteamer » Mon Jul 03, 2006 10:02 am

JazzJackRabbit wrote:Although I doubt that those are really 90% efficient PSUs.
Why?

frostedflakes
Posts: 1608
Joined: Tue Jan 04, 2005 4:02 pm
Location: United States

Post by frostedflakes » Mon Jul 03, 2006 10:18 am

When you think about it, 90% efficiency isn't that unreasonable. The PicoPSU-120 and EDac 120 W brick were able to achieve 87.1% efficiency on the SPCR testbed. Now consider that this combo is designed to cope with a wide variety of loads - 0-120 W, 0-6 A on +3.3V, 0-6 A on +5V, etc., and any combination thereof. By having the power supplies custom built, Google can tell the engineers, "OK, our system draws x watts and x amps on x rails; build us a power supply with efficiency optimized for these loads."

Also consider that computers are becoming increasingly dependent on +12V. As another example with the PicoPSU: assume 90% of the PC's power is drawn on the +12V line. The PicoPSU simply passes +12V through; the external brick is responsible for conversion, rectification, and regulation. This means that on a modern system with a high +12V load and a design similar to the PicoPSU, your efficiency would effectively be determined by the external brick. In the future, I wonder if computers will ditch low-voltage/high-current +3.3V and +5V lines, replacing them with lower-current +12V. This would surely mean power supplies with 90%+ efficiency. Just some stuff I've been thinking about.
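
To put some rough numbers on that last point, here's a little Python sketch. All figures are hypothetical: I'm guessing the brick is around 87% efficient and the on-board DC-DC stages for the low rails around 92%.

[code]
# Rough model of a PicoPSU-style split: the brick does AC->12V, the
# +12V rail passes straight through, and only the +3.3V/+5V load goes
# through the on-board DC-DC converters. All numbers are illustrative.
def system_efficiency(p_12v, p_low, brick_eff=0.87, dcdc_eff=0.92):
    dc_from_brick = p_12v + p_low / dcdc_eff  # watts the brick must supply
    ac_from_wall = dc_from_brick / brick_eff  # watts drawn at the wall
    return (p_12v + p_low) / ac_from_wall

print(system_efficiency(90, 10))   # ~0.86: 90% of the load on +12V
print(system_efficiency(100, 0))   # ~0.87: a 12V-only load is just
                                   # as efficient as the brick itself
[/code]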

jaganath
Posts: 5085
Joined: Tue Sep 20, 2005 6:55 am
Location: UK

Post by jaganath » Mon Jul 03, 2006 10:36 am

In the future, I wonder if computers will ditch low-voltage/high-current +3.3V and +5V lines, replacing them with lower-current +12V.
Good point, what components still use 3.3V and 5V? It's only hard drives and optical drives, isn't it? ISTR some components also use the +5Vsb line, but that shouldn't be too hard to phase out. There's no doubt that there are efficiency gains to be had from doing so (and cost savings too, no doubt, for PSU manufacturers).

Rusty075
SPCR Reviewer
Posts: 4000
Joined: Sun Aug 11, 2002 3:26 pm
Location: Phoenix, AZ
Contact:

Post by Rusty075 » Mon Jul 03, 2006 11:12 am

From informal conversations that I've had with PSU manufacturer reps, 90%+ efficiency doesn't appear to be hard to achieve from a technical standpoint; it is purely market economics that prevents it. Would you pay double for a PSU that is 90% efficient compared to one that is 80%? Well, we probably would, but we're kinda odd. :wink: Lifespan cost would make it pay for itself, but most consumers don't really care that much. The big OEMs that drive the bulk of the PSU market don't care at all, unless government regulations like Energy Star force them to. (Personally, I think computer sellers should have to list the dollar value of the machine's energy consumption on the label, just like refrigerators do.)

But the market is changing... PSU manufacturers now advertise their efficiency; three years ago, none of them did. Now that it's a differentiating feature, it will continue to rise.

qviri
Posts: 2465
Joined: Tue May 24, 2005 8:22 pm
Location: Berlin
Contact:

Post by qviri » Mon Jul 03, 2006 11:16 am

jaganath wrote:Good point, what components still use 3.3V and 5V? It's only hard drives and optical drives, isn't it?
PCI cards, AGP cards...

blandoon
Posts: 35
Joined: Tue Jan 31, 2006 11:07 am
Location: Eugene, OR USA

Post by blandoon » Mon Jul 03, 2006 11:44 am

In a laptop, almost everything runs on the lower voltages - for instance, laptop hard drives run on +5V, whereas their desktop counterparts run on +12V (for the motor, at least). If you think about it, you could probably eliminate the +12V rail from a system more easily, if you started from the ground up.

JazzJackRabbit
Posts: 1386
Joined: Fri Jun 18, 2004 6:53 pm

Post by JazzJackRabbit » Mon Jul 03, 2006 12:33 pm

peteamer wrote:Why?
Simply judging by the current offerings. I don't know much about power supplies and may be wrong, but even the best power supplies only top out around 87% under optimal conditions. And given errors in measurement, or even erroneous methodologies (SPCR had to redesign the way they test PSUs, for example, because they were getting overly optimistic figures), I would say Google's 90% may actually be 85% in reality. Perhaps I was interpreted wrong when I said "I doubt it"; I didn't mean that 90% couldn't be achieved theoretically, just that Google's PSUs aren't likely to be 90% efficient. I'm more inclined to believe their efficiency is about the same as what current top consumer PSUs like the FSP Zen, Antec Phantom, and Seasonic models offer.


Rusty075 wrote:Lifespan cost would make it pay for itself, but most consumers don't really care that much. The big OEMs that drive the bulk of the PSU market don't care at all, unless government regulations like Energy Star force them to. (Personally, I think computer sellers should have to list the dollar value of the machine's energy consumption on the label, just like refrigerators do.)
That's an interesting thought. I know I said it might be worth it for the consumer, but it actually might not be. Think about the difference between a user and Google in terms of heat dissipation and the costs associated with it. In terms of electricity costs, Google runs their servers 24/7, while a typical user runs a PC for only 8 or fewer hours a day (of the 24 hours in a day, 8 go to sleep, 8 to work, and only 8 to actually being home, where it would make sense to keep your PC on). So any gain on the electric bill will be one third of Google's: say Google saves 12 cents a day on electricity, then a typical user running their computer only 8 hours a day would save just 4 cents a day (purely hypothetical figures, just to illustrate the point). In terms of heat dissipation, any gain for the user is also less significant. Computers do dissipate heat, and my room with the computer on is typically a degree or two Fahrenheit warmer than the rest of the house. However, once again, Google runs their computers 24/7 and usually has lots of computers in the same facility, so their additional air-conditioning costs to keep the facility cool are much higher relative to what a typical user spends on cooling the house. So does it make sense for the regular user to pay twice as much for the power supply from an economic standpoint? I dunno...

In the PicoPSU review, the difference between the Pico and the Seasonic PSU was about 10 W, so let's say the difference between Google's PSUs and a Seasonic S12 is 20 W. That's 175 kWh in savings over an entire year. At roughly $0.10 per kWh for the residential sector, that's $17.50 in savings for the year. Does it make sense for a residential user to pay double? At best I think it's a draw, because I never keep a PSU for more than 3-4 years; the standards keep changing.
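
For anyone who wants to check that figure, here's the arithmetic as a quick Python sketch. The 20 W delta and $0.10/kWh are my hypothetical round numbers, and the 175 kWh assumes running 24/7:

[code]
# Annual savings from a 20 W lower wall draw (hypothetical figures).
watts_saved = 20                               # guessed delta vs an S12
rate = 0.10                                    # USD per kWh, residential

kwh_per_year = watts_saved * 24 * 365 / 1000   # 175.2 kWh, running 24/7
print(kwh_per_year * rate)                     # ~17.5 USD saved per year

# At 8 hours a day instead of 24/7, only a third of that:
print(kwh_per_year * rate * 8 / 24)            # ~5.8 USD per year
[/code]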

nici
Posts: 3011
Joined: Thu Dec 16, 2004 8:49 am
Location: Suomi Finland Perkele

Post by nici » Mon Jul 03, 2006 10:55 pm

I'm with Rusty on this one...

I took some prices from the local power company's website. I'm not sure how this really works, but if I understood correctly it's about 4,6€ a month plus 0,083€/kWh, or 8,3 euro cents.

Now, my computers while folding draw about 200 W each from the wall with a Phantom 350 and a Neo HE, which means each draws 1 kWh per 5 hours. If one month is 30 days, or 720 hours, that's 720/5 = 144 kWh/month per computer, which turns into 11,95€.

Now if they were 10% more efficient, it would take 5,555 hours to use 1 kWh: 720/5,555 = 129,6 kWh.

144 - 129,6 = 14,4 kWh saved per month for one computer, which is 14,4 × 0,083 = 1,1952€ saved in a month, or 14,34€ a year.

Assuming one uses the PSU for three years and the price of electricity stays constant, that's 43€ saved.

Oops, I forgot that my computers only draw 200 W when the GPU is loaded too; with just CPU load and the GPU idling it's more like 180 W max. If anyone wants to redo my calculations, go ahead.
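
Actually, here's the whole sum as a quick Python script, so anyone can redo it with their own numbers (same 0,083€/kWh tariff, 24/7 use, and a 10% lower wall draw, all as above):

[code]
# Redoing the sum above: savings if a PSU cuts wall draw by 10%.
# Tariff 0.083 EUR/kWh as above; 24/7 use; 30-day months.
def savings(watts, tariff=0.083, years=3):
    kwh_month = watts * 720 / 1000       # kWh drawn per 30-day month
    kwh_saved = kwh_month * 0.10         # 10% lower draw from the wall
    eur_month = kwh_saved * tariff
    return round(eur_month, 2), round(eur_month * 12 * years, 2)

print(savings(200))   # (1.2, 43.03)  -- the 200 W figure above
print(savings(180))   # (1.08, 38.72) -- the corrected 180 W draw
[/code]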

Anyway, the Phantom and Neo HE are already pretty efficient, and because most home computers won't draw 200 W on just CPU load, you wouldn't actually save even the 43€ in three years - probably closer to 20€. And you would have to run 24/7 like I do.

I can imagine this being useful when running hundreds of dual-CPU servers, but the cost savings for a home user aren't worth more than a few bucks at purchase.

That is without thinking about the environment: if everyone had a 10% more efficient computer, it would be pretty damn significant. Most PSUs are still probably closer to 70% efficiency, so the change would be quite noticeable if millions of computers were 90% efficient instead of just 70%.

And I'm not sure I like Google growing continuously; the thought of a search-engine monopoly is quite scary, to say the least. They already censored results in China.

MikeC
Site Admin
Posts: 12285
Joined: Sun Aug 11, 2002 3:26 pm
Location: Vancouver, BC, Canada
Contact:

Post by MikeC » Mon Jul 03, 2006 11:46 pm

I've had many chats like the one Russ had, with many PSU makers. Economic considerations are foremost in how much more efficient computer PSUs will get. The best already have >85% efficiency, but to break past 90% with any consistency under real conditions will cost a LOT more. Probably double. It's diminishing returns as you get closer to the state of the art. And as others have already discussed, the financial advantages of such high efficiency in a home or office PC are not compelling, although the argument is stronger for a high-load application like a busy web server. It's likely that unless there's a major breakthrough in low-cost, high-efficiency componentry, we've reached a plateau for general PC PSU efficiency.
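
To put rough numbers on those diminishing returns, here's the waste heat per 100 W delivered to the components (a quick Python sketch; the efficiency points are picked for illustration):

[code]
# Waste heat per 100 W delivered to the computer, at several PSU
# efficiencies: each step up saves less heat than the one before.
for eff in (0.70, 0.80, 0.85, 0.90):
    waste = 100 * (1 / eff - 1)          # watts lost inside the PSU
    print(f"{eff:.0%} -> {waste:.1f} W wasted")
# 70% -> 42.9 W, 80% -> 25.0 W, 85% -> 17.6 W, 90% -> 11.1 W
[/code]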

The real question is whether 85% efficient PSUs will see widespread use in PCs any time soon. At this point in time, I seriously doubt that even 10% of the world's computers are running on 80% efficient PSUs. Maybe 5%? 2%?

A new 12V-only, single-line form factor might make 90% more feasible, but the PSU makers will probably fight this -- what's to differentiate one brand from the next if the devices are so simple?

lm
Friend of SPCR
Posts: 1251
Joined: Wed Dec 17, 2003 6:14 am
Location: Finland

Post by lm » Tue Jul 04, 2006 3:28 am

There's been a trend toward fewer distinct voltages in PSUs, and I wonder: if Intel says the next form factor has just +12V and nothing else, can the PSU makers really protest against that? It will probably happen one voltage at a time until there's just one left, just as it has so far.

I mean, come on, there are just 3 voltages left now; there used to be something like 5. I see a trend here.

Worker control
Posts: 126
Joined: Wed Nov 16, 2005 12:16 pm

Cooling the datacenter

Post by Worker control » Tue Jul 04, 2006 6:28 pm

One factor that is likely different for a datacenter is that all the power used (and turned into heat) has to then be removed by an AC system. And AC systems chew up power like anything. So the cost savings from a more efficient PSU will be substantially larger than the simple calculation would suggest.
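
As a back-of-the-envelope sketch in Python - the 20 W saving and the cooling COP of 3 are both assumptions, just to show the multiplier:

[code]
# Each watt of PSU waste heat must also be pumped out by the AC.
# With an assumed cooling COP of ~3, that costs roughly an extra
# 1/3 W at the meter for every watt of heat, so savings compound.
psu_watts_saved = 20            # hypothetical per-server saving
cop = 3.0                       # assumed coefficient of performance

total_watts_saved = psu_watts_saved * (1 + 1 / cop)
print(total_watts_saved)        # ~26.7 W per server at the meter
[/code]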

nix-madness
Posts: 26
Joined: Sun May 14, 2006 11:18 pm
Location: Singapore

Post by nix-madness » Tue Jul 04, 2006 7:47 pm

At my last job, we rented racks at an offsite data centre to host our ERP system, e-mail, and internet gateways. During the provisioning period, the data centre asked if we'd need DC supplies. As our systems had their own PSUs (the AC-to-DC type), we only requested AC power for our racks, and so did not ask for any details about the DC supplies.

So, IMHO, Google might have DC supplied to their racks at their data centres. This would also help reduce the number of PSUs in each machine, and therefore cut down the power lost to the inefficiencies of each individual PSU. But that's just my guess.

My two cents.

JazzJackRabbit
Posts: 1386
Joined: Fri Jun 18, 2004 6:53 pm

Re: Cooling the datacenter

Post by JazzJackRabbit » Tue Jul 04, 2006 9:14 pm

Worker control wrote:One factor that is likely different for a datacenter is that all the power used (and turned into heat) has to then be removed by an AC system. And AC systems chew up power like anything. So the cost savings from a more efficient PSU will be substantially larger than the simple calculation would suggest.
I already touched on that. If you're running a datacenter with hundreds of computers in the same room, yes, AC costs will be high. However, for a typical user with only one or two computers in the same room, it hardly makes any difference.

jaganath
Posts: 5085
Joined: Tue Sep 20, 2005 6:55 am
Location: UK

Post by jaganath » Wed Jul 05, 2006 1:20 am

I already touched on that. If you're running a datacenter with hundreds of computers in the same room, yes, AC costs will be high. However, for a typical user with only one or two computers in the same room, it hardly makes any difference.
The economic/financial case for more efficient power supplies at the individual consumer level is weak at current energy prices; for silencing, however, it is very important, as in almost all applications heat = noise.

klankymen
Patron of SPCR
Posts: 1069
Joined: Thu Aug 04, 2005 3:31 pm
Location: Munich, Bavaria, Europe

Re: Cooling the datacenter

Post by klankymen » Wed Jul 05, 2006 3:20 am

JazzJackRabbit wrote:
Worker control wrote:One factor that is likely different for a datacenter is that all the power used (and turned into heat) has to then be removed by an AC system. And AC systems chew up power like anything. So the cost savings from a more efficient PSU will be substantially larger than the simple calculation would suggest.
I already touched on that. If you're running a datacenter with hundreds of computers in the same room, yes, AC costs will be high. However, for a typical user with only one or two computers in the same room, it hardly makes any difference.
In his post AC and DC are referring to alternating current and direct current, methinks...

JazzJackRabbit
Posts: 1386
Joined: Fri Jun 18, 2004 6:53 pm

Re: Cooling the datacenter

Post by JazzJackRabbit » Wed Jul 05, 2006 4:36 am

klankymen wrote:
JazzJackRabbit wrote:I already touched on that. If you're running a datacenter with hundreds of computers in the same room, yes, AC costs will be high. However, for a typical user with only one or two computers in the same room, it hardly makes any difference.
In his post AC and DC are referring to alternating current and direct current, methinks...
In this particular case he was referring to air conditioning. Hundreds of computers crammed into one building will generate a lot of heat, and you need a powerful air-conditioning system to keep the temperature in check. However, I don't think it matters for the residential sector, as a regular user has only one computer in his room, and any heat dissipated by it is pretty much negligible.

DanW
Posts: 190
Joined: Fri May 19, 2006 3:20 am
Location: UK

Re: Cooling the datacenter

Post by DanW » Wed Jul 05, 2006 6:26 am

JazzJackRabbit wrote:
klankymen wrote:
JazzJackRabbit wrote:I already touched on that. If you're running a datacenter with hundreds of computers in the same room, yes, AC costs will be high. However, for a typical user with only one or two computers in the same room, it hardly makes any difference.
In his post AC and DC are referring to alternating current and direct current, methinks...
In this particular case he was referring to air conditioning. Hundreds of computers crammed into one building will generate a lot of heat, and you need a powerful air-conditioning system to keep the temperature in check. However, I don't think it matters for the residential sector, as a regular user has only one computer in his room, and any heat dissipated by it is pretty much negligible.
Even a few computers in a room will do this. Small server rooms still get hot. I think we've got a max of about 30 servers on site here, and the room needs 2 AC units.

So if OEM suppliers like Dell and HP push for more efficient PSUs, they can charge a bit more for their kit, which won't really matter to the big companies. I'm sure they can figure out what sort of voltages they need on each line. With the number of servers they produce each year, it won't be too hard, will it? It'd be nice to have a quieter server room!

Or am I just talking gibberish here? :lol:

klankymen
Patron of SPCR
Posts: 1069
Joined: Thu Aug 04, 2005 3:31 pm
Location: Munich, Bavaria, Europe

Post by klankymen » Wed Jul 05, 2006 8:10 am

Yeah, I know about the issue of air conditioning, but read his post more closely.

He is saying a server room that gets DC power would produce less heat than one that is fed AC. Would air conditioning make sense in this sentence?

Google's computers are able to be more efficient since they get direct current instead of air conditioning?

Somehow that sounds weird...

Hifriday
Patron of SPCR
Posts: 237
Joined: Thu Aug 05, 2004 3:32 pm

Post by Hifriday » Wed Jul 05, 2006 8:18 am

Servers running off DC do sound interesting. Wouldn't it be easier and cheaper to build one large high-efficiency AC/DC converter (to supply a bank of servers) than many units for each individual server? And how about the UPS - wouldn't their data center also have backup power? Wouldn't running off DC further eliminate the additional loss of going from battery (DC) back to AC during brownouts? Also, aren't most UPSes "on-line", and isn't there further efficiency loss associated with that? Plus, an added benefit of having a DC UPS (i.e. one very large laptop battery) is that even if the AC/DC adaptor fails, the system will not be affected.
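
To see how the losses multiply, here's a toy Python comparison of the two conversion chains - every stage efficiency below is a guess, not a measured figure:

[code]
# Toy comparison of conversion chains; all stage efficiencies guessed.
from math import prod

# Conventional: online UPS (rectifier then inverter), then each
# server's own AC->DC power supply.
conventional = prod([0.90, 0.90, 0.80])

# DC distribution: one bulk rectifier/charger feeding a DC bus, then
# a simple DC-DC stage in each server; the battery just floats on it.
dc_bus = prod([0.92, 0.92])

print(f"conventional chain: {conventional:.0%}")  # ~65%
print(f"DC bus chain: {dc_bus:.0%}")              # ~85%
[/code]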

Is there any good reason why a data center shouldn't be run off DC?

As to Google's claimed 90%, well, an actual 86% might qualify with some liberal rounding.

ShagMan
Posts: 24
Joined: Mon Jun 26, 2006 7:25 am

Post by ShagMan » Wed Jul 05, 2006 9:20 am

Wouldn't a large, dedicated AC->DC converter for each voltage needed be more efficient than having it all in one power supply?

jaganath
Posts: 5085
Joined: Tue Sep 20, 2005 6:55 am
Location: UK

Post by jaganath » Wed Jul 05, 2006 9:25 am

Is there an echo in here? :lol:

blandoon
Posts: 35
Joined: Tue Jan 31, 2006 11:07 am
Location: Eugene, OR USA

Post by blandoon » Wed Jul 05, 2006 12:56 pm

There have been some rumblings in the IT datacenter world lately about moving to DC power distribution within the datacenter. A lot of telco equipment already runs on -48V... I guess, under this architecture, you'd have just a few very large AC-to-DC power supplies, which could be made highly efficient more economically than hundreds of little ones. In a way, this is what people are trying to accomplish by moving to a "blade server" architecture, in which you have several little computers running on one big power supply.

nix-madness
Posts: 26
Joined: Sun May 14, 2006 11:18 pm
Location: Singapore

Post by nix-madness » Wed Jul 05, 2006 11:36 pm

blandoon wrote:There have been some rumblings in the IT datacenter world lately about moving to DC power distribution within the datacenter. A lot of telco equipment already runs on -48V... I guess, under this architecture, you'd have just a few very large AC-to-DC power supplies, which could be made highly efficient more economically than hundreds of little ones. In a way, this is what people are trying to accomplish by moving to a "blade server" architecture, in which you have several little computers running on one big power supply.
We moved some of our systems to the offsite datacentre about 2.5 years ago, and we were offered DC to the racks even then, so DC power distribution has been available in datacentres for a while. However, I believe there were no takers back then, as most systems come with their own AC>DC PSUs - only a few pieces of equipment I can think of have factory-fitted DC connectors on the back (e.g. Cisco 29xx switches). Also, most IT managers were not willing to change (i.e. remove the individual PSUs from their equipment), as it would definitely void the warranty.

On top of that, energy costs (due to poor PSU efficiencies and the resulting heat) were not on the minds of most CIOs / IT managers. And the noise generated by the cooling fans is the least of their concerns, as the equipment is locked away in a room - out of sight, out of mind.
