next gen nVidia to pull 200+ W

They make noise, too.

Moderators: NeilBlanchard, Ralf Hutter, sthayashi, Lawrence Lee

Straker
Posts: 657
Joined: Fri Jul 23, 2004 11:10 pm
Location: AB, Canada
Contact:

Post by Straker » Sun May 01, 2005 6:24 pm

EvoFire wrote:What are the typical wattages consumed by current cards anyway? For like 9600s, 9800s, 6600s, X800s, 6800s? I just want to compare the power used today and tomorrow.
Try the search and look for the xbitlabs article, I think. I forget the exact wattages, but current ATI cards aren't that bad - I think the 9800XT uses more power under load than a stock X800XT, and all the current ATI cards are reasonably polite when idle. nVidia's power consumption got worse, though.

EvoFire
Posts: 265
Joined: Tue Dec 21, 2004 10:06 pm
Location: Vancouver, BC
Contact:

Post by EvoFire » Sun May 01, 2005 8:10 pm

Seems like the top current ATi cards don't consume more than the top cards of yesteryear... That is indeed very interesting.

tay
Friend of SPCR
Posts: 793
Joined: Sat Dec 06, 2003 5:56 pm
Location: Boston, MA
Contact:

Post by tay » Sun May 01, 2005 8:43 pm

EvoFire wrote:Seems like the top current ATi cards don't consume more than the top cards of yesteryear... That is indeed very interesting.
BUT WE WILL HAVE 200WATT CARDS YOU HEAR!!!!!

Laurent
Posts: 25
Joined: Tue Jun 15, 2004 7:14 pm

Post by Laurent » Sun May 01, 2005 8:44 pm

DG wrote:I don't think that 200W cards will become a reality in the (near) future. How will they cool such a beast? Air cooling can't handle it. Maybe watercooling...
nici wrote:It will probably never happen, I'm just guessing here, but I would think that computers have pretty much reached the highest power draw they will have. Newer CPUs are faster but still use less power, HDDs are likely to move towards smaller sizes soon, PSUs are getting more efficient... I would imagine GPUs will follow this in a while, because if the heat dissipation continues to increase, it will soon be impossible to cool the cards with something that only takes up one slot. I'm also hoping graphics card manufacturers will incorporate something similar to Cool'n'Quiet sometime in the near future...
Graphics processor performance is evolving much faster than CPU performance. A while back, GPU power dissipation was an order of magnitude lower than for CPUs. These days it's about the same. If the GPU performance improvement trend continues, it's fairly clear that 200W is in sight and just a first step... And unlike CPUs, which are going in the slower/more-in-parallel direction, GPUs have no such architectural solution to the power dissipation problem: they are already doing that. They have chips that run at high clock speeds and carry lots of logic transistors from brute-force parallelism. They can still improve the clock speeds quite a bit (probably 2x or more going to semi- or full-custom methodology) and they'll keep putting more transistors in the same chip as process technology allows. Are they going to be able to do all that within the same power envelope? I don't see how. Something has to change dramatically if we don't want to have 500W GPUs in 2010.
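
Just to put numbers on that trend (a rough sketch in Python; the ~75W starting point for a current flagship and the 2-2.5 year doubling period are illustrative assumptions, not measured figures):

Code:
# Hedged sketch: where high-end GPU board power ends up if the recent trend simply continues.
# The 75 W baseline and the doubling period are assumptions for illustration only.

def projected_power(start_watts, start_year, target_year, doubling_years):
    """Exponential growth with a fixed doubling period."""
    return start_watts * 2 ** ((target_year - start_year) / doubling_years)

for year in (2005, 2007, 2010):
    print(year, round(projected_power(75, 2005, year, 2.5)), "W")
# 2005 -> 75 W, 2007 -> ~131 W, 2010 -> 300 W; with a 2-year doubling, 2010 lands near 430 W,
# which is the kind of trajectory that makes "500 W by 2010" look plausible if nothing changes.
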
mathias wrote:I'm pretty sure DDR2 is completely different from GDDR2. It might have similarities with it, and with GDDR3, but it's a separate specification.
Actually, GDDR3 is derived from DDR2 and identical in all but a few things that gear it towards wider chips (x32 vs. x4, x8, x16 for CPU memory) and point-to-point configurations (and of course roughly 2x the frequency).
alglove wrote:EvoFire, I have read in the rumor mill that AMD is considering extending the use of DDR, skipping DDR2, and going straight to DDR3 or XDR, as you have suggested. However, the GDDR3 currently in use by graphics cards is a type of DDR optimized for graphics cards.
DDR3 is still 2yrs away at least. DDR2-800 has yet to start appearing. DDR won't ship much faster than the current DDR-400 if ever. I don't know about XDR being used by AMD. I assume they'll support FBDIMM (fully-buffered DIMMs) at some point, for servers and maybe high end PCs/workstations. For low-to-mid-end PCs, I don't see how they could skip DDR2 (but who knows...). Single-core chips might not see much of an improvement with DDR2 (although DDR2-800 should be much faster than DDR-400). Dual-core chips should need the extra bandwidth provided by DDR2 (my guess again).

Laurent

lorvut
Posts: 7
Joined: Thu May 05, 2005 5:53 am

Post by lorvut » Thu May 05, 2005 6:20 am

It's not strange that the cards need more power every new generation. The fact that ATI and Nvidia are competing is mostly the reason for this.

It's important for them to be the performance leader. And most hardcore gamers only compare benchmarks.

Let's say it's possible to design a card that gives you 85% of the performance using 50% of the power compared to a competing design. The sad truth is that they can't build that card if it's intended to be the top-of-the-line card. Not if the competing company prefers 15% more performance and builds the other type of card. The more efficient card will not be considered any good when the hardware sites that hardcore gamers read test and compare them. They will not get the "performance leader crown", and that will hurt them a lot... because that "crown" helps them sell all their other cards too. It's important for a company's image and marketing to have the fastest card.

So I wouldn't be surprised if this competition and this way of testing and thinking mean that we will see 300W and 400W graphics cards in the future. If performance is everything, it will probably always be possible to increase it by using more power. And then we have SLI...

Regards!

EvoFire
Posts: 265
Joined: Tue Dec 21, 2004 10:06 pm
Location: Vancouver, BC
Contact:

Post by EvoFire » Fri May 06, 2005 12:10 pm

That 85% card may very well become the new mainstream. Definitely my choice, and most other people's choice, in cards.

And knowing that the X800s use less power than the 9800XTs, that at least is an achievement.

Green Shoes
*Lifetime Patron*
Posts: 477
Joined: Thu Feb 17, 2005 6:41 am
Location: Nashville, TN

Post by Green Shoes » Fri May 06, 2005 1:06 pm

How did I miss this interesting thread? :D I think only time will tell what happens with these cards (and not much time, as R520 and NV70 are both supposed to make cameos sometime in June). The next generation will hopefully get more efficient, but I wouldn't put any money on it. CPUs have, to a point... but I really think that's all going to change with the big push to dual-cores; I think we will see CPU TDP push way ahead of GPU again. Those things are going to require a lot of juice (dual Intel EE? sheesh :roll: ). If GPUs follow suit and become dual-core themselves... well, I'll be able to fry my bacon and eggs on the heatsink while I have a little morning gaming session.

I'm not sure what size fab process the next-gen cards are going to use, but both ATI and nVidia have had problems with 110nm... it's just a matter of 130nm being a nice mature process that they've worked most of the kinks out of. Eventually 110nm will actually reduce TDP instead of increasing it... but by that time they'll be releasing 90nm versions. :wink:

And, as lorvut said, there's always SLI....

Bitter Jitter
Posts: 141
Joined: Tue Jul 20, 2004 7:16 am
Location: Norwich, England

Post by Bitter Jitter » Fri May 06, 2005 1:20 pm

mathias wrote:
EvoFire wrote:I didn't know that... or probably didn't remember after reading, I have a rather bad memory. And 6600's leaky?? First time I've heard that, though, it might be true, I'm not sure.
It runs hotter than the more powerful 6800NU, almost as hot as a 6800GT, despite using a finer process. Seems pretty obvious to me.
The reason for the 6600's reasonably high power consumption might not be down to the manufacturing process.

6600GT
GPU: 500 MHz (8 pixel pipelines)
Idle: 1.431 V, 18.47 W
Load: 1.477 V, 47.89 W

6800
GPU: 350 MHz (12 pixel pipelines)
Idle: 1.22 V, 16.96 W
Load: 1.22 V, 38.88 W

6800GT
GPU: 350 MHz (16 pixel pipelines)
Idle: 1.35 V, 23.41 W
Load: 1.35 V, 55.39 W

The 6600GT has a higher Vcore of 1.477 V compared to the 6800 at 1.22 V. It also runs at a 500 MHz clock speed compared to 350 MHz. These two factors could be adding to the reason why the 6600GT uses more power than the 6800.
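
The classic back-of-the-envelope relation here is that dynamic power scales roughly with C x V^2 x f. A quick sketch comparing only the voltage and clock factors from the numbers above (the switched capacitance of the two chips differs - 8 vs. 12 pipes and different transistor counts - so this deliberately leaves C out):

Code:
# Compare only the V^2 * f part of P ~ C * V^2 * f for the 6600GT vs. the 6800.
# C (switched capacitance) is not modelled; the two chips have different pipe and
# transistor counts, so treat the result as an upper bound on the voltage/clock effect.

def v2f(voltage, mhz):
    return voltage ** 2 * mhz

ratio = v2f(1.477, 500) / v2f(1.22, 350)   # 6600GT load vs. 6800 load
print(round(ratio, 2))                      # ~2.09

On voltage and clock alone the 6600GT "should" burn about twice what the 6800 does; in the measurements above it only draws ~23% more (47.89 W vs. 38.88 W), which is roughly what you would expect once the 6800's extra pipes (more capacitance) are factored back in.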

mathias
Posts: 2057
Joined: Sun Jul 11, 2004 3:58 pm
Location: Toronto
Contact:

Post by mathias » Fri May 06, 2005 6:24 pm

No-one's ignoring those differences; the 6800s do somewhat more work, the 6800GTs do considerably more, but does it really matter how they do it? C'mon, the 6800GT has twice as many pixel pipes and twice as many vertex pipes; that's not cheap heat-wise. And you can see from the overclocking tests there how little difference clock speed makes. A finer manufacturing process is supposed to help, but here it does the opposite, so Nvidia needs to do some serious repairs to their plumbing.

EvoFire
Posts: 265
Joined: Tue Dec 21, 2004 10:06 pm
Location: Vancouver, BC
Contact:

Post by EvoFire » Fri May 06, 2005 7:50 pm

Heck, the plumbing for the big companies in graphics and CPUs is messed up. Intel messed up their 90nm and nVidia messed up their 110nm...

yeha
Posts: 292
Joined: Thu Jan 13, 2005 7:54 pm

Post by yeha » Fri May 06, 2005 9:15 pm

i think the problems are more with tsmc (the actual manufacturer) than nvidia (the gpu designer).

both nvidia and ati use tsmc to make many (all?) of their gpus; all ati/nvidia do is the design work. their 110nm cards haven't performed as well as a die-shrink normally would (see 6600, x800xl), and i can't believe that design teams from both companies could botch so badly. more likely tsmc aren't using the low-k dielectrics or soi tech that amd so successfully used for their 90nm shift. if these gpus could be made using amd/ibm's tech, they'd most likely be a couple hundred mhz faster and only put out half the heat. hopefully tsmc licenses whatever tech they need to, however that seems unlikely as ibm is moving towards the job that tsmc itself does - manufacturing chips that other firms design. i doubt ibm would want to give a competitor such a helping hand.

increased heat and poor voltage/frequency improvements aside, the 110nm method does decrease per-unit costs (40% more chips per wafer), which means cheaper cards for us. hopefully tsmc
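
a quick sanity check on the "40% more chips per wafer" figure (a rough sketch only - it assumes an ideal linear shrink where die area scales with the square of the node, which real shrinks never quite manage):

Code:
# sanity check: with an ideal linear shrink, die area scales with the square of the node,
# so chips-per-wafer scales with the inverse. real dice don't shrink this cleanly
# (pads, analog blocks and yield don't scale), so this is a ballpark only.
old_node, new_node = 130.0, 110.0
area_ratio = (new_node / old_node) ** 2                  # ~0.72 of the original die area
print(f"{1 / area_ratio - 1:.0%} more dice per wafer")   # ~40%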

Bitter Jitter
Posts: 141
Joined: Tue Jul 20, 2004 7:16 am
Location: Norwich, England

Post by Bitter Jitter » Sat May 07, 2005 12:11 pm

It depends on what you define as a mess-up. For both Intel with Prescott and Nvidia with the 6600, the die shrink has reduced manufacturing costs. Is that not the whole point of making the die smaller?

I don't see the massive failure of the 6600; its heat output is in line with the 9800 Pro's, but it is working at a higher clock speed, so is it a failure?

tay
Friend of SPCR
Posts: 793
Joined: Sat Dec 06, 2003 5:56 pm
Location: Boston, MA
Contact:

Post by tay » Sat May 07, 2005 1:14 pm

AFAIK, at least until recently TSMC had a low-k 130nm and a plain-jane 110nm. They do have a 90nm too, but I'm not sure why ATI/Nv aren't using it.

I'll eat my shorts.... NO NO I'll eat your shorts if the next gen parts consume 200W (non SLI of course).

lorvut
Posts: 7
Joined: Thu May 05, 2005 5:53 am

Post by lorvut » Sun May 08, 2005 6:23 am

EvoFire wrote:That 85% card may very well become the new mainstream. Definitely my choice, and most other people's choice, in cards.

And knowing that the X800s use less power than the 9800XTs, that at least is an achievement.
I think maybe ATi is a bit more concerned about efficiency. But they probably can't afford to give up the fight for the performance crown. Even if they know that Nvidia's next card will have insane power needs and that they will also have to make a card (almost) like that to compete. Or at least a card that needs more power than they would have preferred if they had a choice.

Because the PR and marketing people know that without the fastest high-end card they will sell fewer cards. Even low- and mid-priced cards will sell less in that case.

Also, the 9700 Pro, when it was released, forced Nvidia to design for performance only. It was a shock for them. And I think it more or less made them give up all ideas about designing efficient cards. Probably the idea is to use as much brute force as they need to compete and be faster if possible.

Personally I like the latest generation of Nvidia cards. But I didn't like the 5800 and 5900 cards.

And it will be interesting to see what happens next generation.

Regards!

EvoFire
Posts: 265
Joined: Tue Dec 21, 2004 10:06 pm
Location: Vancouver, BC
Contact:

Post by EvoFire » Sun May 08, 2005 5:07 pm

I feel that graphics cards are playing catch-up with games. Games seem to always be ahead of graphics cards in requirements. Even with a 6800U or X850XT PE, I don't think you can play D3 or HL2 with all the eye candy on, right? It was the same with last-gen cards and Halo 2 and Far Cry.

lorvut
Posts: 7
Joined: Thu May 05, 2005 5:53 am

Post by lorvut » Mon May 09, 2005 2:25 am

EvoFire wrote:I feel that graphics cards are playing catch-up with games. Games seem to always be ahead of graphics cards in requirements. Even with a 6800U or X850XT PE, I don't think you can play D3 or HL2 with all the eye candy on, right? It was the same with last-gen cards and Halo 2 and Far Cry.
Yes, that's often true. Some game developers want to make a graphics engine that can last for several years. They want it to look better when new, faster cards that support more advanced graphics are released in the future. And that means that even if you own a high-end card, you can't use the highest quality graphics settings when the game is new. You might need the fastest card 2-5 years from now for that. Not so strange, since it takes several years to develop a new graphics engine.

Regards!

Green Shoes
*Lifetime Patron*
Posts: 477
Joined: Thu Feb 17, 2005 6:41 am
Location: Nashville, TN

Post by Green Shoes » Mon May 09, 2005 6:24 am

Good point, lorvut. This is especially true for developers like Valve and Id who plan on licensing their code/graphics engine out to other developers... the original Half-Life code was licensed for years before a better one came along.

smiechoo
Posts: 13
Joined: Wed Dec 29, 2004 10:56 am

Post by smiechoo » Tue May 17, 2005 4:38 pm

[quote="Splinter"]CPUs and GPUs are very very different.
GPUs only do very specific things. They work with triangles and pixels. Shading and texturing. They can't do anything but graphics, they're only designed to do graphics, and they do it well.[/quote]

I've read that it is possible to transform sound into triangles and textures and then modify it using the GPU. Therefore it is possible to use it for something other than graphics; you just have to feed it data it can digest... and it does it faster than the CPU...

EvoFire
Posts: 265
Joined: Tue Dec 21, 2004 10:06 pm
Location: Vancouver, BC
Contact:

Post by EvoFire » Tue May 17, 2005 8:10 pm

smiechoo wrote:
Splinter wrote:CPUs and GPUs are very very different.
GPUs only do very specific things. They work with triangles and pixels. Shading and texturing. They can't do anything but graphics, they're only designed to do graphics, and they do it well.
I've read that it is possible to transform sound into triangles and textures and then modify it using the GPU. Therefore it is possible to use it for something else than the graphics, you just have to feed it data it can digest... and it does it faster than the CPU...
Translating it into something that the GPU can digest basically means we are adding another chip and another heat source right?? I don't think I like that.

tay
Friend of SPCR
Posts: 793
Joined: Sat Dec 06, 2003 5:56 pm
Location: Boston, MA
Contact:

Post by tay » Tue May 17, 2005 9:51 pm

EvoFire wrote:Translating it into something that the GPU can digest basically means we are adding another chip and another heat source right?? I don't think I like that.
This thread is going from the misinformed to the absurd. GPUs do floating point operations (they used to be integer-only until recently) really fast. Audio requires floating point operations. It's not too hard to put an audio processor front/back end on there; the question is why.
And no, we are not getting 200-watt GPUs in the next gen (R520, G70).
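
The "feed it data it can digest" idea really just means packing the samples into a 2D array (what the GPU would see as a texture) and running the same operation on every element, the way a fragment shader does. Here is a CPU-side sketch of that data layout in Python/NumPy, purely for illustration - an actual GPGPU version circa 2005 would upload the array as a texture and do the same per-pixel math in a shader:

Code:
# Illustrative only: lay an audio stream out as a 2-D "texture" and apply one operation
# to every element, the way a fragment shader would. This runs on the CPU with NumPy;
# it is not GPU code, it just shows the data mapping.
import numpy as np

samples = np.sin(np.linspace(0, 2 * np.pi * 440, 65536, dtype=np.float32))  # fake audio
texture = samples.reshape(256, 256)              # 1-D stream -> 256x256 "texture"

gain = 0.5
processed = np.clip(texture * gain, -1.0, 1.0)   # per-"pixel" op: gain plus clipping

audio_out = processed.reshape(-1)                # back to a flat sample stream
print(audio_out.shape)                           # (65536,)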

halcyon
Patron of SPCR
Posts: 1115
Joined: Wed Mar 26, 2003 3:52 am
Location: EU

Post by halcyon » Tue May 17, 2005 10:40 pm

Preliminary (unconfirmed, via HKEPC) info on the next-generation GeForce 7800 / 7800 GTX:

# G70 comes with 3 models; GTX, GT and Standard
# Single card requires min. 400W PSU with 12V rating of 26A
# SLI configuration requires min. 500W PSU with 12V rating of 34A

It's not a 200W pull yet, but it is pretty high if those are anywhere close to reality (assuming nVidia learned from their previous XXX W PSU required blunder).
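
For what it's worth, translating those 12V ratings into watts (a quick sketch; these are whole-system 12V budgets, not the card's own draw, so the per-card number is an inference, not a spec):

Code:
# What the quoted PSU requirements imply on the 12 V rail. These cover the whole system
# (CPU, drives, fans), not just the card, so the per-card delta below is a crude inference.
single_12v = 26 * 12        # 312 W of 12 V capacity recommended for one card
sli_12v    = 34 * 12        # 408 W recommended for two cards
extra_per_card = sli_12v - single_12v
print(single_12v, sli_12v, extra_per_card)   # 312 408 96

The ~96W delta between the single-card and SLI recommendations is almost certainly padded for cheap PSUs, but it suggests nVidia is budgeting well under 200W per card - closer to 100W or so.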

yeha
Posts: 292
Joined: Thu Jan 13, 2005 7:54 pm

Post by yeha » Tue May 17, 2005 11:03 pm

the best part of all this is that the limited space card manufacturers actually have for exhausting all this heat, especially in sli configurations, puts an upper limit on the realistic amount of power they can ever draw. you just can't have two cards in sli positions putting out 120+ watts of heat each without extremely extravagant cooling systems - look at how much stress a 100+ watt prescott puts on something as big as the xp-120.

i think 100 watts is as high as future cards will go; any more than that and you effectively alienate the mass market. and i do believe that the market for 6800- or x850-class cards is 'mass' compared to the market for 120-watt gpus and the associated cooling problems. such cards would be relegated to alienware-type boutique custom pcs; someone like dell wouldn't want to go near it.

and on the point of gpus processing triangles and polygons, yes it's a tad off. gpus today are capable of a lot more than graphics processing, have a read through the gpgpu project page for the kind of jobs people are looking to offload from the cpu onto modern gpus - some exciting possibilities!

Green Shoes
*Lifetime Patron*
Posts: 477
Joined: Thu Feb 17, 2005 6:41 am
Location: Nashville, TN

Post by Green Shoes » Wed May 18, 2005 5:36 am

halcyon wrote:...# Single card requires min. 400W PSU with 12V rating of 26A
# SLI configuration requires min. 500W PSU with 12V rating of 34A

It's not a 200W pull yet, but it is pretty high if those are anywhere close to reality (assuming nVidia learned from their previous XXX W PSU required blunder).
I think they are still guessing quite high (just like AMD and Intel do) because they have to assume that a lot of people are going to use a really crappy 400W power supply... if you're using something well-built with good efficiency, e.g. Seasonic et al, you can most likely get away with much less than that. My DFI nF4 mobo supposedly "requires" a 400W ATX power supply... I know a great many people who drive a nice system with less than that.

pony-tail
Posts: 488
Joined: Sat Aug 23, 2003 4:39 pm
Location: Brisbane AU

Post by pony-tail » Wed May 25, 2005 2:52 pm

Oh well, bring on the 200-watt GPUs. I'll put two in SLI and add a dual-core PressHot, or maybe even a couple of top-end dual-core Xeons.
I'll use it to warm my dinner.
I can retire my toaster oven then (it's only 500 watts).

StarfishChris
Posts: 968
Joined: Fri Jan 07, 2005 7:13 pm
Location: Bristol, UK
Contact:

Post by StarfishChris » Wed May 25, 2005 3:10 pm

pony-tail wrote:Oh well, bring on the 200-watt GPUs. I'll put two in SLI and add a dual-core PressHot, or maybe even a couple of top-end dual-core Xeons.
I'll use it to warm my dinner.
Mmm, fish & microchips!

Green Shoes
*Lifetime Patron*
Posts: 477
Joined: Thu Feb 17, 2005 6:41 am
Location: Nashville, TN

Post by Green Shoes » Thu May 26, 2005 5:43 am

StarfishChris wrote:
pony-tail wrote:Oh well, bring on the 200-watt GPUs. I'll put two in SLI and add a dual-core PressHot, or maybe even a couple of top-end dual-core Xeons.
I'll use it to warm my dinner.
Mmm, fish & microchips!
:lol: :lol: :lol:

But does silicon really taste better with a little malt vinegar spritzed on it?

pony-tail
Posts: 488
Joined: Sat Aug 23, 2003 4:39 pm
Location: Brisbane AU

Post by pony-tail » Thu May 26, 2005 9:52 pm

Do not forget the salt!

pony-tail
Posts: 488
Joined: Sat Aug 23, 2003 4:39 pm
Location: Brisbane AU

Post by pony-tail » Thu May 26, 2005 10:14 pm

On a slightly more serious note:
My LAN box (not a silent machine) has an AGP 6800 Ultra. It has a 2-slot cooler with a 5000 rpm (a guess) fan; it heats up the case pretty badly and gets so hot that you could not hold your finger on it for more than a few seconds.
So what is a 200-watt GPU going to be like?
This is heading into insanity.
Are we going to need 3-phase power for our gaming PCs, with automotive-style radiators to cool them? Do we really need the power that they are promoting (pushing)?
Reality check! I have 5 PCs for different purposes, and I have to pay ever-increasing power bills -
are computers going to be the next target of the greenies and conservationists?
The industry is incredibly wasteful; a 4-year-old PC is virtual junk and becomes landfill and/or scrap! They have to come up with something they can sucker people (lots of people) into forking over lots of cash for - more power is always easy to market.
A CPU or GPU with the same performance but more efficient and more expensive would be a hard sell to Mr. John Citizen, head of corporate procurement, at the local city centre.
Remember, the bean counters always win. Shareholders make sure of it -
got to get their dividends.

Tzupy
*Lifetime Patron*
Posts: 1561
Joined: Wed Jan 12, 2005 10:47 am
Location: Bucharest, Romania

Post by Tzupy » Sun May 29, 2005 8:47 am

If these specs are correct - 8 vertex and 24 pixel pipes, 430 / 1400 MHz - manufactured at 0.11 microns, then my guesstimate is about 120W. If the core ran at 500 MHz, it would be about 160W, I think. Maybe even lower, so the 200+ watts is BS.
It remains to be seen how much the pipes have improved...
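
That 120W guesstimate can be roughly reproduced by scaling a known card linearly with pipe count and clock (a sketch only: the ~72W load figure for a 6800 Ultra at 16 pipes / 400 MHz is a from-memory, xbitlabs-style number and may be off, and linear scaling ignores voltage changes, memory power and per-pipe improvements):

Code:
# Naive scaling from an assumed 6800 Ultra baseline (~72 W load, 16 pipes, 400 MHz).
# Linear scaling with pipes and clock only - no voltage or architecture effects modelled.

def scaled_power(base_w, base_pipes, base_mhz, pipes, mhz):
    return base_w * (pipes / base_pipes) * (mhz / base_mhz)

print(round(scaled_power(72, 16, 400, 24, 430)))  # ~116 W at 430 MHz
print(round(scaled_power(72, 16, 400, 24, 500)))  # ~135 W at 500 MHz, before any voltage bump

The higher 160W figure for a 500 MHz core presumably assumes a voltage increase as well, which a linear sketch like this doesn't capture; either way it stays well short of 200W.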

Bitter Jitter
Posts: 141
Joined: Tue Jul 20, 2004 7:16 am
Location: Norwich, England

Post by Bitter Jitter » Mon May 30, 2005 2:51 pm

pony-tail wrote:The industry is incredibly wasteful; a 4-year-old PC is virtual junk and becomes landfill and/or scrap! They have to come up with something they can sucker people (lots of people) into forking over lots of cash for - more power is always easy to market.
A CPU or GPU with the same performance but more efficient and more expensive would be a hard sell to Mr. John Citizen, head of corporate procurement, at the local city centre.
This is a major problem right now. Throwing out useful computers because they take a few more seconds to load up Windows is crazy, but it's the capitalist world we live in.

Once the environmentalists have killed the SUV, the gamer's graphics card is next on the hit list! :twisted:
