Bad news for future AMD CPU cooling - 105W TP

Cooling Processors quietly

halcyon
Patron of SPCR
Posts: 1115
Joined: Wed Mar 26, 2003 3:52 am
Location: EU

Bad news for future AMD CPU cooling - 105W TP

Post by halcyon » Tue Dec 16, 2003 6:58 am

Looks like AMD has some problems with thermal design as well:

http://webpages.charter.net/tates/MO/amd_90nm_heat.jpg
http://www.overclockers.com/tips00501/
http://www.xbitlabs.com/news/cpu/displa ... 84404.html

So that means a 105W max thermal power for speeds of 2.6GHz and above on the 90nm process.

Hmm... looks like next year will be the year when more and more silent enthusiasts just have to move to liquid cooling or some other technology (Cooligy, anyone?).

WannaOC
Posts: 85
Joined: Wed Jun 11, 2003 1:59 pm

Post by WannaOC » Tue Dec 16, 2003 7:12 am

Jeez, with that kind of heat it will take a pretty good H2O setup just to cool it. Hopefully they will fix it, or else my good ol' Barton will be around for a long while. :D

pingu666
Friend of SPCR
Posts: 739
Joined: Sun Aug 11, 2002 3:26 pm
Location: swindon- england :/
Contact:

Post by pingu666 » Tue Dec 16, 2003 7:20 am

waterblock with a 2nd set of channels, use it to warm your soup :D

jojo4u
Posts: 806
Joined: Sat Dec 14, 2002 7:00 am
Location: Germany

Post by jojo4u » Tue Dec 16, 2003 10:22 am

pingu666 wrote:waterblock with a 2nd set of channels, use it to warm your soup :D
no problem... It's called tass-o-matic!

Semm
Posts: 179
Joined: Mon Sep 08, 2003 2:06 am
Location: SoCal, USA

Post by Semm » Tue Dec 16, 2003 10:43 am

Mug-o-matic for the non-German speakers ;) I laughed when I saw that on Innovatek's site.

Y'know, you could use one of these unpleasantly hot processors as a pre-heater for the water heater in your house. We have a solar water-heating system that does much the same thing, though with all the sun we get down here it actually takes care of most of the heating. Hmmm...

Or...Start a folding farm of 100W+ processors and never have to pay for heating water again! Though the electricity bill might be a bit unpleasant :D

Hope AMD and Intel can get the 90nm bugs worked out soon, in any case. And I don't mean by redefining what's in spec...

Semm

Vegita
Posts: 64
Joined: Mon Dec 15, 2003 8:02 am
Location: Toronto, Canada

Post by Vegita » Thu Dec 18, 2003 8:24 am

That would be neat if the heat could somehow be captured and used to help power the CPU.

fancontrol
Posts: 291
Joined: Mon Feb 10, 2003 11:19 am

Post by fancontrol » Thu Dec 18, 2003 8:57 am

Vegita wrote:That would be neat if the heat could somehow be captured and used to help power the CPU.
I think you'd be disappointed with what it takes to do that. You might be able to use a Peltier to push some electrons around, but I bet you'd have a hard time driving an LED, much less a CPU.

silvervarg
Posts: 1283
Joined: Wed Sep 03, 2003 1:35 am
Location: Sweden, Linkoping

Post by silvervarg » Thu Dec 18, 2003 9:09 am

It seems like both AMD and Intel are going to face huge problems with heat.
Both seem to be looking at multi-core CPUs and rotating the active core.
That way they can limit the heat produced at any one spot in the CPU.

The downside is that you are very close to adding many complete CPUs and only using one at a time. Using multiple CPUs that are slower and generate less heat seems like a much more cost-efficient solution.
The problem is that most software is not written to handle many CPUs very well.

I think we will soon see computer speed increases slow to a lot less than a doubling every 18 months. The good side of this coin is that the machines we buy today may be useful a little longer than we are used to.
We have already seen some of this effect: computer speed from the end of 2001 to the end of 2003 only just managed to double (that is, in 24 months instead of 18). If, as predicted, this continues and slows down even further, we would not even have computers twice as fast as today's by the end of 2005.

The only reasonable way forward would be 64-bit programs and computers, and developing all software so it can benefit from multiple CPUs. The problem is that it costs more to develop such software...

Gooserider
Posts: 587
Joined: Fri Aug 01, 2003 10:45 pm
Location: North Billerica, MA, USA
Contact:

Post by Gooserider » Tue Dec 23, 2003 7:01 pm

silvervarg:
...The problem is that most software is not written to handle many CPUs very well.
...
The only reasonable way forward would be 64-bit programs and computers, and developing all software so it can benefit from multiple CPUs. The problem is that it costs more to develop such software...
Just to point out, neither multiprocessor nor 64-bit programming is all that terribly difficult; Linux has had both capabilities for years... (Just ask SCO! :lol: ) Of course, there are some low-performance, multi-billion-dollar software companies that don't have much skill at porting their proprietary x86 code to other architectures, but that doesn't seem to stop the Open Source people from doing it....

Gooserider

engseng
Posts: 136
Joined: Mon Sep 01, 2003 10:44 pm
Location: Kuala Lumpur, Malaysia.

Post by engseng » Tue Dec 23, 2003 8:04 pm

I'm looking forward to seeing what methods Mike Chin will use to cool his future computers silently. Hopefully this will make for a bigger market for silent PC products once people start noticing how noisy their PCs are. Or maybe everyone will just start branding their products "ULTRA QUIET".

Beyonder
Posts: 757
Joined: Wed Sep 11, 2002 11:56 pm
Location: EARTH.

Post by Beyonder » Tue Dec 23, 2003 8:19 pm

Oh well. VIA, here I come....

mpteach
Posts: 426
Joined: Mon Dec 22, 2003 8:14 pm
Location: CT USA
Contact:

Industry trends

Post by mpteach » Tue Dec 23, 2003 8:26 pm

The computers are dissipating more heat, but we're doing more with them.
Back in the day when a CPU dissipated 5W, you couldn't play a serious multiplayer game, watch DVDs, and edit movies. We're paying more in heat energy, but we're getting more.

Does anybody know which has increased more in the last 12 years: heat production or processing power?

mpteach
Posts: 426
Joined: Mon Dec 22, 2003 8:14 pm
Location: CT USA
Contact:

Post by mpteach » Tue Dec 23, 2003 8:29 pm

The microwave oven was invented after a radar engineer accidentally melted the chocolate bar in his pocket.

I can't wait to see what will happen to computers.

That is, if global warming doesn't get us first.

silvervarg
Posts: 1283
Joined: Wed Sep 03, 2003 1:35 am
Location: Sweden, Linkoping

Post by silvervarg » Thu Dec 25, 2003 11:52 am

Gooserider:
Just to point out, neither multiprocessor nor 64-bit programming is all that terribly difficult; Linux has had both capabilities for years... (Just ask SCO! ) Of course, there are some low-performance, multi-billion-dollar software companies that don't have much skill at porting their proprietary x86 code to other architectures, but that doesn't seem to stop the Open Source people from doing it....
Well, 64-bit programming is not difficult at all, but the change is the problem. If you have a 64-bit processor you want 64-bit programs, but all the people with 32-bit processors still need 32-bit programs. On top of this, Intel might not choose the same 64-bit instruction set as AMD did...
Having the operating system support 64 bits and multiple processors is not that hard; even Microsoft has achieved this with OK quality. But this is just the first step. Every single program that needs lots of CPU power has to be written to use multiple processors efficiently, and that is a lot of hard work and more opportunities for bugs. Since it takes more man-hours to develop the programs, the software becomes more expensive. (In the Open Source world this means fewer programs written with the same effort.)

Another thing is that the "problem" a program solves in itself contains parts that are less than ideal for processing on multiple processors. This means that many programs cannot, even in theory, use 100% of the processing power of many processors.
With the standard architecture used in today's PCs, you typically get about n*0.9^(n-1) times the work of a single processor done with n processors.
This means that with 4 processors, more than a full processor is typically wasted (only 2.916 times the speed of a single processor can be achieved).
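
To see where that rule of thumb leads, here is a minimal Java sketch (the class name is made up, and the 0.9-per-extra-processor factor is simply the figure quoted above, not a measured value):

Code:
public class ScalingRuleOfThumb {
    public static void main(String[] args) {
        // Effective speedup under the rule of thumb: n * 0.9^(n-1)
        for (int n : new int[] {1, 2, 4, 8, 16}) {
            double speedup = n * Math.pow(0.9, n - 1);
            System.out.printf("%2d processors -> %.3f times single-CPU speed%n", n, speedup);
        }
    }
}

At 4 processors it prints the 2.916 figure above; at 16 processors the rule predicts only about 3.3 times the speed of a single CPU.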

There are ways around this by changing to other architectures. I have programmed on a supercomputer with 65536 processors, and that is not something I would like to use for my everyday programs.

MikeC
Site Admin
Posts: 12285
Joined: Sun Aug 11, 2002 3:26 pm
Location: Vancouver, BC, Canada
Contact:

Re: Industry trends

Post by MikeC » Thu Dec 25, 2003 2:21 pm

mpteach wrote:Does anybody know which has increased more in the last 12 years: heat production or processing power?
Computer Noise in the 21st Century, an SPCR article from over a year ago, gives you an idea.

Gooserider
Posts: 587
Joined: Fri Aug 01, 2003 10:45 pm
Location: North Billerica, MA, USA
Contact:

Post by Gooserider » Sat Dec 27, 2003 7:44 pm

silvervarg:
Well, 64-bit programming is not difficult at all, but the change is the problem. If you have a 64-bit processor you want 64-bit programs, but all the people with 32-bit processors still need 32-bit programs. On top of this, Intel might not choose the same 64-bit instruction set as AMD did...
Well, I'm not a programmer, but I know that Linux has ALREADY been ported to both the AMD and Intel 64-bit chips (along with everything else from wristwatches to mainframes), and so has gcc. Supposedly most Linux S/W only needs a recompile with appropriate options to make it 64-bit ready (especially the stuff that can already run 64-bit on SPARC).

In terms of multiprocessor support, I agree there are some inefficiencies there, but supposedly Linux has done some extra work to keep processor hopping to a minimum, especially in the new 2.6 kernel. The claim is that by keeping any given thread on the same processor most of the time, but distributing the threads, one improves efficiency considerably, since it reduces cache swaps, etc.

Gooserider

silvervarg
Posts: 1283
Joined: Wed Sep 03, 2003 1:35 am
Location: Sweden, Linkoping

Post by silvervarg » Sun Dec 28, 2003 2:36 pm

64-bit
I agree that most of the time it only takes a recompile to make a program 64-bit for the appropriate processor. Assuming you want to distribute binaries (still the most common way), you might need to distribute three versions (32-bit, 64-bit AMD, 64-bit Intel).
To make sure the programs work fine, and if you are a serious software vendor, you definitely want to test all three versions. That can push the testing cost up to three times higher.
So from a technical point of view it is not a problem. But testing and installation programs become more of a mess, so from a marketing and cost perspective it is a problem.

Assuming you distribute the source code as open source, you don't run into the same kind of problems, but you will at least get a lot more support questions about compiling and installation problems.

Multiple processors
This is a bit more complex to explain, but try to see it through the eyes of a programmer.
If you want to run lots of different programs at the same time, each taking a fair amount of CPU power, you can let the OS distribute the different jobs to different processors. I don't know any of the details of the new Linux kernel, but from what you wrote it seems they have improved this part.
Unfortunately, that is normally not the case when you load the CPU a lot. Normally you have a single program that takes most of the CPU. If the program is written for a single processor, things are a lot simpler. To support multiple processors efficiently you need to be able to split each complex task into multiple pieces, synchronize the work, and put the results back together again. To be really efficient you should also split differently depending on the number of available processors.
Now let's assume a customer reports an error that happens once in a while on his 8-processor machine but never on his 2-processor machine. Imagine the debugging nightmare you have ahead of you...

This might give you a clue as to why it becomes a lot more costly to write, debug and support programs that are supposed to make good use of multiple processors.
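
To make that split/synchronize/merge work concrete, here is a minimal Java sketch (the class name and the array-summing job are made up for illustration): it splits one job into one chunk per available processor, runs the chunks in parallel, and then merges the partial results.

Code:
public class SplitAndMerge {
    public static void main(String[] args) throws InterruptedException {
        long[] data = new long[10_000_000];
        for (int i = 0; i < data.length; i++) data[i] = i;

        int cpus = Runtime.getRuntime().availableProcessors();
        Thread[] workers = new Thread[cpus];
        long[] partial = new long[cpus];
        int chunk = (data.length + cpus - 1) / cpus;            // split the problem

        for (int t = 0; t < cpus; t++) {
            final int id = t;
            final int from = t * chunk;
            final int to = Math.min(data.length, from + chunk);
            workers[t] = new Thread(() -> {
                long sum = 0;
                for (int i = from; i < to; i++) sum += data[i]; // parallel work
                partial[id] = sum;
            });
            workers[t].start();
        }
        for (Thread w : workers) w.join();                      // synchronize

        long total = 0;
        for (long p : partial) total += p;                      // merge the results
        System.out.println("sum = " + total);
    }
}

Even in this toy case the split and merge steps run on a single thread, and any bug that depends on the processor count hides inside the worker threads, which is exactly the kind of thing that only shows up on the customer's 8-way box.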

Gooserider
Posts: 587
Joined: Fri Aug 01, 2003 10:45 pm
Location: North Billerica, MA, USA
Contact:

Post by Gooserider » Sun Dec 28, 2003 8:43 pm

Well, I just got some advice from my girlfriend, who IS a programmer (Senior S/W Engineer grade, for a major hardware/software vendor with a largely depressing color scheme and a well-known TLA name ;) she's been doing Java since JDK 1.0.2). She routinely writes multithreaded S/W to run on multiprocessor and multi-machine systems.

She largely agrees with your evaluation: it is not easy to make it both fast and efficient, but it isn't that hard to do multiprocessor or 64-bit.

She does say that if a program is written with more threads than processors, and the compiler/OS is intelligent about thread allocation, then it might be possible to write a program where it doesn't matter how many processors are available. (She says that if someone were doing it, she might consider an invitation to work on such a project, since it sounds like her notion of fun... :cool: )
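
For what it's worth, here is a minimal Java sketch of that idea (the class name and the toy summing tasks are made up for illustration): the program submits far more small tasks than there are processors and lets a pool sized to the machine schedule them, so the same code runs unchanged on 1, 2 or 8 CPUs.

Code:
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ManySmallTasks {
    public static void main(String[] args) throws Exception {
        // Pool sized to whatever machine we happen to be running on.
        int cpus = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(cpus);

        // Submit far more tasks than there are processors.
        List<Future<Long>> results = new ArrayList<>();
        for (int task = 0; task < 100; task++) {
            final long start = task * 1_000_000L;
            results.add(pool.submit(() -> {
                long sum = 0;
                for (long i = start; i < start + 1_000_000L; i++) sum += i;
                return sum;
            }));
        }

        // Collect the partial results as the tasks finish.
        long total = 0;
        for (Future<Long> f : results) total += f.get();
        pool.shutdown();
        System.out.println("total = " + total);
    }
}

Nothing in the program knows how many processors exist; the pool and the OS scheduler sort that out. Whether it ends up efficient is another question, as the next post explains.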

Gooserider

sbabb
*Lifetime Patron*
Posts: 327
Joined: Tue Aug 19, 2003 10:04 am
Location: New Hampshire, USA

Post by sbabb » Mon Dec 29, 2003 1:14 am

64-bit is easy. If you could deal with the migration from 8 to 16 to 32 bits, then 64 bits should be simple.

Multiprocessing isn't too difficult until you get to more than 4 to 8 CPUs, and even the <8 CPU machines are usually not done particularly well. The problem is that everyone focuses on the CPUs and all but ignores the rest of the infrastructure in the computer.

For example, a large TLA hardware/software maker rolled out a High Performance Computing (big number crunching) version of one of their boxes that had half of the CPUs turned off to improve performance! The problem was cache contention, with the cache shared between multiple CPU cores. Turning off half the cores cut the cache contention, so the machine actually ran faster in HPC applications. I remember, about 20 years ago, a vendor of an embedded computer system built a dual-processor machine that performed dramatically worse when you turned on the second CPU.

Companies that do multiprocessing well design their CPU, OS, system interconnect, memory subsystem, and I/O subsystem as an integrated product. The CPU is designed with the operating system in mind, and vice versa. By designing holistically, near-linear performance scaling can be achieved with large (32 to 64+) numbers of processors in a single box. Things like cold-cache migration and interconnect bottlenecks are designed out from the beginning, instead of being kludged together later once the machine doesn't perform as expected.

To nudge this back toward the relevant topic, I'll say that I've never seen a large multiprocessor box that didn't burn up a fair bit of power just generating fan noise.

silvervarg: Intel definitely did not choose the same 64-bit instruction set that AMD chose. AMD chose a superset of Intel's IA-32 (x86) instruction set, while Intel went off in a different (and incompatible) direction with the Itanium. So the AMD64 CPU runs x86 code natively, while the Itanic has to run it slowly in emulation. Good move by AMD; bad move by Intel.

mpteach
Posts: 426
Joined: Mon Dec 22, 2003 8:14 pm
Location: CT USA
Contact:

Post by mpteach » Mon Dec 29, 2003 5:02 am

Well, I just got some advice from my girlfriend, who IS a programmer
Is she older and balding? I think I know that girl :wink:

J/K

If you ever break up and she's in the CT area, give her my email.

I can multithread if she can catch my cycle.

mpteach
Posts: 426
Joined: Mon Dec 22, 2003 8:14 pm
Location: CT USA
Contact:

Post by mpteach » Mon Dec 29, 2003 5:05 am

Is Itanium faster? If it is, then it would run on servers and high-performance machines, and Athlons on average machines. We could have Beta vs. VHS: The Next Generation.

silvervarg
Posts: 1283
Joined: Wed Sep 03, 2003 1:35 am
Location: Sweden, Linkoping

Post by silvervarg » Mon Dec 29, 2003 3:44 pm

Gooserider:
She largely agrees with your evaluation: it is not easy to make it both fast and efficient, but it isn't that hard to do multiprocessor or 64-bit.

She does say that if a program is written with more threads than processors, and the compiler/OS is intelligent about thread allocation, then it might be possible to write a program where it doesn't matter how many processors are available.
Seems like a smart girl. If she ever wants to work in Sweden, drop me an email.

The trick here was the disclaimer "it is not easy to make it both fast and efficient". She then suggests an easy way to enable support for multiple processors, but it is not a way that is very efficient most of the time.
I admit that we sometimes use this approach as well, since it is easy and generic.

I tried to avoid going into details on this, but perhaps this is the time to dive deeper. I think a simple example is called for.
Let's assume I write a program that splits a problem into 100 threads, lets them execute, and then puts the answer together.
The benefit is that I can make use of up to 100 processors.
The time taken to execute is:
1. Splitting part (use 1 processor only).
2. Threaded part (use up to 100 processors).
3. Collecting part (use 1 processor only).

Let's put some time numbers on these to make sense of it (greatly magnified so we don't have to take fractions and such into account).
1. Split 1 CPU second.
2. Work 10 CPU seconds.
3. Collect 2 CPU seconds.

With 100 processors and 100% efficiency in hardware, the total time is:
1 + (10/100) + 2 = 3.1 seconds.
With 2 processors: 1 + (10/2) + 2 = 8 seconds.
With 1 processor: 1 + (10/1) + 2 = 13 seconds.

The processor utilization (shown as CPU load in a good OS):
For 100 processors: about 4.2%
For 2 processors: 81.3%
For 1 processor: 100%

If the same program were written for a single CPU only, the work needed for split and collect would be zero, so the total execution time would be 10 seconds. So by enabling support for 100 processors we have degraded single-CPU performance by 30%.
A more complex program that didn't split the load when only a single processor is available would therefore be 30% faster.
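
The arithmetic above is easy to reproduce; here is a minimal Java sketch (class name made up) that prints the wall-clock time and average CPU load for the same 1 s / 10 s / 2 s example:

Code:
public class SplitWorkCollect {
    public static void main(String[] args) {
        double split = 1.0, work = 10.0, collect = 2.0;        // CPU seconds, as above
        for (int n : new int[] {1, 2, 100}) {
            double wall = split + work / n + collect;          // only the middle part scales
            double used = split + work + collect;              // 13 CPU seconds of actual work
            double load = used / (wall * n) * 100.0;           // average load across n CPUs
            System.out.printf("%3d CPUs: %.1f s wall clock, %.1f%% average load%n", n, wall, load);
        }
    }
}

It prints 13 s / 100%, 8 s / 81.3% and 3.1 s / 4.2%, the figures used above.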

I think you can see my point here. It is a complex problem, and there are no easy, good solutions that work all the time. There are a few rules of thumb, but they are not good enough to rely on everywhere.

Note: A system with 100 processors based on our normal computer architecture will not work at all, for the reasons mentioned by sbabb. 100 processors was just picked as an arbitrary number to show my point more clearly in the above example.

DryFire
Posts: 1076
Joined: Sun May 25, 2003 8:29 am
Location: USA

Post by DryFire » Mon Dec 29, 2003 9:47 pm

The law of diminishing returns is kicking in.

I think we may start to see not multiple general-purpose processors but multiple specialized ones, like the GPU/VPU or whatever you want to call it.

That way different processors handle different kinds of instructions.

Or maybe we migrate away from silicon to something else; industrial diamonds are looking good.

powergyoza
Patron of SPCR
Posts: 543
Joined: Mon Oct 21, 2002 1:01 am
Location: Vancouver, BC, Canada
Contact:

Post by powergyoza » Tue Dec 30, 2003 1:48 am

DryFire wrote:I think we may start to see not multiple general-purpose processors but multiple specialized ones.
Ah, it just goes to show that the Amiga was 10+ years ahead of its time. We may still see it yet, and it will be called the PC.

mpteach
Posts: 426
Joined: Mon Dec 22, 2003 8:14 pm
Location: CT USA
Contact:

Post by mpteach » Tue Dec 30, 2003 10:54 am

Diamonds are cool: they have the highest thermal conductivity, yet they are electrically insulating. Also, they are hard and would make a durable substrate for future CPUs.

HokieESM
Posts: 36
Joined: Fri Feb 14, 2003 7:08 am
Location: Blacksburg, VA

Post by HokieESM » Tue Dec 30, 2003 11:23 am

One note on multiple CPUs: all of this assumes that the programs you want to run are fully parallelizable and that you have fast enough "cross-talk" between the CPUs to ensure that information being processed on one CPU is ready for the other CPU if and when it needs it. I'm definitely not a hardware designer, but I have run into this problem in my research (I do computational mechanics)--some problems are inherently unparallelizable... in fact, most mechanics-based problems (full-field solutions) aren't parallelizable to a large extent (believe me, VT's System X--the 1,100-node G5 cluster--doesn't help me all that much, other than letting me run LOTS of different models at the same time).

There will ALWAYS be a market for a faster single processor... so let's hope that Intel and/or AMD can solve the current-leakage problem on the 90nm process and get the thermal power requirements down. :)

Beyonder
Posts: 757
Joined: Wed Sep 11, 2002 11:56 pm
Location: EARTH.

Post by Beyonder » Tue Dec 30, 2003 6:03 pm

I'd rather have one ridiculously fast processor than two respectably fast ones. SMP is all right, but unless you're running multi-threaded applications designed to take advantage of them, I don't usually see the point.

GamingGod
Posts: 2057
Joined: Fri Oct 04, 2002 9:52 pm
Location: United States, Mobile, AL

Post by GamingGod » Tue Dec 30, 2003 6:49 pm

What about a fast main processor and a slower secondary processor that isn't used as often, but at least it's working while the other one is busy?
