Have you ever wondered why the CPU gets hot in the first place?
Taken from this article:
http://www.cs.usu.edu/~degaris/artilectwar.html
"Today's computers generate heat because we are using thermodynamically irreversible processes (i.e. we can't reverse the effects at a later time). We generate heat every time we destroy information, i.e. wipe out bits (resetting to 0s). Landauer thought that this was inevitable, because when he looked at how computers worked, he saw that they were full of "AND gates", and the like. An AND gate is an elementary piece of electronic circuitry which has two input lines (A and B), and one output line. If both input lines are set at a high voltage (i.e. have a "1" on their line) then the output line will become a "1", i.e. if both input line A AND input line B are set at "1", then the output line becomes a "1". In any other case (i.e. A=0, B=0; A=0, B=1; A=1, B=0) the output line becomes a "0".
Since there are two input lines containing a total of 2 bits of information in an AND gate, and only one output line containing 1 bit of information, of necessity, the AND gate destroys information. Every time two bits go in, only one bit comes out. The AND gate is irreversible, i.e. you cannot always deduce from the output what the input was. For example, if the output was a 1, then you know the inputs were both 1. But if the output was a 0, you don't know if the inputs were (0,0), (0,1), or (1,0). For a gate to be reversible (i.e. you can deduce what the inputs were from its outputs and vice versa), common sense says that you have to have the same number of input lines as output lines.
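To make the irreversibility concrete, here's a quick Python sketch (my own illustration, not from the article) showing that an AND gate's four input states collapse onto two output states, so the output 0 can't be traced back to a unique input:

```python
# Truth table of a 2-input AND gate: four distinct input states
# map onto only two output states, so information is lost.
def and_gate(a: int, b: int) -> int:
    return a & b

table = {(a, b): and_gate(a, b) for a in (0, 1) for b in (0, 1)}

# Group inputs by output: an output of 0 has three possible preimages,
# so the mapping cannot be undone.
preimages = {}
for inputs, out in table.items():
    preimages.setdefault(out, []).append(inputs)

print(preimages[0])  # three candidate inputs -> irreversible
print(preimages[1])  # a 1 output does pin down the input uniquely
```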
People began to dream up reversible gates with an equal number of input and output lines. One famous one was called the Fredkin gate, which had 3 inputs and 3 outputs. It was reversible, no bits of information were destroyed. It was also "computationally universal", i.e. by feeding the outputs of Fredkin gates to the inputs of other Fredkin gates, whole circuits of these gates could be built up that performed any of the functions that computers need to perform.
Since the individual gates of the computer were reversible, the computer itself could be made reversible. In other words, one could begin with the initial input bits to be fed into the left hand side of the computer, and these would be processed by the Fredkin gates in the computer design, resulting in the answer coming out of the gates at the right hand side. You can make a copy of the answer (which might generate a little bit of heat) and then send the answer back into the computer from right to left. Since all the gates of the computer are reversible, you will end up with what you started with at the left-hand side. You have performed a reversible computation. No bits have been lost and no heat has been generated. Nevertheless, you have the answer you wanted, because you made a copy of it half way through. Reversible computing may take twice as long as traditional computing, because you have to send the result backwards through the same circuit (or an identical copy), but at least there's no heat generated.
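A minimal Python sketch of the Fredkin gate described above (the article doesn't spell out its definition; the controlled-swap behaviour here is the standard one, an assumption on my part). Applying the gate twice restores the inputs, and with one line tied to 0 it computes AND without erasing anything — the "lost" bits simply ride along on the other output lines:

```python
def fredkin(c: int, a: int, b: int):
    """Controlled swap: if control c is 1, swap a and b; else pass through."""
    return (c, b, a) if c else (c, a, b)

# Reversible: the gate is its own inverse, so applying it twice
# recovers the original inputs for all 8 input combinations.
for bits in [(c, a, b) for c in (0, 1) for a in (0, 1) for b in (0, 1)]:
    assert fredkin(*fredkin(*bits)) == bits

# AND from a Fredkin gate: fix the third input at 0, and the third
# output equals c AND a. No information is destroyed -- c and the
# swapped bit still appear on the other two outputs.
def and_via_fredkin(c: int, a: int) -> int:
    _, _, out = fredkin(c, a, 0)
    return out

print([and_via_fredkin(c, a) for c in (0, 1) for a in (0, 1)])
```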
So what! Why am I spending so much time and energy explaining such things? Because I believe that the theoretical discovery of reversible, heatless computing is the single greatest scientific discovery of the twentieth century.
Since this is such a major statement and will probably be treated with considerable skepticism by many people, particularly my research colleagues, let me now try to justify it.
A few years ago, some phys-comp theorists were wondering, "If Moore's Law extends right down to the molecular scale, how hot would molecular scale circuits become if one continues to employ conventional irreversible, bit destroying, information processing?" The answer was shocking. Not only would such highly dense circuits melt with the heat, they would explode, they would be so hot. It became clear that molecular scale circuits would have to abandon the traditional irreversible style of computing and start using the new reversible style.
Only very recently have researchers started thinking seriously about reversible computer designs. The laptop and palmtop computer industries are very interested in reversible computing. Their problem is the length of battery life. If their computers could use electronic circuits that were more reversible, the circuits would consume less battery energy, because they would generate less wasteful heat. Hence the battery would drain more slowly and have a longer functional life.
So, it is inevitable that reversible computing has to happen. As Moore's Law continues to bite, pressure will increase on computer designers to use the reversible paradigm. It is only a question of time.
But, if we start taking the concept of heatless computing seriously, we can begin to have revolutionary thoughts. For example, why are today's electronic circuits two-dimensional? Why do we talk of 2D "chips" (i.e. slices) of silicon, rather than 3D "blocks"? Well, because of heat. If we made 3D blocks of silicon with today's level of density of electronic components, then there would be so much heat, the blocks would melt. Also, how would we build them and debug them once they were built? We do not have the techniques yet to do such things. We don't even bother trying to build 3D circuits because we know it would be a waste of time, due to the heat dissipation problem.
But, with reversible heatless circuits, we have the luxury to build large 3D circuitry, with in principle, no limit to size. We could make circuits the size of a cubic centimeter, or a cubic meter, or the size of a room, or a house, or a building, or a city, or even a large asteroid, many kilometers across. In theory we could make computers the size of moons or planets (but the gravitational effects might prove to be problematic)."
I wouldn't be so sure....
The site sourced is a CS professor at USU. The brief googling I did on Fredkin gates suggested that it was a mathematical construct with a possible implementation in/for quantum computing.
But I'm curious to know what the logic output of a Fredkin gate looks like and how it would be implemented in terms of N-doped and P-doped silicon (or the equivalent thereof). One of the things I was good at back in my University days was utilizing and optimizing logic gates.
have the same sort of question as sthayashi (with the difference being that i don't know what i'm talking about); i'm aware that with conventional gates, there's a (very small) constant that represents the minimum energy loss per operation that is physically possible. but, who cares? as the actual circuits get smaller, you're still going to be forcing more and more (total) power through smaller and smaller wires, even though individual gates may use slightly less power with each revision. common sense seems to suggest that resistive losses are only going to make up more and more of the heat dissipated, and this won't do anything for that.
or, you can let a marble roll back and forth in a bowl, and yeah, not much of its energy is being dissipated as heat, but you would be hard pressed to get anything useful out of it.
edit: that article/book makes me question whether the guy has any idea what he's talking about with respect to this thread, but speaking as a godless philosopher-biologist, that is a really neat read
Well,
I did some reading about this subject in the past, and the general consensus today is that about 95% of the heat a CPU produces comes from irreversible computations. Only 5% is from conductivity issues (resistance in the circuitry).
So, while reversible computation might not eliminate heat completely, it would be a drastic improvement, one that could allow 3D CPUs to be made.
As long as AMD and Intel are able to make faster 2D chips, they will probably prefer to keep things as they are today, because using reversible computations seems to have a huge negative impact on computation speed.
jones_r wrote:Well, I did some reading about this subject in the past, and the general consensus today is that about 95% of the heat a CPU produces comes from irreversible computations. Only 5% is from conductivity issues (resistance in the circuitry).
See, my interest and focus has always been on implementation. Although I'm several years out from my University days and didn't quite take the right classes to learn everything right down to the silicon level, this higher-level claim (e.g. irreversible computations) makes no sense to me at the implementation level. It sounds oversimplified.
That's why I'm curious to see what a 'reversible logic gate' looks like, both in terms of function and implementation.
Anyone who's discussed genetic engineering and artificial intelligence with me in the forums knows what I think of Dr. Hugo de Garis.
Fredkin gate. Toffoli appears to be another.
Do the two bits just keep going after passing through this "reversible AND gate"?
They have to go somewhere though, right?
more info.
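On the "do the bits just keep going" question: in a reversible gate, yes — nothing is discarded, the inputs survive (possibly transformed) on the output lines. A sketch of the Toffoli gate (standard CCNOT definition, my own illustration, not from the linked pages):

```python
def toffoli(a: int, b: int, c: int):
    """Toffoli (CCNOT) gate: flip c when both controls a and b are 1."""
    return (a, b, c ^ (a & b))

# Self-inverse: applying it twice recovers the input for all 8 cases,
# so no information is destroyed.
for bits in [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]:
    assert toffoli(*toffoli(*bits)) == bits

# With c fixed at 0, the third output is a AND b -- but unlike a plain
# AND gate, the inputs a and b "keep going" on the first two outputs.
print(toffoli(1, 1, 0))
```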
hmm, ok then. but i'd think the problems of physically implementing 3D cpus would be even harder to overcome (right now at least) than the individual gates, or even a 2D cpu made of these reversible gates. for an artilect it's no big deal since the cpu is both the means and the end, but if we used anything at all like a contemporary CPU (say, a cube as thick as our cores presently are wide) in a PC how could we possibly go about getting enough power and data in and out of it?
also, any idea exactly how much slower this would be? that is, if we'd need a CPU 8 times larger (either denser, or 8 layers thick, whatever) to do the work of a current CPU, that 5% of losses due to resistance effectively turns into 40%.
This is getting difficult for me to address, simply due to my lack of knowledge in the fields raised. Linear algebra/mathematics makes my eyes glaze over and, in my mind, obfuscates the answer I'm trying to find: How does irreversible computation waste power? Power consumption, for me, has always been defined at the physical level, not the theoretical level.
Trip, I'd like to think I've discussed genetic engineering with you, but I don't recall what your feelings were regarding Dr. de Garis.
Irreversible computation must get rid of at least one bit, correct? The energy associated with erasing that bit is released as waste heat. That's at least how I understand it, only from what I've read in this thread. I don't remember reading that in a text book.
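That understanding matches Landauer's principle as I know it: erasing one bit dissipates at least k_B·T·ln 2 of energy. A quick back-of-the-envelope calculation (the bit-erasure rate in the comment is a made-up round number for scale, not a measured figure):

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K (exact SI value)
T = 300.0           # room temperature, K

# Landauer limit: minimum energy dissipated per erased bit.
E_bit = k_B * T * math.log(2)
print(f"{E_bit:.2e} J per erased bit")

# At a hypothetical 1e18 bit-erasures per second, the Landauer floor
# is only a few milliwatts -- real CPUs run far above this limit.
print(f"{E_bit * 1e18 * 1e3:.1f} mW")
```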
article wrote:In short time, millions of people start debating the Artilect Issue. The general public will wonder just how much intelligence the artificial brain industry should be allowed to give artificially intelligent products? Should any constraints be placed on the industry at all? If progress should be stopped after reaching a certain intelligence level, can it be stopped? Gradually but steadily, the debate will heat up to such a point that it will become the dominant issue of the age. It will influence and define the 21st century and beyond.
On Dr. de Garis, I've never mentioned him specifically, but for me this and genetic engineering are already and have been major issues.
Hugo de Garis wrote:This remarkable speed should allow me and my brain builder team to assemble tens of thousands, and later millions, of such circuits into artificial brain architectures. We are becoming "brain architects." At this time, it is not clear that this approach will result in the creation of functional, interesting, and truly useful artificial brains.
Why, if he realises the dangers, does he press on??
Trip wrote:Why, if he realises the dangers, does he press on??
Sorry to continue a derailment, but I feel strongly on this issue. Scientific progress is not something that can be stopped, whether he as an individual stops or not. All science will continue to its logical conclusion, even if this means the destruction of all mankind.
I'm going to have to call BS on the idea that a "reversible" computer wouldn't produce any waste heat. The theory in the first post of this thread seems to have taken the thermodynamic statement that a reversible process wastes no energy and applied it to information.
That statement only applies to state variables - like enthalpy and entropy - not to information.
Here's another argument: If no waste heat was generated by the CPU, the requirement for power input would be eliminated. Are you next going to claim that these CPUs don't require a power source? If so, silencing computers just got ANOTHER step easier.
In a processor, in order to read a 1 or 0, a small current must always exist - granted, these are often very small, but there is always some glitch impulse. The glitch impulse current will always generate heat in any non-superconducting material. I don't have the education to know whether or not this is the primary source of heat dissipation in real processors today, but it seems like an obvious candidate.