After our two main test beds for heatsink testing developed intermittent defects, we’ve rebuilt our test bed for the fourth (or is it the fifth?) time so we can continue to bring you quality heatsink reviews. At the same time, we’ve brought our article on testing heatsinks up to date — a long overdue update that brings it out of the era of Athlons and P-IIIs into the present day, where Socket 775 now rules the roost. We’ve also tossed in some gems about VRMs and testing CPU power, as well as a quick re-test of some old favorites to kick things off.
Heatsink / Fans (HSF) or coolers for CPUs are bread and butter at most hardware
sites. The market for aftermarket heatsinks has exploded since the introduction
of the AMD Athlon processor — in its day the hottest processor ever sold
and the first to require more than 50W. Fast forward to 2007, and the number
of HSF on the market must have reached the thousands, and some of the latest
processors now consume more than double the power of the original Athlon. Heatsinks
are big business: Look around on the web on any given day, and you see a handful
of new reviews of yet more HSF. So what can we contribute?
The CPU cooler is one of the five major sources of noise in a typical PC —
the others being the power supply, system fans, video cooler, and the hard drive.
Unlike most hardware sites, we are not interested in reviewing all of them.
No, we are interested only in coolers that do their job quietly, or can be made
to do so quietly. We select coolers for review on the basis of these questions:
- Is it considered so good that we just have to see how it does with a quiet fan?
- Does it look like it can be run quietly (even though it is not designed or marketed as quiet)?
- Does someone we know and have reason to trust say it’s quiet and good?
- Is it marketed or designed as a quiet heatsink? (Most heatsinks these days claim to be.)
- Can a sample be readily obtained?
Testing Heatsinks vs. Testing Fans
SPCR has been testing heatsinks for almost five years. Our test bed has evolved
through several generations as CPU sockets and mounting
systems have come and gone. But, no matter what the underlying technology, SPCR’s
testing has always had a slightly different emphasis than the testing done by
other web sites.
CPU heatsinks are usually packaged and marketed with a fan, which explains
the rise of the moniker HSF, heatsink fan. Most hardware web sites test the
HSF as an integrated unit. For SPCR, this is not an ideal way to do things,
mainly because almost all fans supplied with HS are too loud. That’s not to
say there aren’t exceptions — we can think of one or two models that are
quiet in stock form — but these are not yet the norm. At the very least,
most HSF units require a way of reducing fan speed to be made quiet, and all
too few include a means of doing this out of the box.
Because of this, we don’t ask the questions that others do:
How much cooling is achieved with this HSF?
Instead, we ask,
What is the cooling power of this heatsink with this quiet fan whose characteristics
are well known?
By asking this question, we put all the heatsinks on the same playing field
— no screaming 100 CFM fans. All have only the aid of the same low noise,
low airflow fan. We remove the cooling advantage of powerful noisy fans, which
is a cheap, brute-force method of cooling. The heatsink, then, is the only variable.
The benefit of this approach is that it guarantees that all heatsinks are tested
under the same acoustic conditions. What we want to know is how well a heatsink
performs at a given (quiet) noise level, and using the same fan for all our
tests gives us a way of reproducing the same noise level every time we do a test.
That being the case, we are well aware that most people are not inclined to
purchase both a heatsink and a fan, so we actually run two tests
for every review: One with our reference fan, and one with the heatsink in stock
form. When that exceptional heatsink that ships with a quiet fan appears, don’t
worry, you’ll know about it. Stock fans are profiled according to our standard fan testing methodology, which uses a similar noise-centric approach.
We consider the heatsink and mounting mechanism together as a unit. A heatsink’s intrinsic cooling power is determined mainly by:
- its radiating surface area
- the thermal conductivity of its materials
- the spacing and number of fins
- its geometry
- the smoothness and flatness of the CPU contact surface
- overall mass
- ease and efficacy of the mounting mechanism
The mounting mechanism is mentioned because it maintains the all-important contact between CPU and heatsink. The amount of pressure brought to bear on the interface also affects cooling. It is also the only real interface between HS and user.
We may say we use a HS, but it’s not the same way that we use a car,
for example. We interact constantly with a car while using it. If there is any
aspect of user interaction with a HS, it really happens only when the HS is
installed or uninstalled. If this design aspect is poor and results in the user
having difficulty with installation, or failing to mount the HS correctly, then
poor cooling performance leading to shutdown or downright failure of the CPU
can result. Some mounting mechanisms are poor, both difficult to install and
lacking in precision or security; others are integrated wonderfully into the
heatsink and easy to use.
Our point of view is that the mounting mechanism is a critical part of the
HS design. Lack of attention to this detail suggests a design that is not completely thought through.
Our reference fan is either a Nexus 80, 92, or 120 fan, as required by the
heatsink under review. These are some of the quietest fans on the market,
but that’s not why they were chosen. There are other fans as quiet as or quieter
than the ones from Nexus, but there’s one reason to prefer the Nexus fans over
the others: Over the past two years, these fans have been used countless times
in dozens of SPCR reviews. They are reference fans not because we chose
to use them, but because we did use them.
Some readers may feel that the choice of reference fan is incorrect or that
more than one fan should be used. In addition to the reason mentioned above,
there are several compelling, simple reasons for us to use the Nexus fans:
- They are well known by many and not impossible to get in most places in the world.
- They are quiet compared to most fans at 12V, hard to hear at 9V and virtually
silent below this level.
- They represent the low airflow that is often necessary for quiet computing.
- We happen to have a small bunch of them on hand.
- Using more than one fan means double the time spent testing. Too much time
is already spent at this task. Besides, the whole point is to use a single reference fan.
Each of the reference fans has been carefully profiled below using our
standard fan testing methodology, and more detailed results for the 80mm
and 120mm fans can
be found in recent SPCR fan roundups. Results for the 92mm model can be expected
to follow within the next month or two.
Noise and Airflow Characteristics: Nexus 80
Noise and Airflow Characteristics: Nexus 92
Noise and Airflow Characteristics: Nexus 120
Heat Source: CPU on Motherboard
Our standard test platform uses an Intel Pentium D 950 processor. While this
is far from the cutting edge in terms of performance, it is an ideal choice
for a heatsink test bed because it puts out considerably more heat than Intel’s
more recent Core 2 Duo chips, ensuring that the top heatsinks are given something
sufficiently difficult to cool.
The Pentium D 950 is rated for 3.4 GHz clock speed, 1.2V core voltage, and
130W TDP. However, detailed testing of our test bench showed that the processor
(along with some small losses in the VRMs on the motherboard) did not consume
more than 78W under full load. This 78W figure will be used when calculating thermal resistance.
The test bed and a few test tools.
Key Components in Heatsink Test Platform:
- Pentium D 950, Presler core. Under load, it measures 78W minus the
efficiency losses in the VRMs.
- P5LD2-VM motherboard. A basic microATX board with integrated graphics
and plenty of room around the CPU socket.
- Hitachi Deskstar 7K80
80GB SATA hard drive.
- GB stick of Corsair XMS2 DDR2 memory.
- FSP Zen 300W
fanless power supply.
- Arctic Silver
Lumière: Special fast-curing thermal interface material, designed
specifically for test labs.
- Power Angel for measuring AC power at the wall to ensure that the
heat output remains consistent.
- High accuracy Extech
MM560 True RMS multimeter & two other multimeters
of good precision.
- High precision LTS 25-NP Current Sensor (to read the AUX12V current),
courtesy of Intel. Used in combination with the multimeters to measure
the amount of power consumed by the CPU.
- Custom-built, four-channel variable-speed fan controller, used to
regulate the fan speed during the test.
- Bruel & Kjaer (B&K) model 2203 Sound Level Meter. Used to
accurately measure noise down to 20 dBA and below.
- Various other tools for testing fans, as documented in our
standard fan testing methodology.
- SpeedFan 4.31, used to monitor the on-chip thermal sensor. This sensor is not
calibrated, so results are not universally applicable; however, it is consistent
enough for the comparative testing we do.
- CPUBurn P6, used to stress the CPU heavily, generating more heat than most
realistic loads. Two instances are used to ensure that both cores are stressed.
- ThrottleWatch 2.01, used to monitor the throttling feature of the CPU to determine
when overheating occurs.
The actual test procedure is quite simple, and can be summed up in a step by
step algorithm. The testing takes place twice, once for the stock fan, and again
using the reference fan. If there is no stock fan (as is the case for Thermalright’s
heatsinks), or if the heatsink does not allow for an easy fan swap (such as
Zalman’s flower heatsinks), one of the tests may be dropped. Common sense rules
the roost here; if some part of our methodology doesn’t apply, we won’t
attempt it, but, generally speaking, this is what happens during a test:
- Ambient conditions in the lab are measured. Typically, SPCR’s sound lab
measures 18~19 dBA and 20~21°C. If the ambient conditions stray too far
from these norms, testing is halted until they return to normal levels.
- The stock fan is removed and profiled according to our
standard fan testing methodology, with one important difference: Fan noise
(not airflow or RPM) is measured with the fan mounted on the heatsink to
better reflect the noise generated by the HSF as a unit, rather than the fan
itself. Noise testing is done without the system powered on, leaving
the HSF as the only source of noise during testing.
- The heatsink is mounted on the test bed using whatever mounting system is
available for Intel’s Socket 775. About 90% of aftermarket heatsinks have
so-called "universal" mounting systems that work with both Intel
and AMD-based systems. If we come across an AMD-only heatsink that we just
have to review, we may add an AMD-based test bed in the future… or we may
attempt to use an adapter to mount it on our existing platform.
- The fan speed is set to 12V.
- Two instances of CPUBurn (P6) are started and thermal testing begins. The
test length is nominally 20 minutes, but no temperatures are recorded until
the CPU temperature has been unchanged for at least 10 minutes, verified using
SpeedFan’s thermal graph (the widest horizontal scale shows roughly 13 minutes
of past readings). The P5LD2-VM motherboard exhibits approximately 2~3°C
of "jitter" in the thermal readings, so for the purposes of our
testing, the "peak" of the jitter is taken as the thermal result.
- Steps 4 and 5 are repeated with the fan set to 9V, 7V, and 5V. Lower voltage
tests are halted if the CPU begins to throttle, and the heatsink is declared
unfit for use at these lower levels of airflow.
- Thermal rise (ΔT) is calculated for each voltage level. The formula is Thermal
Rise = CPU Temperature – Ambient Temperature.
- Thermal resistance is calculated for each voltage level. The formula is
Thermal Resistance = Thermal Rise ÷ 78W (CPU Heat). Thermal
resistance is expressed in °C/W, and lower thermal resistance indicates
better cooling performance.
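The two formulas in the final steps can be sketched in a few lines of Python. The 78W figure and the 21°C ambient come from the text; the per-voltage CPU temperatures below are hypothetical numbers invented for the example.

```python
CPU_POWER_W = 78.0  # measured CPU + VRM power under full load

def thermal_rise(cpu_temp_c: float, ambient_c: float) -> float:
    """Thermal Rise = CPU Temperature - Ambient Temperature."""
    return cpu_temp_c - ambient_c

def thermal_resistance(rise_c: float, power_w: float = CPU_POWER_W) -> float:
    """Thermal Resistance (in degrees C per watt); lower is better."""
    return rise_c / power_w

# Hypothetical results at each fan voltage with 21 C ambient:
ambient = 21.0
for volts, cpu_temp in [(12, 45.0), (9, 48.0), (7, 51.0), (5, 56.0)]:
    rise = thermal_rise(cpu_temp, ambient)
    print(f"{volts}V: rise {rise:.0f} C, resistance {thermal_resistance(rise):.2f} C/W")
```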
Technical Complexities, or What Does All This Tell Us Anyway?
The test procedure outlined above explains what we do, but it doesn’t
explain why we do it that way, or what the end results tell us. It turns
out that thermal testing is quite complex, and, although our test procedure
is simple and repeatable, it glosses over a few issues that need to be addressed
to understand what’s going on here.
Accuracy of CPU Thermal Sensors
First of all, there is virtually no way of knowing whether the thermal sensor
in our test CPU is accurate, and no reliable way of calibrating it in the
likely event that it is wrong. There, we said it: Our testing produces inaccurate
results. There is plenty of technical documentation out there that explains
why accurate CPU temperature measurement is practically impossible. One of our
favorites is a piece from Arctic Silver, entitled Why
Thermal Measurements are Not Valid.
So, why do we bother testing at all? Fortunately, accuracy, in absolute terms,
is not what really matters in heatsink testing. What we want is not a tool
that tells us that our test chip is exactly 42°C, but a tool that
detects fluctuations in temperature and produces consistent results under
similar thermal conditions. And it turns out that the thermal sensor on the
CPU works just fine for these purposes.
Consider your bathroom scale. Chances are, it has a small notice on the back
that says not legal for trade. That’s because the accuracy of most
bathroom scales is not considered good enough (or, it’s not certified
to be good enough) to yield the same result as government-approved, trade-legal
scales. However, that doesn’t mean it can’t tell you when you gain or lose
weight. That’s because, as long as you always weigh yourself on the same scale,
it will always produce a higher result when you gain weight, and a lower result
when you lose weight. It can also tell you whether you weigh more or less
than your wife, your best friend, or your dog. It can even tell you how much
the difference is, though perhaps not with quite as much precision as a better scale would.
Heatsink testing doesn’t require exact numbers. What matters is how a heatsink
performs in comparison to other heatsinks, not what CPU temperature
it achieves on our test bed. And, as long as all heatsinks are tested using
the same test bed, it is possible to make valid comparisons between them without
ever knowing exactly how hot the CPU was — just like it’s possible to
use the bathroom scale to gauge changes in your weight without knowing whether
it is giving you exactly the right number. In fact, even if they were accurate,
the actual thermal results would be useless on their own. All they tell us
is how hot our specific test bed was during the test, but unless your
system runs exactly the same parts, in exactly the same thermal conditions
(i.e. on an open test bench at ~21°C), and you can guarantee that your
thermal measurements are 100% accurate, these numbers won’t tell you how the
heatsink will perform in your system.
How do we go about converting the inaccurate thermal measurements into valid
comparisons between heatsinks? We do two things: All tests are done on the
same test bench, and comparisons are based on thermal rise to avoid errors
based on different ambient temperatures. On its own, this is enough to evaluate
any heatsinks that we test. Heatsinks with the lowest thermal rise are the best performers.
However, we attempt to go one step further, in order to make the result useful
to you, our readers. Thermal rise tells us how a heatsink performs versus
other heatsinks that we’ve tested, but not how it compares to your
heatsink. Thermal resistance, on the other hand, factors our test bed
out of the equation. In theory, the thermal resistance for a given HSF running
at a specific fan speed should never change. If you can determine the thermal
resistance of your heatsink, you should be able to tell which heatsinks
will be better performers based on our testing.
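In principle, the comparison works like this: ambient temperature plus thermal resistance times CPU power predicts the CPU temperature. A short sketch, where the heatsink names, resistance values, case temperature, and CPU power are all hypothetical numbers invented for illustration:

```python
def predicted_cpu_temp(ambient_c: float, resistance_c_per_w: float,
                       cpu_power_w: float) -> float:
    """Estimate CPU temperature from a heatsink's thermal resistance."""
    return ambient_c + resistance_c_per_w * cpu_power_w

# Hypothetical comparison: two heatsinks at the same fan voltage,
# in a 25 C case with a 65W CPU.
for name, resistance in [("Heatsink A", 0.31), ("Heatsink B", 0.38)]:
    temp = predicted_cpu_temp(25.0, resistance, 65.0)
    print(f"{name}: ~{temp:.1f} C")
```

The lower-resistance heatsink always predicts the lower temperature, which is why thermal resistance is the most portable way of ranking heatsinks.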
Of course, the reality is a bit more complicated than that, mostly because
it’s difficult for a casual (or even a not-so-casual) user to calculate thermal
resistance. Essentially, it involves duplicating our test procedure —
including measuring the amount of power consumed by the CPU and hoping that
minor differences in VRM efficiency are not enough to compromise the results.
On top of that, variables such as system airflow (which is not taken into
account by our test bench) and the aforementioned accuracy of the onboard
thermal sensor can also affect results.
Despite the difficulties in making good use of them, we shall continue publishing
the thermal resistance results as we have since we began testing heatsinks.
If nothing else, thermal resistance is still the most "correct"
way of expressing how well a heatsink cools.
One final source of variance is worth mentioning: VRM efficiency. Our measurements
of CPU power — and the thermal resistance results that are derived from
it — include power losses in the VRMs. As a general rule, VRM efficiency
does not change significantly between tests — though VRM efficiency can
vary quite a bit from board to board.
However, there is one specific instance when VRM efficiency can affect
our results. Like any other electronic component, the VRMs become less efficient
once they heat up past a certain temperature. If the VRMs are not cooled
adequately, the power losses in the VRMs increase and the total amount of
heat that must be dissipated by the heatsink goes up. Because the major source
of heat near the VRMs is the CPU, the VRMs often overheat when the CPU is
undercooled, but they can also overheat in a system with poor system airflow,
as outlined in the yellow box below.
Obviously, this increase in power draw makes our thermal resistance results
invalid, since they are calculated on the assumption that the CPU and VRMs
draw 78W. For this reason, AC power is monitored during testing, and if it
increases above normal (120W under load), the change is noted and CPU power
consumption is re-measured for the relevant data points. This increase in
power consumption is unhealthy, and it’s unlikely that a heatsink that demonstrates
this kind of variance will be highly praised by SPCR.
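The AC-power check described above amounts to a simple threshold test. A sketch follows; the 120W baseline comes from the text, while the tolerance value is our own assumption.

```python
BASELINE_AC_W = 120.0  # normal full-load AC draw at the wall
TOLERANCE_W = 3.0      # assumed allowance for normal variation

def vrm_losses_suspect(measured_ac_w: float) -> bool:
    """True if AC draw has crept above normal, suggesting increased
    VRM losses; CPU power should then be re-measured."""
    return measured_ac_w > BASELINE_AC_W + TOLERANCE_W

print(vrm_losses_suspect(121.0))  # within normal variation
print(vrm_losses_suspect(130.0))  # flag: re-measure CPU power
```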
Motherboard makers generally assume a certain level of “spillover” airflow from the heatsink fan across the voltage regulator module (VRM) components that are placed around the CPU socket. These components include capacitors, power transistors and inductors (coils). When the CPU fan speed is reduced to minimal levels in order to achieve low noise, cooling for the CPU may be perfectly adequate with a good heatsink, but the VRM components may be prone to overheating, which can impair electrical efficiency and reduce component life.
Tall tower (or high rise) heatsinks with fans that blow air parallel to the motherboard are especially prone to this problem, as they provide little spillover airflow over the VRM components.
Users should be aware of this potential issue and ensure that some additional airflow reaches the area around the CPU socket.
So, there you have it. You’ve waded through four pages of theory, and, hopefully,
you’ve learned something. We realize that hearing about the nuts and bolts of
heatsink testing isn’t everybody’s cup of tea, so here’s something for the rest
of you: We’ve re-tested four of our favorite heatsinks so that we have a frame
of reference when we test our first heatsink on this new test bed. So, here’s
a quick summary of how the heavyweights performed.
The four heatsinks tested were:
- Scythe Ninja (a classic)
- Thermalright XP-120 (a past champion)
- Thermalright Ultra-120 (the reigning champion)
- Zalman CNPS9500 LED (the high end offering from a respected company)
All of the heatsinks except for the Zalman (which has its own proprietary fan)
were tested with our reference 120mm fan. The results are presented below:
Ambient temperature during testing was 21°C.
Reference Heatsink Roundup
This is not the place for extensive analysis, but a few results stand out:
- The Zalman appears to be the best performer… until you realize that its
stock fan produces much more airflow and noise at the various voltage levels
than the reference fan does. At full speed, this difference is 37 dBA@1m vs.
22 dBA@1m. The stock fan doesn’t reach 22 dBA@1m until it is run at 5V…
- The XP-120 is clearly outclassed, and was beaten soundly by the rest of
the heatsinks. Its day is past…
- As we saw in the review of the Ultra-120, it and the Ninja are very close.
Except, this time, it was the Ninja that came out on top. Consider this a
warning against making a big deal out of a degree or two.
* * *
Articles of Related Interest
SPCR’s Fan Testing Methodology
Back on Top with the Ultra-120