Spoiler blocks? Tables?

Do your testing here. Do NOT post tests anywhere else or we'll slap you silly.

Moderators: NeilBlanchard, Ralf Hutter, sthayashi, Lawrence Lee, Edward Ng

Wild Penguin
Posts: 82
Joined: Mon Jul 01, 2013 3:30 pm

Spoiler blocks? Tables?

Post by Wild Penguin » Tue Feb 23, 2016 7:24 am

Testing Testing....

I'm considering posting some results of a test I've run (related to cooling), so I'm just testing some features I'd find useful :P

Why are there no tables on this forum? Or are there, maybe just hidden? I could always export a table as a picture, but that feels like overkill for just some text with formatting. Instead I guess I will use monospace code blocks!

Code: Select all

COLUMN1         COLUMN2        COLUMN3        COLUMN4
55°C            56°C           68°C           53°C
80°C            NAN            87°C           77°C
(More details below)

Why are there no spoiler blocks on this forum? Or are there, but maybe they are called something else?
I mean something like this:

[spoiler="Click for details"]
BLAH BLAH BLAH YADDA YADDA some details no one wants to read anyways.

I just took a dump... TMI sorry :)

I could also put some secrets or *gasp* spoilers here!
Secret picture here!
[/spoiler]
Last edited by Wild Penguin on Thu Feb 25, 2016 9:52 am, edited 6 times in total.

Re: Spoiler blocks? Tables?

Post by Wild Penguin » Tue Feb 23, 2016 7:27 am

I guess I could always put the less important details in a subsequent post (or posts, in case there are a lot of them).

Re: Spoiler blocks? Tables?

Post by Wild Penguin » Tue Feb 23, 2016 7:32 am

Something like this could be useful, in case some admin is reading this and wants to add features at some point in the future...

[th]Test A [/th]
[th]Test B[/th]

Re: Spoiler blocks? Tables?

Post by Wild Penguin » Thu Feb 25, 2016 9:40 am

Code: Select all

                        Some res            |            [SPOILERS!]        
            Amb     min     avg     max     |Amb     min     avg     max     
Sum:        526,0   695,2   1211,4  1464,4  |536,0   713,8   1225,8  1488,0
aaa:        526,0   695,2   1211,4  1464,4  |536,0   713,8   1225,8  1488,0
bbb:        526,0   695,2   1211,4  1464,4  |536,0   713,8   1225,8  1488,0
ccc:        526,0   695,2   1211,4  1464,4  |536,0   713,8   1225,8  1488,0
dog:        526,0   695,2   1211,4  1464,4  |536,0   713,8   1225,8  1488,0
cat:        526,0   695,2   1211,4  1464,4  |536,0   713,8   1225,8  1488,0
cpu:        526,0   695,2   1211,4  1464,4  |536,0   713,8   1225,8  1488,0

Re: Spoiler blocks? Tables?

Post by Wild Penguin » Thu Feb 25, 2016 11:05 am

Test Hardware:
  • Case: Antec Fusion
  • Case fans: 2 x Scythe clear plastic case fans (PWM controllable, but voltage controlled for this test)
  • Power Supply: Seasonic 550W
  • Motherboard: Asus Maximus VII Gene (mATX)
  • CPU: Intel Core i7-4790K (stock clock, adaptive undervolt -0.070V)
  • CPU cooler: Noctua NH-C14, 1x voltage controlled 140mm fan on the bottom
  • Memory: Kingston HyperX, 2 x 8GB
  • Hard drive(s): Samsung EVO 860 500GB SSD (system), Western Digital 3TB, Seagate 3TB
  • Graphics: EVGA GTX 970 ASC 2.0 SC (04G-P4-3975-KR)
  • Other: Octopus PCIe bridge + dual DVB-C/T2 tuners, Blu-ray drive
  • OS: Arch Linux, 64bit, KDE
Test procedure:

I originally planned to run the tests at stock CPU settings and stock voltage. But I forgot to disable my undervolting (an adaptive-mode offset of -0.070V, for better thermal performance and hence less airflow = less noise needed for cooling) before running the tests on the PK-3, so I chose to run all tests with the undervolted setting.

I had only a vague idea of what kind of tests would be representative. So I made a script to facilitate and automate running the tests and gathering the data. After doing some preliminary testing with the help of the script (over a period of several months – I had lost interest in this in between), I chose some test conditions that I think represent common use cases for this kind of system.

The test conditions, each running for 30 minutes (except the kernel compilation, which was done twice in succession), were:
  1. Idle
  2. OGG encoding with two threads
  3. Compiling a Linux Kernel (make -j8) 2 times
  4. Folding@Home
  5. Stress testing with burnK7 x 8
Between tests, 7 minutes (in total) were allotted for cooling down.
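For reference, the whole array (workloads plus cool-downs) can be sketched as a small runner along these lines. This is a hypothetical Python outline, not my actual script; the workload commands and helper script names are placeholders:

```python
import subprocess
import time

# Each entry: (test name, command to launch, or None for idle).
# Command names are placeholders for the actual workloads.
TESTS = [
    ("idle",     None),
    ("oggenc",   ["./loop_oggenc.sh"]),          # assumed helper script
    ("kcompile", ["make", "-C", "linux", "-j8"]),
    ("fah",      ["./start_fah.sh"]),            # assumed helper script
    ("burnK7",   ["burnK7"]),
]
TEST_SECONDS = 30 * 60      # each workload runs 30 minutes
COOLDOWN_SECONDS = 7 * 60   # 7 minutes of cooling between tests

def run_test(name, cmd, duration=TEST_SECONDS):
    """Start the workload (8 copies for burnK7), wait, then stop it."""
    procs = []
    if cmd is not None:
        count = 8 if name == "burnK7" else 1
        procs = [subprocess.Popen(cmd) for _ in range(count)]
    time.sleep(duration)
    for p in procs:
        p.terminate()

def run_array():
    """Run every test in order, with a cool-down pause after each one."""
    for name, cmd in TESTS:
        run_test(name, cmd)
        time.sleep(COOLDOWN_SECONDS)
```

Running five workloads per fan setting, times five fan settings, plus cool-downs, is how the total climbs to around 13 hours.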

In addition, I chose several different kinds of fan settings, which I consider something someone might use for quiet cooling, or were otherwise interesting.

The fan settings used for the test were:
  1. All fans @5V
  2. All fans @7V
  3. All fans at full speed (12V)
  4. My own fan speed program, target CPU temperature 75°C
  5. Asus BIOS controlled fans set at a custom profile
Factoring in the cooling time between tests, running such a test array will take (and took) 13 hours!

Fan voltages were controlled by the MB; I determined which PWM value corresponded to a given voltage with the help of a multimeter.

As each test array was automated by a script, the tests were as identical as possible from the point of starting the test.

Data gathering and endpoints

I really didn't know beforehand what data would be representative, so I chose to record everything lm_sensors can record =). I configured sensord so that all data was recorded into an rrd database once every second. I automated the generation of charts for the following parameters, which I concluded would be the most interesting:
  • CPU core temperatures (x4),
  • CPU temperatures (from the MB)
  • PECI Agent 0 temperature
  • CPU fan
  • Case fans (x2)

But thanks to the rrd database, I can – in case I need or want to – look at anything else in hindsight (I took backups of the whole rrd database after testing). The same data as in the charts was also extracted into CSV files for use in spreadsheets etc.
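For anyone curious how that extraction can work: one way is to dump a window of the database to text with `rrdtool fetch sensord.rrd AVERAGE -s <start> -e <end>` and convert the output to rows. A sketch (the rrd file name and column layout are assumptions, not my exact setup):

```python
def fetch_to_rows(fetch_output):
    """Parse `rrdtool fetch` text output into (timestamp, values) rows.

    Skips the header line (no colon) and rows that are all NaN,
    which rrdtool prints for gaps in the data.
    """
    rows = []
    for line in fetch_output.splitlines():
        if ":" not in line:
            continue  # column-name header or blank line
        ts, rest = line.split(":", 1)
        vals = [float(v) for v in rest.split()]
        if vals and all(v != v for v in vals):  # NaN != NaN
            continue
        rows.append((int(ts), vals))
    return rows
```

From there, writing the rows out with the csv module is trivial.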

Sadly, I was too lazy to find a way to store load average values, which would have been nice to have. But I made every effort to keep the testing conditions consistent, and only changed the TIM between the testing sessions.

If, during any test, the CPU temperature rose to 89°C, the test was aborted, cooling was considered failed, and the time it took to cause the overheat was recorded. Note, however, that the script polled the temperature only once every 10 seconds.
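The overheat guard amounted to something like the following (a Python sketch of the idea; the sysfs path layout is an assumption and my actual script differed in details):

```python
import glob
import time

ABORT_C = 89.0      # abort threshold
POLL_SECONDS = 10   # the script only polled every 10 s

def core_temps():
    """Read CPU temperatures (°C) from hwmon sysfs files.

    The glob pattern is an assumption; adjust for your system."""
    temps = []
    for path in glob.glob("/sys/class/hwmon/hwmon*/temp*_input"):
        with open(path) as f:
            temps.append(int(f.read()) / 1000.0)  # millidegrees -> °C
    return temps

def watchdog(get_temps=core_temps):
    """Block until any reading hits ABORT_C; return seconds elapsed."""
    start = time.time()
    while True:
        if any(t >= ABORT_C for t in get_temps()):
            return time.time() - start  # time until overheat
        time.sleep(POLL_SECONDS)
```

The 10-second poll interval is also why the recorded failure times are only accurate to within about 10 seconds.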

Comparing CPU temperatures between runs was the obvious choice, but all data available to lm_sensors was recorded in case I wanted to look at anything else later on. The CPU temperature is available through several channels: the Intel coretemp driver, the acpi driver and the MB monitoring chip (in my case an nct6791d). I'm actually not sure which one is the most accurate, but I presume any of them could be used for comparing runs. I chose the data from the Intel coretemp driver. For the current comparison chart, I have averaged all 4 core temperatures provided by the coretemp driver (min, avg and max values during any testing run). This choice was made after collecting all the data, and was based on a whim.
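Concretely, the averaging behind the comparison table works like this (a minimal sketch of the idea, not the exact script):

```python
def core_average_stats(samples):
    """Given per-second samples of the four core temperatures
    (a list of [c0, c1, c2, c3] readings), average the cores in
    each sample, then return (min, avg, max) over the whole
    test window - the three values shown per cell in the table."""
    per_sample = [sum(s) / len(s) for s in samples]
    return (min(per_sample),
            sum(per_sample) / len(per_sample),
            max(per_sample))
```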
Last edited by Wild Penguin on Thu Feb 25, 2016 1:02 pm, edited 1 time in total.

Re: Spoiler blocks? Tables?

Post by Wild Penguin » Thu Feb 25, 2016 11:59 am


Prolimatech PK3:

There isn't much to say about applying the PK-3. I heated it in hot water in a plastic bag (since PK-3 is quite thick, and I read somewhere this should make it easier to handle). A large grain of PK-3 was put on the CPU (not spread!), and the NH-C14 was installed. The included Noctua thermal paste was not used, since I had chosen the Prolimatech PK-3 before buying the cooler.

The system had a few weeks of regular usage with this setup, including folding, before the test array was run – so in case PK-3 requires curing, it had happened before the test. After running the tests, the heatsink was removed and I proceeded to change the TIM to the Indigo XS.

Here are some pictures of the TIM interface pattern at removal: [Stay tuned for pictures!]

I believe I may have used slightly too much PK-3, but otherwise it looks OK – though I haven't installed that many CPU heatsinks, so I don't have loads of personal comparison points.

Indigo XS

The Indigo XS was installed according to the installation instructions provided by Enerdyne. The reflow procedure was not as easy as I thought it would be, and there were a few complications – details below (sorry for the lengthy report!). In short, on my 4790K and Asus Maximus VII Gene, the reflow seems to happen all too easily and quickly, just by entering the BIOS, before there is any chance of booting into an OS! I never saw the reflow pattern, though I believe I got a proper reflow on my second attempt, and at least a partial one on the first.

Before installing the Indigo XS (while still on the Prolimatech PK-3 setup) I made sure I had stock voltage and CPU settings in the BIOS. I left the fans connected to the MB, since I planned on stopping them via software and running burnK7 in Linux. But at first boot, the BIOS insisted that I had installed a new CPU (I had removed it to clean the old TIM properly off the heat spreader) – and just by entering the BIOS, the CPU temperature was at 87°C! I believe this is because the Asus Maximus VII BIOS does not use Intel P-states during bootup, as OSes do. After booting into Linux, temperatures had dropped somewhat – to around 40°C, and these were idle temperatures! I didn't examine the situation for very long, since it didn't occur to me that reflow might already have happened (and I was not actually familiar with what idle temperatures to expect, as I had not run the CPU at stock voltages for a very long time).

So I proceeded with the reflow procedure as intended, and started ksysguard for temperature monitoring and burnK7 (8 instances). Ksysguard showed my CPU temps shooting up to 98-100°C immediately after starting burnK7! I ran i7z and saw that the CPU cores were all throttling, as intended, to protect themselves. I initially ran burnK7 (x8 threads) for four minutes, but my nerves couldn't take any more. Temperatures dropped very fast after stopping the burnK7s, but not to as low a level as I'd expected. I re-read the instructions, which recommended up to 14-15 minutes for air coolers. So I re-ran it after a few minutes, this time for 15 minutes.

I never saw the reflow pattern, so I suspected an abnormal reflow. I removed the heatsink, and this is the result:

[PIC of the CPU on the MB, will add later]

The red outline marks missing TIM (the outline is added because I suck at taking photos, the lighting is bad, and I have a bad camera).
The yellow outline marks TIM outside the heat spreader, on the plastic retainer of the Indigo XS.

[pic of the removed TIM, carefully peeled on my mousepad]

The small holes were intact before peeling, but got stuck on the CPU heat spreader. The larger "gap" at the corner is the place of improper reflow.

So: it seems reflow had occurred, but some of the metal is on the plastic, and a corner of the CPU heat spreader is bare. I thought that maybe the reflow was abnormal because of interruptions in heat output (the first heat spike caused by the BIOS, and then maybe I stopped burnK7 too early on the first run, just as reflow was about to happen, and missed the chance to see the pattern).

Second attempt:

So, a change of plans for the second run: I decided to disconnect all fans, put some rags inside the case, block all vents and re-try, this time with a more controlled and consistent heat output, without interruptions.

This time I didn't remove the CPU, to skip any new-CPU-installed messages from the BIOS and so minimize the time spent there. I removed the small flakes of TIM stuck on the heat spreader, which was quite easy with the Enerdyne-provided solvent; it is always possible to drop flakes onto the MB, however, so beware! I recommend taking the CPU off the MB while cleaning.

After booting into the BIOS, it complained about fan failures! I had forgotten to disable fan monitoring in the BIOS, and after I did that, the BIOS complained about an overheating CPU and would not let me boot into Linux! The CPU was pegged at around 89°C according to the BIOS! I somehow managed to bypass the error (exit BIOS without saving) and boot into Linux, but: again, the same thing – no reflow pattern after running burnK7 for 15 minutes.

Post-Indigo XS installation

So, my hypothesis was now that the reflow had occurred too fast, even before I had time to run burnK7 in Linux – at least on my second run, but perhaps even on the first. I let the computer cool down, rebooted, and did a burnK7 test with stock CPU settings and fans on – and, lo and behold, the results indicated an adequate thermal interface! Nothing earth-shattering performance-wise, but adequate – though I hadn't run this CPU at stock voltage that much, so I was not sure what to expect. I confirmed my finding by re-enabling my undervolted profile, with which I was already familiar, and noticed that temperatures were on par with my previous installation, so I proceeded with the tests I had done on the PK-3.

This is no guarantee that the reflow actually occurred optimally. It is just good enough (it seemed at least as good as the Prolimatech PK-3). This is something that should be taken into account when interpreting the results! Since the TIM is working at least as well as the PK-3, I'm not going to disassemble the heatsink just yet. But in case I do later, I will update this thread with findings on the reflow pattern of the second installation.

Since I had sensord set up in the background even during both of these reflow boots, I checked the data from my first attempt later on. In hindsight, it seems that I could have achieved similar temperatures even with the first reflow attempt, but since I was worried about the processor, and suspected an abnormal reflow, I didn't have the guts to try it for more than a few minutes. I chose to take a look at the TIM instead and see what had happened.

In hindsight, there are a few things one might want to take into consideration before installing Indigo XS (or equivalent):
  1. In case there is a BIOS setting for the CPU power state or clock speed during bootup, choose the lowest setting. In my case I had chosen "Boot Performance Mode: Max Non-Turbo Performance" = 4000MHz. Had I chosen "Max Battery" (=800MHz) instead, this might have prevented unintentional reflow before having a chance of booting into an OS. Some BIOSes might use P-states even during bootup, which may eliminate this kind of issue – after all, the CPU is idling 99% of the time during bootup.
  2. Make every effort not to cause the BIOS to spit out warnings (such as new CPU installed, or fan failures) before installing the Indigo XS – again to minimize time spent in the BIOS, so as not to have the reflow happen there. This way you have a better chance of booting into an OS to do a controlled reflow with your chosen utilities.
In my opinion, these should be added to the Indigo installation instructions. I haven't contacted Enerdyne as of yet, though.

Re: Spoiler blocks? Tables?

Post by Wild Penguin » Thu Feb 25, 2016 12:03 pm

More about testing procedure and configuration

The tests were done in KDE 5. I shut down all other programs while testing, and took every other conceivable measure to make the test array runs as comparable and consistent as was feasible for me. I disabled my DVB recording software, but left sensord running in the background for data-gathering purposes. Aside from the GUI, no other CPU-consuming processes were left (at idle, X.org was the most CPU-consuming process at < 1%).

Some details about specific tests:
  1. The idle test was done before all other tests, to add some consistency between runs.
  2. OGG encoding was done in a folder on a RAM disk, to remove all disk I/O effects from the test. Two oggenc instances ran constantly during the test (to utilize two cores). I accidentally left some very small wav files I didn't intend to use in the test-file folder for the Prolimatech PK-3 setup, and deleted them for the Indigo XS setup. This caused a small inconsistency, since there is an overhead to restarting the oggenc processes, which shows up as a different kind of temperature graph pattern in the pictures.
  3. Kernel compiling: the Linux kernel was compiled twice in succession, and the temperature during the second pass was used as the result. I thought this was necessary so that the kernel sources would be cached in RAM by the kernel, making the conditions similar between tests, but in reality I believe the difference is negligible, if any.
  4. Folding@home was run with 8 threads (2x4-thread CPU slots, since FAH has problems providing 8-threaded workloads) and a GPU slot for CUDA/Nvidia. This was the only test that also used the graphics card. It generates the most CPU heat – probably because data is transferred to the graphics card (noticeably more heat than burnK7!).
  5. burnK7 is the Linux equivalent of CPUBurn. 8 instances of the burn program were run during the test.
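For the OGG test (item 2 above), keeping an encoder instance busy boils down to a loop per worker, something like this sketch (file paths and the helper structure are hypothetical; the clock and runner are injectable only to make the sketch testable):

```python
import subprocess
import time

def loop_encoder(wav_path, stop_time, run=subprocess.run, now=time.time):
    """Re-encode the same wav file over and over until stop_time.

    One such loop is started per CPU thread to be loaded (two for
    this test). Output goes to /dev/null: only the CPU load matters."""
    while now() < stop_time:
        run(["oggenc", "-o", "/dev/null", wav_path], check=True)

# Usage sketch: two workers -> two loaded cores, as in the test, e.g.
#   t_end = time.time() + 30 * 60
#   for _ in range(2):
#       threading.Thread(target=loop_encoder,
#                        args=("/mnt/ramdisk/test.wav", t_end)).start()
```

Restarting oggenc between files is exactly the overhead mentioned above, which is why leftover tiny wav files changed the temperature graph's shape.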

Re: Spoiler blocks? Tables?

Post by Wild Penguin » Thu Feb 25, 2016 12:13 pm

More about the fan settings chosen:

The ambient temperature in the testing room was recorded during the test, and is provided with the results. However, the temperature sensor I used has only 1°C of resolution! The tests were run at roughly the same time of day (overnight and during a working day, over the 13-hour period!).

Comments about specific fan settings:
  • 1, 2 & 3 (fixed speeds): For static fan speeds, all fans were connected to the MB. Linux allows quite fine-grained voltage control (or PWM control, if that is chosen instead) in 255 steps (0-254) for each fan connector provided by the BIOS and/or controller chip. The actual voltage level is not directly available anywhere (from Linux), so I measured it with a multimeter and determined the PWM value required for a certain voltage level (I was too lazy to make an adapter to connect to the PSU directly, and this seemed far easier). The values for my MB were roughly: 5V=102, 7V=143. I also determined the values for 9V (=183) and the 12V lower limit (=243) while I was at it, just because I found it interesting.
  • 4 (custom program control): I had previously decided that 75°C was a good compromise between quietness and HW longevity, back when I used a Ninja Mini for cooling the same CPU. However, with the NH-C14 the fans would never ramp up at that temperature target – except while folding (not even with CPUBurn)! The program I use has a very dumb algorithm: if the temperature rises above the set point, fan speeds increase, and vice versa. This may cause temperature oscillations, but choosing a sensible delta value and a temperature range alleviates the oscillations. For this testing setup, a very simple configuration was chosen.
  • 5 (BIOS-controlled): I ran this setup mainly for comparison with setting 4. All fans were calibrated with the BIOS's Q-Fan feature and then set to a manual profile that I like – it should be quite quiet, and ramp up at around the same point as my custom program's target temperature. However, I didn't put a lot of effort into tweaking this setup, and just included it as a bonus.
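The "very dumb algorithm" of setting 4 amounts to a step function like the one below. This is only a sketch: the step size and limits are assumed values, with 102 (the ~5V PWM value from the multimeter measurements) used as the floor.

```python
TARGET_C = 75.0               # temperature set point
DELTA = 5                     # PWM step per adjustment (assumed value)
PWM_MIN, PWM_MAX = 102, 254   # floor at the ~5V PWM value, sysfs ceiling

def next_pwm(current_pwm, temp_c, target=TARGET_C, delta=DELTA):
    """One step of the simple control loop: raise fan speed when over
    the target, lower it when under, clamped to the allowed range."""
    if temp_c > target:
        current_pwm += delta
    elif temp_c < target:
        current_pwm -= delta
    return max(PWM_MIN, min(PWM_MAX, current_pwm))
```

With a fixed step like this, the output hunts around the set point, which is exactly the oscillation behaviour described above; the delta and range are what keep it tolerable.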

Re: Spoiler blocks? Tables?

Post by Wild Penguin » Thu Feb 25, 2016 12:45 pm

The Results:

Failure means that a single core's temperature reached 89°C.

I couldn't decide which values to include, so sorry for this somewhat daunting table:

Code: Select all

                         Indigo XS                       Prolimatech PK-3
                 Ambient min     avg     max     Ambient min     avg     max
5v      idle     22      27,2    29,4    33,0    22      28,5    31,5    40,5
5v      oggenc   21      27,1    46,9    59,6    22      28,3    47,2    60,5
5v      k.compil 21      28,0    45,0    62,3    22      29,5    46,4    63,6
5v      fah      21      FAILURE at 27m21s       22      FAILURE at 17m1s
5v      burncpu  21      32,4    63,3    69,8    22      32,4    64,7    72,1
7v      idle     21      24,9    27,0    36,3    22      26,2    28,6    37,6
7v      oggenc   21      25,3    42,8    55,1    22      27,0    43,9    56,6
7v      k.compil 21      25,4    42,5    56,9    22      25,9    43,1    58,1
7v      fah      21      25,9    62,9    72,6    21      26,7    64,3    74,0
7v      burncpu  21      29,3    57,3    63,0    22      29,7    58,4    64,3
12v     idle     21      23,7    25,4    32,2    22      24,1    25,8    31,7
12v     oggenc   21      24,2    40,1    52,4    21      24,4    40,5    53,4
12v     k.compil 21      24,5    40,1    53,2    21      24,5    40,4    53,9
12v     fah      21      24,5    55,8    63,2    21      25,1    56,7    64,0
12v     burncpu  21      25,6    53,3    59,0    22      26,5    54,3    59,8
custom  idle     21      28,1    29,7    34,7    21      28,2    29,8    33,6
custom  oggenc   21      28,0    48,9    62,4    21      28,5    48,9    61,9
custom  k.compil 21      30,2    49,4    67,5    21      30,2    49,8    68,8
custom  fah      21      31,6    67,3    78,7    21      31,9    67,5    79,2
custom  burncpu  21      35,3    68,1    75,4    21      35,3    68,4    75,6
bios    idle     21      26,8    28,4    33,6    21      28,4    29,7    31,5
bios    oggenc   21      27,2    46,9    59,2    21      28,9    47,1    60,0
bios    k.compil 21      28,5    45,9    60,9    21      29,5    47,1    62,5
bios    fah      21      29,5    63,7    72,0    21      30,5    64,3    72,2
bios    burncpu  21      33,3    59,7    65,0    21      33,8    60,2    65,5
In my opinion:
  • The minimum temperature value is not representative in many cases, as it usually occurred at the start or end of a test, and is somewhat random because of variations in the measured temperatures.
  • For the idle tests, I believe the avg value is the most representative (some process could cause CPU spikes, making the max value unrepresentative).
  • For the other tests, the max value is the most representative.

Some excerpts from my automatically generated charts:

The white horizontal lines mark the start and end of the actual test. All shown values were calculated using the data between these time points.

Folding with Indigo XS, fans @5V


Folding with Prolimatech PK-3, fans @5V


BurnK7 x 8 with Indigo XS, fans @5V


BurnK7 x 8 with Prolimatech PK-3, fans @5V


Folding with Indigo XS, fans controlled by my custom program:


Folding with Prolimatech PK-3, fans controlled by my custom program:


I'm not going to spam this forum with too many pictures, however. The rest of the automatically generated charts are on my flickr collection page (link)!

In case someone wants some other figures, ask and I will look into providing them if/when I have the time.

Re: Spoiler blocks? Tables?

Post by Wild Penguin » Thu Feb 25, 2016 1:01 pm

Conclusions and final thoughts:

Main conclusion: I think the Indigo XS is on par with the PK-3. There is a small difference in favor of the Indigo XS, but that difference is mostly < 1°C, and could be explained by changes in my testing-room temperature alone!

Installing the Indigo XS was somewhat difficult and nerve-wracking, considering the reflow procedure. Also, it is possible that the reflow that occurred is still not optimal, although the TIM performance is adequate. If/when I dismantle this setup, I will update this thread and post pictures of what the TIM looks like.

As for the price: most readers probably think that 20€ for two applications is a bit steep, but I think it is not that much, considering that conventional thermal-paste tubes tend to get lost before they are used up =). However, one might as well delid the CPU and use liquid metal under the heat spreader, since I believe delidding and changing the TIM there probably has far more effect on temperatures (and the resulting noise) than using the Indigo XS.

I should have used something with better resolution for the ambient temperature measurements, but my MB's auxiliary temperature sensor has only 1°C resolution. From this data, one can only guess the actual temperature difference, which could be anything from 0,1°C to 1,8°C! But I needed something automated for these testing runs, since the tests would take 13 hours, and there was no way I was going to sit around monitoring them and taking ambient temperature readings! For measuring small differences, however, finer resolution in the ambient temperature would be needed.

Choosing a smaller test array would have been more sensible, and this whole procedure was a bit daunting! I also have my doubts whether anyone reading this will find these posts to have any point :) But this was certainly fun to do, I learned a lot, and now I have some scripts that facilitate doing similar tests in the future (they are way too dirty in their current state for publication and need some cleanup, but in case anyone is interested, I will share them on request).

My current custom program's algorithm tends to cause fan speed / temperature oscillations. This happened in the Indigo XS test, which ruined a chance to produce a result comparable with the PK-3 (there was oscillation there too, but it settled at a kind of plateau). The algorithm could easily be improved to give a better end result – to determine whether changing the TIM (or any part of the cooling solution) actually reduces RPM, and hence dB, at a certain desired CPU temperature. Also, my originally chosen temperature range was too high for the more efficient NH-C14 to give any meaningful results for fan speeds (which was my intention) – instead I got yet another set of temperature readings for a very low (<5V) fan setting. (I had done my preliminary testing mostly on a Ninja Mini, which was less efficient.)

The detection of overheating in my test was a bit unfair (when comparing one failed run to another), since the temperature measurements have some noise in them. This causes some randomness in the actual trigger time, so in case of inadequate cooling the recorded failure time depends partly on pure luck.
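One simple way to take some of the luck out of the trigger would be to require the median of the last few samples to exceed the threshold, instead of any single reading. A sketch of the idea (not something I used in these runs):

```python
from collections import deque
from statistics import median

def make_overheat_check(threshold=89.0, window=3):
    """Return a check(temp) callable that trips only when the median of
    the last `window` readings reaches the threshold, so one noisy
    sample can't abort the test on its own."""
    recent = deque(maxlen=window)
    def check(temp_c):
        recent.append(temp_c)
        return len(recent) == window and median(recent) >= threshold
    return check
```

A single spiky sample gets outvoted by its neighbours; only a sustained overheat trips the abort, at the cost of detecting it one or two polls later.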

As a related side note, I've heard that temperature oscillations can be harmful to HW in the long term, so it is worthwhile to minimize them in a production environment.
