Posted: Sun Nov 16, 2003 5:29 pm
Thank you scalar for posting this interesting information...
shadow947 wrote: Hi, does anybody know if notebook hard disk temps are the same as desktop PC drives? How much heat can a notebook hard disk take? Normally this type of disk doesn't have good airflow...

Most 3.5" desktop drives suggest no higher than 55C. Most notebook 2.5" drives suggest no higher than 60C. The nice thing about notebook drives is that they generally consume far less power, and thus produce less heat too. Producing less heat while tolerating more heat is a very good combination to have.
dukla2000 wrote: al bundy (et al) - Sure, in the past/historically/in actual data very few (if any) have had actual heat-related HDD failures. But as with investment performance, past experience is no guarantee of the future.

I took a statistics class, and that formula isn't making any sense to me because it has no way of accounting for variability.
grandpa_boris wrote: ... that's over 68 years of operation. ...

Not necessarily. My stats is virtually zero, but I remember somewhere a good post on what MTBF really means, and this Googled result is more or less what I remember. Now I can't work the arithmetic in the example:
R = exp(-43800/250000) = 0.839289
But bottom line: with a 7200.7 over (say) a 4-year life, the stats are actually saying there is an x% (say 92%? - I can't interpret what the exp function is in that equation!) probability my drive will last that long.
By looking after the drive environment I am trying to increase the probability of no failure. Coming back to the 'bad' environments: again, the stats are only saying the probability you will survive is lower, not necessarily zero. In no way am I finger-pointing or asserting your drive WILL fail: it is just an inner smugness that I believe my drive has a better chance of lasting 4 years than yours.
[edit] ps - managed to work the arithmetic: it is natural log (e) based. So for a 600000-hour MTBF and a 4-year life, the probability of operating without failure is 94.3%. [/edit]
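To sanity-check the arithmetic, here is a minimal Python sketch of the exponential reliability model those numbers come from, R = exp(-t/MTBF). The hour and MTBF figures are just the examples from this thread, not vendor specs:

[code]
import math

def reliability(hours: float, mtbf_hours: float) -> float:
    """Probability of surviving `hours` of operation under the simple
    constant-failure-rate model: R = exp(-t / MTBF)."""
    return math.exp(-hours / mtbf_hours)

# 5 years of 24/7 operation (43800 h) on a 250000 h MTBF drive:
print(reliability(43800, 250000))     # ~0.839, as in the example above

# 4 years of 24/7 operation on a 600000 h MTBF drive:
print(reliability(4 * 8760, 600000))  # ~0.943, i.e. dukla2000's 94.3%
[/code]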
Elixer wrote: I took a statistics class, and that formula isn't making any sense to me because it has no way of accounting for variability.

worse yet, a simple statistical model is inapplicable. disk failure rates aren't constant over time: they follow a bathtub-shaped curve. disks either fail shortly after being deployed, or run well for a long time -- and then all disks from the same batch fail at almost the same time. i have actually seen this happen in real life. it ain't pretty.
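grandpa_boris's point can be illustrated with a toy hazard-rate sketch. The Python below (with completely made-up Weibull parameters, not measured drive data) adds an infant-mortality term, a constant random-failure term, and a wear-out term to get a bathtub shape:

[code]
def weibull_hazard(t: float, shape: float, scale: float) -> float:
    """Instantaneous failure rate of a Weibull distribution.
    shape < 1 falls over time (infant mortality), shape = 1 is
    constant, shape > 1 rises over time (wear-out)."""
    return (shape / scale) * (t / scale) ** (shape - 1)

def bathtub_hazard(t: float) -> float:
    """Illustrative bathtub curve; all parameters are invented."""
    infant   = weibull_hazard(t, shape=0.5, scale=20000)
    random_  = 1.0 / 600000  # constant rate from a 600000 h MTBF
    wear_out = weibull_hazard(t, shape=4.0, scale=60000)
    return infant + random_ + wear_out

# Failure rate dips after burn-in, then climbs again near end of life:
for hours in (100, 1000, 10000, 30000, 50000):
    print(f"{hours:>6} h: {bathtub_hazard(hours):.2e} failures/hour")
[/code]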
grandpa_boris wrote: and then all disks from the same batch fail at almost the same time. i have actually seen this happen in real life. it ain't pretty.

So if I buy a bunch of the same hard drives at once and they all happen to be from the same batch, then put them into a RAID 5 array thinking I'll be fine as long as no more than one drive fails at a time, I might be surprised down the road? That would suck.
shunx wrote: So if I buy a bunch of the same hard drives at once and they all happen to be from the same batch, then put them into a RAID 5 array thinking I'll be fine as long as no more than one drive fails at a time, I might be surprised down the road? That would suck.

that's what the numbers i've seen (and referenced) suggest. as i have mentioned, i've seen this happen in real life with enterprise-grade SCSI disks. however, it takes time for a disk to fail. if you monitor the SMART info from your drives, you'll be able to detect deterioration before it spreads and becomes fatal. if you quickly replace (and resync) the failing or about-to-fail drives, you should be able to get through the "mass die-off" with little trouble.
grandpa_boris wrote: if you monitor the SMART info from your drives, you'll be able to detect deterioration before it spreads and becomes fatal. if you quickly replace (and resync) the failing or about-to-fail drives, you should be able to get through the "mass die-off" with little trouble.

Other than the temperature, what other things in particular should users look for in order to detect potential failures? I've merely used MBM to check temperatures.
shunx wrote: Other than the temperature, what other things in particular should users look for in order to detect potential failures? I've merely used MBM to check temperatures.

look for accumulations of successfully retried read and write errors, which the SMART firmware in the disk drives keeps stats on. SMART readouts, at some point, start suggesting disk replacement. it's quite unambiguous.
grandpa_boris wrote: look for accumulations of successfully retried read and write errors, which the SMART firmware in the disk drives keeps stats on. SMART readouts, at some point, start suggesting disk replacement. it's quite unambiguous.

I didn't know it was possible to do this. What software could we use to get the info? Thanks.
shunx wrote: I didn't know it was possible to do this. What software could we use to get the info? Thanks.

DTemp will certainly do it.
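For Linux users, smartmontools' smartctl exposes the same counters DTemp reads on Windows. Below is a rough Python sketch that shells out to `smartctl -A` and flags a few attributes worth watching; the device path and attribute names are assumptions that vary by drive and firmware:

[code]
import subprocess

# Attribute names as printed by `smartctl -A` on typical ATA drives;
# your drive's firmware may label them differently.
WATCH = ("Reallocated_Sector_Ct", "Current_Pending_Sector",
         "Offline_Uncorrectable")

def smart_attributes(device: str = "/dev/sda") -> dict:
    """Parse the vendor attribute table from `smartctl -A <device>`."""
    out = subprocess.run(["smartctl", "-A", device],
                         capture_output=True, text=True,
                         check=True).stdout
    attrs = {}
    for line in out.splitlines():
        fields = line.split()
        # Attribute rows start with a numeric ID, then the name;
        # the tenth column is the raw value.
        if len(fields) >= 10 and fields[0].isdigit():
            try:
                attrs[fields[1]] = int(fields[9])
            except ValueError:
                pass  # some raw values aren't plain integers
    return attrs

for name, raw in smart_attributes().items():
    if name in WATCH and raw > 0:
        print(f"warning: {name} raw value is {raw}")
[/code]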
Jan Kivar wrote: ...there is no simple high limit for drive temperature...

MikeC wrote: Well, there is, if you consider the S.M.A.R.T. temp, which comes off the internal temp diode, to be a reasonable representation of internal drive temp. This would naturally include the effect of ambient temp.

Arrhenius’ rule is from chemistry... A 10C change in temperature doubles (or halves) a chemical reaction rate. Rules of thumb get stretched in all directions. For reliability, the temperature delta is important, but not nearly as important as the temperature delta divided by the time taken to change the temperature (dT/dt).
Regarding the "10°C rule for electronics/mechanics": A temperature rise of 10°C will halve the expected lifetime.
There is some question about where this originated. I recall reading somewhere that it was pulled out of the air by some smartass contractor for the US military who wanted to sell more cooling for electronic gear?... Probably totally distorted.
Speaking seriously, we can’t help recalling Arrhenius’ rule from the U.S. Department of Defense Military Handbook 217 (this book used to be the court of first instance in all questions concerning electronics reliability). The rule suggests that, over the temperature range from -20 to 140C, every 10C drop in temperature doubles the expected life of the equipment. Military Handbook 217 is no longer in use, and the rule shouldn’t be taken literally; temperature varies between different parts of a single PC case, for example. Still, the book had its truth: high chip temperatures don’t bode well for longevity.
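Read literally, the 10C rule is just exponential scaling of expected lifetime with temperature. A small Python sketch of that reading (the 5-year baseline is an arbitrary example, not a handbook figure):

[code]
def lifetime_factor(delta_t_celsius: float) -> float:
    """Relative expected lifetime under the '10C rule': every 10C
    rise halves the lifetime, every 10C drop doubles it."""
    return 2.0 ** (-delta_t_celsius / 10.0)

baseline_years = 5.0  # hypothetical rated life at some reference temp
print(baseline_years * lifetime_factor(15))   # 15C hotter: ~1.8 years
print(baseline_years * lifetime_factor(-10))  # 10C cooler: 10 years
[/code]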
exrcoupe wrote: So after all this debate, it is still left undecided and it's left up to what you're comfortable with? But it seems that the general consensus is that 55C is a safe range, correct?

I came upon this forum while googling safe hard drive temperatures. We have similar debates on the Apple boards about whether the iMac G5 runs too hot and the ramifications of hard drive temps.