Was 3.7TB - then 2.7TB, now 19.1TB "quiet" server
Wibla wrote: Or his samba version... mine is working just fine, easily saturating gigabit
My samba version is:
Code:
net-fs/samba-3.0.28
Code:
$ aptitude show samba
Package: samba
State: installed
Automatically installed: no
Version: 3.0.24-6etch9
A few tweaks later, my performance is as follows:
Also swapped out the Noctua 120mm fan in the top 4-in-3 module for a Scythe S-FLEX 120mm / 1200rpm unit; HDD temps are a bit more reasonable now:
Code:
Version 1.03 ------Sequential Output------ --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
Oberon 4G 119643 19 70150 11 417908 28 103.5 0
Settings:
Oberon:/mnt/raid# blockdev --getra /dev/sda; cat /sys/block/sda/queue/max_sectors_kb; cat /sys/block/sda/queue/nr_requests; cat /sys/block/sda/queue/scheduler
65536
64
256
noop anticipatory [deadline] cfq
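For reference, a sketch of how the settings shown above could be applied by hand. The device name `/dev/sda` and the values are taken from the output above; these settings don't survive a reboot, so they'd normally go in an init script.

```shell
#!/bin/sh
# Apply the block-layer tunables shown above (values from the post's output;
# adjust the device name to match your array members).
DEV=sda

blockdev --setra 65536 /dev/$DEV                  # readahead, in 512-byte sectors
echo 64  > /sys/block/$DEV/queue/max_sectors_kb   # max I/O size per request, in KB
echo 256 > /sys/block/$DEV/queue/nr_requests      # request queue depth
echo deadline > /sys/block/$DEV/queue/scheduler   # pick the deadline I/O scheduler
```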
Time to upgrade this server soon, depending on the paycheck I get on the 22nd...
Upgrade path:
Tagan 480W -> Tagan 480W + Corsair CX400
Asus P5WDG2-WS -> Asus P5E WS PRO
Celeron 3.2GHz -> E5200
2GB ram -> 4-6GB ram
10x500GB -> 5x1.5TB (maybe 4, not sure yet)
30GB WD -> 80-120GB Seagate 2.5" SATA (system drive)
Keeping the 3ware controller, and considering Windows Server 2003/2008 instead of Debian Linux, or VMware of some sort.
I'm not 100% certain how good the Seagate 1.5TB drives are, so this server will be treated as a beta test box for a while.
This setup should give me 5.44TB or 4.08TB usable in raid5, and is expandable to 13.6TB usable space (11x1.5TB + hotspare).
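The capacity figures above check out: RAID5 spends one drive on parity, and "1.5TB" on the box is decimal while the OS reports binary TiB. A quick worked check (the small gap vs the quoted 5.44/4.08 figures is likely filesystem overhead):

```shell
# Usable RAID5 space = (drives - 1) x 1.5TB, converted from decimal TB to TiB.
tib() { awk -v n="$1" 'BEGIN { printf "%.2f\n", n * 1.5e12 / 2^40 }'; }

tib 4    # 5 x 1.5TB in RAID5 -> 4 data drives  -> ~5.46 TiB (the ~5.44TB above)
tib 3    # 4 drives           -> 3 data drives  -> ~4.09 TiB (~4.08TB)
tib 10   # 11 drives + spare  -> 10 data drives -> ~13.64 TiB (~13.6TB)
```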
Wait... which 3ware controller are you going to use? You had problems before with that motherboard and its PCIe-to-PCI-X bridge chip (see here)...
Also, be aware that while Server 2008 is awesome (so many cool features!), you won't be able to get support for your 9550SE card or use Hyper-V with the E5200 (if you want to use virtualization). The 9650SE models are supported with the latest drivers - Link to KB article.
According to Asus the problems should be solved, but I'm intending to test it in a normal PCI slot as well; these are 66MHz on the newer mobos, which gives a max speed of 266MB/s, and that is actually enough.
I was planning to keep the 9500S; according to the KB it should work, just not be officially supported, which doesn't matter much to me - the "it works" part is the key.
Edit: Might also get a Supermicro SATA/SAS controller and combine that with onboard ports, then run mdraid in Linux, or something on WHS... choices, choices...
That paycheck never worked out as it should, and my car had to be fixed, so I was broke for the last part of December plus half of January. Then I decided to leave it alone for the time being...
But not anymore.
Two drives in the RAID5 array decided to go bonkers at the same time, leaving me with a degraded array, so there was no choice but to dismantle the server to get the drives out and rebuild it.
Must say the filters on the CM Stacker really work, but the insides of this server still look like hell...
Duct tape works wonders to keep the air going where it should, at least in theory...
Very bad choice to install the PSU like that; too bad it didn't fit above the mobo.
Dust...
Moar dust!
These worked as advertised, very happy about that... next stop for the filters is some heavy-duty cleaning, then reassembly.
With absolutely NO cable management it was pretty easy to get the old PSU out; didn't even have to remove the back panel...
6 drives off one Molex connector - probably not the smartest thing in the world...
Edit:
Current status is: new PSU in, all power laid out and ready, with spare cables for more drives, cabled up for 10 SATA + 1 IDE, with provision for 4-6 more SATA if needed.
Drive cages removed, defective drives removed from the cages; probably going to reorganize the drives and clean the dust off each drive and cage at the same time. I've vacuumed the fans to get the worst dust off, and might wipe them with a moist cloth tomorrow to remove the rest...
Plans for the next couple of months are to test the 3ware as a pure SATA controller, with the remaining 8 500GB drives in mdraid6 on Debian Linux. I have good hopes that it will be faster than the old HW RAID5, but in any case it'll only be used for playing around and an extra backup of important stuff, in the improbable case that monster fails.
Come September/October and the tax money, I'm planning on getting new 1.5TB Samsung drives and a UPS so I can actually upgrade capacity overall, aiming for 18-20TB formatted, with monster being 9.1TB of that.
Box is almost ready for boot... just need to clean out the CPU heatsink, looking at replacing it with a Xigmatek as well, and probably swapping out the stone-age-era 30GB for a 2.5" SATA drive.
Seems I've forgotten the old tricks from wiring up monster though, meh. Not gonna waste more time on it.
Velcro tape to anchor down the system drive also gives some vibration dampening... (though not a lot)
Komplett.no has a weekend sale, with 2TB Seagate drives for 999 NOK, about USD $168... ordered 6, which are going into this server... stay tuned
(I just hope I get them; when I finished ordering, the price was gone from the product info, and no stock info was shown... meh)
That should bring my total storage up to about 19TB if I run them in RAID6, or 20.9TB if I run them in RAID5... time will show.
I don't have any pictures of the server yet since I haven't installed the last 4 HDDs; however, in the link in my sig, I used the case and motherboard from the P180B build. I also just rebuilt/upgraded my NSK-2480 for like the 3rd time since I've owned the case. I'll take some pics, make a post, and link you from here.
monster is currently 99% full, and oberon (this box) is only used for backups atm.
This is a long-term setup; with 19TB formatted (RAID6) it should last at least 1.5-2 years (even when moving 8-9TB of data from monster to this server) - long enough to replace the drives in monster with the current "cheapest per TB" drive once the 19TB is full.
Upgrade finally done...
The Seagate LP drives are no noisier than the T166s they replaced, at least that is the first impression. The noisiest thing in the box is still the WD 30GB.
Code:
/dev/md0 20T 5.5M 20T 1% /mnt/raid
The drives pull 5.5W idle (90W total), 6.8W average (110W total), and max current draw at boot is 2A per drive on the 12V rail, so 24W per drive or ~380W total. I'd probably get into trouble if I tried to run 20 drives, but 16 is just fine.
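The per-drive and total numbers above are consistent with 16 drives, with the spin-up figure coming from 2A on the 12V rail:

```shell
# Cross-check the power figures: 16 drives, spin-up draw is 2A x 12V = 24W each.
awk 'BEGIN {
    n = 16
    printf "idle:    %d W\n", n * 5.5      # -> 88 W  (~90W measured)
    printf "average: %d W\n", n * 6.8      # -> 108 W (~110W measured)
    printf "spin-up: %d W\n", n * 2 * 12   # -> 384 W (~380W quoted)
}'
```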
I'm planning on replacing the 3ware with two Supermicro PCI-X 8-port controllers, as it's getting a tad slow.
I also had to use a 64-bit kernel to get the filesystem running; 32-bit Linux has a 16TB limit...
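That 16TB limit comes from the page cache: 32-bit kernels index a file or block device with a 32-bit page number, and pages are 4KiB, so the addressable size tops out at 2^32 x 4KiB:

```shell
# 2^32 pages x 4096 bytes per page, expressed in TiB (2^40 bytes).
echo $(( (1 << 32) * 4096 / (1 << 40) ))   # -> 16
```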
WOW! I love that much space; my file server has 3TB, which is enough for me, but this setup puts mine to shame.
Wibla wrote: Upgrade finally done...
The Seagate LP drives are not noisier than the T166 they replaced, at least that is the first impression. The noisiest thing in the box is still the WD 30GB.
Code:
/dev/md0 20T 5.5M 20T 1% /mnt/raid
what are you storing on this thing?
I'm not doing anything to compromise the stability of the CPU; as I'm running swraid I can't afford any oddities. The CPU fan isn't really audible outside the case either.
The WD 30GB is on the shortlist of stuff to fix, and I'm contemplating getting a Xigmatek 1283 for the CPU, as I did on the 9.1TB box. IDE vs SATA isn't really a big issue; I'm planning on putting in a Samsung 320GB IDE drive to replace the WD 30GB, just to get the noise level down, and because I need to reinstall the server with 64-bit Debian anyhow.
I wasn't planning on putting a lot (more) cash into this setup; it was basically "toss the 500GB'ers, put in something larger, leave it at that"...
@mark: The world's largest collection of pron, obviously (or maybe not )
Backups are taking up more and more space; truth be told, I'm guessing at least 8TB of this server's total capacity will be backups from the old box before I even start putting new stuff on it...
Some performance numbers:
I think I'm being bitten by the 64K chunk size from Linux md, and the fact that the 3ware isn't strong as a pure SATA controller any which way you turn it... it will need some tweaking to break the 100MB/s barrier. First off will probably be a RAID rebuild to 128K chunk size... And maybe a couple of Supermicro SAS/SATA cards? hmhm, tempting! (ebay <3 )
Code:
Version 1.03 ------Sequential Output------ --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
Oberon 4G 74787 16 68462 19 521149 53 113.3 0
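For the chunk-size rebuild mentioned above, one route is an in-place reshape with mdadm; this needs a reasonably recent mdadm (3.1+) and kernel, and is slow. Recreating the array (which destroys its contents) is the other option. The array name `/dev/md0` is taken from the df output earlier; the backup-file path is an assumption.

```shell
# Sketch: change the md chunk size from 64K to 128K in place.
# Requires mdadm 3.1+ and a kernel with reshape support; the backup file
# must live on a device outside the array being reshaped.
mdadm --grow /dev/md0 --chunk=128 --backup-file=/root/md0-reshape.bak

# Watch progress:
cat /proc/mdstat
```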
Replaced the WD 30GB PATA with a Samsung 320GB PATA for much improved performance on system stuff, and a lot less noise... Went from Debian Etch with a Lenny-backports 2.6.30-amd64 kernel to Debian Lenny with the stock 2.6.26-2-amd64 kernel, and to my surprise performance increased a LOT.
This is after I enabled write-back cache on the hard drives and tweaked readahead and stripe_cache a bit... Trying to do the same on the old kernel netted me the result in the previous post, not exactly impressive.
End result: the server is (almost) "ready" for production, noise levels are definitely tolerable, and I'm happy
Code:
Version 1.03d ------Sequential Output------ --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
oberon.lan.wibla 4G 118752 34 110140 38 491916 59 284.5 1
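The tweaks described above, sketched out for an md array at `/dev/md0`. The member device names (`sda`..`sdh`) and the exact readahead/stripe-cache values are assumptions, not taken from the post; the post only says the knobs were tweaked "a bit".

```shell
#!/bin/sh
# Enable write-back cache on each array member (assumed names sda..sdh).
for d in /dev/sd[a-h]; do
    hdparm -W1 "$d"
done

# Bump readahead on the array device (value in 512-byte sectors; assumed).
blockdev --setra 16384 /dev/md0

# Enlarge the md stripe cache (RAID5/6 only; value in pages per device; assumed).
echo 8192 > /sys/block/md0/md/stripe_cache_size
```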