New fileserver - 12x1TB, Stacker, Q6600, 3ware...

Wibla
Friend of SPCR
Posts: 779
Joined: Sun Jun 03, 2007 12:03 am
Location: Norway

New fileserver - 12x1TB, Stacker, Q6600, 3ware...

Post by Wibla » Wed May 14, 2008 4:05 pm

Hiya!

I just got a good offer on 1TB Samsung F1 drives ($200 apiece), so my new fileserver project is finally going airborne...

Plans for now:

Coolermaster Stacker Classic + 4in3 modules
Asus P5E WS PRO
Intel Q6600
Xigmatek cpu cooler
2x2GB PC2-6400 Corsair ram
3ware 9500S-12 PCI-X controller
12x1TB Samsung F1
Seagate 80GB SATA 2.5" 7200rpm system drive
Nexus 120mm fans
Corsair HX520 PSU

Any comments or hints?

I'm uncertain about how I'm gonna configure the RAID array(s). Running 12x1TB in RAID5 is definitely a no-go, but the 3ware 9500S-12 doesn't support RAID6, and I'm not sure if RAID50 is a good idea, so any insight? :)

I'm definitely not expecting this rig to be quiet, or silent, or anything like that, but I'm going for at least somewhat lower noise levels than with the old FS I have with 10x500GB Samsung drives...

nutball
*Lifetime Patron*
Posts: 1304
Joined: Thu Apr 10, 2003 7:16 am
Location: en.gb.uk

Re: New fileserver - 12x1TB, Stacker, Q6600, 3ware...

Post by nutball » Wed May 14, 2008 11:38 pm

Wibla wrote: I'm uncertain about how I'm gonna configure the RAID array(s). Running 12x1TB in RAID5 is definitely a no-go, but the 3ware 9500S-12 doesn't support RAID6, and I'm not sure if RAID50 is a good idea, so any insight? :)
You could try configuring them as two six-drive RAID5 volumes (or 6 + 5 + hot spare), then either live with them as two file-systems if that's acceptable, or use a logical volume manager to make them appear as a single volume.
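Under Linux, LVM makes that pretty painless. A rough sketch only (I'm assuming the two arrays show up as /dev/sdb and /dev/sdc - check your own device names):

    # Make both RAID5 volumes into LVM physical volumes and pool them:
    pvcreate /dev/sdb /dev/sdc
    vgcreate storage /dev/sdb /dev/sdc

    # One big logical volume spanning both, with a filesystem on top:
    lvcreate -l 100%FREE -n data storage
    mkfs.ext3 /dev/storage/data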

You'd be losing two drives to parity, as you would in a single RAID6 volume, but as you say RAID5 over a 12TB array is a bad idea. I wouldn't like to guess how the resilience of 2 x 6TB RAID5 compares to 1 x 12TB RAID6 on a formal statistical basis though :)

Or... buy a RAID controller which does do RAID6 :) If it were me building such a thing I'd be looking at the Areca cards. These support RAID6, and also MAID (spin-down) which might be useful if access to the files on the server is sporadic.

lm
Friend of SPCR
Posts: 1251
Joined: Wed Dec 17, 2003 6:14 am
Location: Finland

Post by lm » Wed May 14, 2008 11:50 pm

How many simultaneous users will it have? Reading and serving hard drive data hardly needs a quad-core if you're the only user. Something like 2% of a single core's performance goes to running one drive at full speed.

gb115b
*Lifetime Patron*
Posts: 289
Joined: Tue Jun 13, 2006 12:47 am
Location: London

Post by gb115b » Thu May 15, 2008 1:49 am

I heartily recommend the Areca cards...

Also, SpeedFan is hopefully going to support HDD temp monitoring from them soon too!

I have to ask though: with 12 drives, why no hotswap?

protellect
Posts: 312
Joined: Tue Jul 24, 2007 3:57 pm
Location: Minnesota

Post by protellect » Thu May 15, 2008 4:52 am

I also recommend the Areca cards. RAID-6 is the way to go for something that big.

that Linux guy
Posts: 213
Joined: Thu Mar 20, 2008 8:51 am
Location: In the server room, playing Trackmania

Post by that Linux guy » Thu May 15, 2008 5:05 am

Man, 12TB. That's a lot of pr0n.

Seriously though, +1 for 2x RAID 6 arrays under a single LVM volume.

Wibla
Friend of SPCR
Posts: 779
Joined: Sun Jun 03, 2007 12:03 am
Location: Norway

Post by Wibla » Thu May 15, 2008 6:02 am

The 3ware 9500S-12 is set, so I'll probably go for 5x1TB and 6x1TB RAID5 arrays with one hotspare.

"Just" buying an Areca card is out of the question right now - a decent one is easily $1k, and the waiting involved... not acceptable ;)

It's gonna run Debian Linux with a 64-bit kernel, and the 3ware card exposes SMART on all its ports, so I have full temperature monitoring with smartmontools and munin.
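For the curious, smartctl talks to the drives behind the 3ware card individually. From memory, so treat it as a sketch - on the 9000-series cards the Linux device node is usually /dev/twa0:

    # Full SMART report for the drives on ports 0 and 1:
    smartctl -a -d 3ware,0 /dev/twa0
    smartctl -a -d 3ware,1 /dev/twa0

    # Just the temperature attribute, handy for graphing in munin:
    smartctl -A -d 3ware,0 /dev/twa0 | grep -i temperature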

gb115b: I never said hotswap was out of the question, I just said that RAID5 with 12 drives in one array was a no-go :)

Lliam
Posts: 104
Joined: Sun Aug 11, 2002 3:26 pm
Location: London UK

Post by Lliam » Thu May 15, 2008 10:42 am

I like what you're doing!

Forgive the question, as it's for my own education, but is software RAID (mdadm) on Linux a no-go for this array size?

Would there be too much CPU overhead in running the array, even for a quad-core?
(I run a 500GB mdadm RAID1 on my very modest 64-bit dual-core Ubuntu server - mainly SlimServer.)

I suppose if your RAID card gets FUBAR'd you'd have to buy another similar card to get your array back. Would you buy two cards and keep one as a spare?

Finally what are you using the server for?

Best wishes and good luck with the project...keep us posted!

nick705
Posts: 1162
Joined: Tue Mar 23, 2004 3:26 pm
Location: UK

Post by nick705 » Thu May 15, 2008 12:36 pm

Wibla wrote: gb115b: I never said hotswap was out of the question, I just said that RAID5 with 12 drives in one array was a no-go :)
It'll be awkward if you're using the Stacker with the stock 4-in-3 modules - when one drive dies you'll have to physically remove the whole module to replace it, which will mean taking all four drives in that module offline. You'd need to power the server down anyway, as you'll have to take the sides and front off, and you don't want to be yanking the case around with the HDDs running (unless you want to risk kissing goodbye to more drives).

If you want hot-swappability, you really need bays designed for the purpose, but I guess that would add a fair bit to the cost... maybe not that much in the overall scheme of things though.

P.S. What do you plan to do with 12TB of storage... ? :shock:

Wibla
Friend of SPCR
Posts: 779
Joined: Sun Jun 03, 2007 12:03 am
Location: Norway

Post by Wibla » Sat May 17, 2008 7:08 pm

Backups, HDTV for my HTPC/projector rig, prono, mp3, the usual stuff...

I know 4in3s and dead drives are a bitch, been through it, but normal hotswap cages (4in3s or 5in3s) are way too noisy, and noise IS a factor..

However, with a hotspare and one 5-drive and one 6-drive RAID5 array, I should be good for a while, hopefully... and with stable temps and not too much load the drives should last.

Wibla
Friend of SPCR
Posts: 779
Joined: Sun Jun 03, 2007 12:03 am
Location: Norway

Post by Wibla » Mon May 19, 2008 1:39 am

Hrm... I'm gonna keep the original 5.25" fronts and filters from the Stacker, which do add some air impedance... so which fans are best for max airflow at the lowest noise in this configuration? (5.25" cover + filter -> fan -> 4in3 module)

I'm happy with the Nexus and Scythe S-FLEX fans I'm running in my current Stacker, but I'm wondering whether to go for SlipStreams instead?

Wibla
Friend of SPCR
Posts: 779
Joined: Sun Jun 03, 2007 12:03 am
Location: Norway

Post by Wibla » Fri May 23, 2008 9:11 am

[Build photos]
Last edited by Wibla on Sat Nov 29, 2008 6:24 pm, edited 1 time in total.

m^2
Posts: 146
Joined: Mon Jan 29, 2007 2:12 am
Location: Poland

Re: New fileserver - 12x1TB, Stacker, Q6600, 3ware...

Post by m^2 » Sun May 25, 2008 1:18 am

nutball wrote: I wouldn't like to guess how the resilience of 2 x 6TB RAID5 compares to 1 x 12TB RAID6 on a formal statistical basis though :)
RAID 6 is more secure.
Both arrays survive a single drive loss, and both die with 3+ failures, so the interesting case is two drives failing.
RAID 6 always survives; the 2x RAID 5 setup survives only if the dead drives land in different arrays. With 12 drives split into two sixes, the second failure hits the other array with probability 6/11, so you get slightly over 50%.

Brians256
Posts: 19
Joined: Fri Oct 28, 2005 9:45 am
Location: Klamath Falls, OR

Post by Brians256 » Mon Jun 02, 2008 9:16 pm

Nice system, Wibla! It looks as if you built the system around a 3ware PCI-X card that you already had instead of buying their latest. I think the PCIe card is much faster for RAID5, not even counting the faster bus speed. The good thing is that the array should be easy to migrate to a newer card if/when you decide to upgrade. I upgraded from one generation to another quite smoothly (i.e. the array didn't even notice!).

Interesting to see how many people recommend the Areca. Is this mainly based upon the Tom's Hardware reviews about two years ago? The most recent incarnations of Areca and 3Ware support RAID6 and seem to be about the same speed.

Personally, I have an 8-drive (750GB) RAID-5 setup using the 3ware 9650SE card. It works great, giving me just about 4.5TB of formatted NTFS space (about 0.5TB is used by NTFS overhead).

The performance is great and the only problem I've ever had has been overheating of the battery. However, that is not 3ware's fault. It turned out that the back-side exhausts had gotten blocked, and very few servers can cool themselves without moving some air. In spite of that, the thing kept running! I never lost a bit of data.

RAID-6 would be a good thing, though. I'm a bit nervous about having only one drive as parity because all the drives are the same make/model. I'll probably mirror the array onto another server when 1TB drives drop down in price.

Real backup just isn't financially practical. How do you guys handle it? My strategy is to ignore the problem and remember that all this stuff on my server is available elsewhere (system images and media can be recreated).

P.S. The oddest thing (from what I've seen of other large-drive home users) is that this array is pron-free.

Nick Geraedts
SPCR Reviewer
Posts: 561
Joined: Tue May 30, 2006 8:22 pm
Location: Vancouver, BC

Post by Nick Geraedts » Mon Jun 02, 2008 10:04 pm

Brians256 wrote: Personally, I have an 8-drive (750GB) RAID-5 setup using the 3ware 9650SE card. It works great, giving me just about 4.5TB of formatted NTFS space (about 0.5TB is used by NTFS overhead).
Half a tera for NTFS overhead? I've never heard of any volume requiring that much... I've got a 6x500GB RAID5 array on my 9650SE, and my total size comes out to exactly what calculations say it should - 2.27TiB (2.5TB).

Are you using GPT disks for that array? You'd need to if you want that 4.5TB to be on a single volume.
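The arithmetic is simple enough: an n-drive RAID5 gives you (n-1) drives' worth of space, and converting the vendor's decimal bytes into the TiB Windows reports is a single division. A quick sanity check with bc (assuming GNU bc is handy):

    # 6x500GB RAID5 = 5 data drives' worth of space, in TiB:
    echo "scale=2; 5 * 500 * 10^9 / 2^40" | bc
    # -> 2.27, exactly what Disk Management shows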
Brians256 wrote:RAID-6 would be a good thing, though. I'm a bit nervous about having only one drive as parity because all the drives are the same make/model. I'll probably mirror the array onto another server when 1TB drives drop down in price.
RAID-6 isn't a single drive holding parity - it's dual distributed parity. Any two drives within the array can fail while still maintaining data integrity. If you've got the space for it, I say giv'er. My next upgrade is likely going to be 8x1TB in RAID5, but anything more than that and I'd go for RAID6.
Brians256 wrote:Real backup just isn't financially practical. How do you guys handle it? My strategy is to ignore the problem and remember that all this stuff on my server is available elsewhere (system images and media can be recreated).
My RAID drives are not my backup solution, but my everything-file-storage. My backups are still taken from the storage array and mirrored daily to an external 500GB drive. Of course, not all the data goes there, but all the important stuff - documents, code, pictures. Most music and video can be found/ripped again, but those other files... those are precious.
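Nothing fancy behind it, either - a nightly cron job along these lines does the trick (a sketch only; the paths and the /mnt/backup mount point are just examples):

    #!/bin/sh
    # Mirror the irreplaceable stuff to the external drive.
    # -a preserves permissions/times, --delete keeps the mirror exact.
    rsync -a --delete /storage/documents/ /mnt/backup/documents/
    rsync -a --delete /storage/code/ /mnt/backup/code/
    rsync -a --delete /storage/pictures/ /mnt/backup/pictures/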

Brians256
Posts: 19
Joined: Fri Oct 28, 2005 9:45 am
Location: Klamath Falls, OR

Post by Brians256 » Sun Jun 08, 2008 7:26 am

I was guessing on the 500GB for overhead because I heard that NTFS uses 10% for overhead. Whatever the case may be, I'm getting a bit over 4.5TB of space. At least, that's what I remember. It's been a while.

I don't remember what I'm using, to be honest. It may be GPT and probably is. I started with four 750GB drives (gotten on a good sale at Fry's), then picked up more as sales coincided with availability of cash. So the initial "drive" was auto-carved at 2TB by the 3ware card, and the second drive is 2.5TB because I turned off auto-carving. I think I've got one non-GPT "drive" and another that's GPT. I was ignorant at the start, oh well. Having two "drives" may add to the overhead.

RAID-6 - I know what it is. I'm nervous about having a single parity drive in my RAID-5 array, which makes me want the extra security that RAID-6 offers. However, I want the space more than I want the safety. My next upgrade could be a newer 3ware card (12 or 16 ports?) to let the array just transition over to a larger set of drives. Or I could just trim back on my online media. Hmm... probably not. :D

andyb
Patron of SPCR
Posts: 3307
Joined: Wed Dec 15, 2004 12:00 pm
Location: Essex, England

Post by andyb » Sun Jun 08, 2008 11:30 am

Brians256 wrote: I was guessing on the 500GB for overhead because I heard that NTFS uses 10% for overhead.
The total amount "lost" between the advertised size of, say, 1TB (1000GB) and what Windows shows after NTFS formatting is about 7%, and this is the same for all drives. A 1TB HDD gets you about 930GB of usable space; your 750GB drives individually give you 697.5GB.


Andy

Brians256
Posts: 19
Joined: Fri Oct 28, 2005 9:45 am
Location: Klamath Falls, OR

Post by Brians256 » Tue Jun 10, 2008 8:06 am

Your math appears to correlate well with my memory. I'm out of town so I can't check, but the 697GB figure rings a bell. That gives me about 4.9TB for the entire array, which is very reasonable.

Now, in 8 years time, that will be on a USB-4 stick selling at Fry's for $39.95. Adjusted for inflation, of course, which means it might be $89.95, or a half-tank of gas.

Nick Geraedts
SPCR Reviewer
Posts: 561
Joined: Tue May 30, 2006 8:22 pm
Location: Vancouver, BC

Post by Nick Geraedts » Tue Jun 10, 2008 8:52 am

The loss in available space has nothing to do with formatting - nothing. It's simply a matter of different terminologies from manufacturers and operating systems.

When you purchase a 1TB drive, the 1 terabyte means 10^12 bytes. Similarly, a 500GB drive is 500 x 10^9 bytes. The "TB" and "GB" that Windows shows you in My Computer and Disk Management are in fact tebibytes and gibibytes, which are 2^40 and 2^30 bytes respectively.

If you take 1GB/1GiB, you'll end up with 0.931 - that "7% loss" andyb was talking about.
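Easy to verify on any Linux box (assuming GNU bc):

    # gigabyte vs gibibyte:
    echo "scale=3; 10^9 / 2^30" | bc     # -> .931
    # terabyte vs tebibyte - the gap grows with each prefix:
    echo "scale=3; 10^12 / 2^40" | bc    # -> .909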

Wibla
Friend of SPCR
Posts: 779
Joined: Sun Jun 03, 2007 12:03 am
Location: Norway

Post by Wibla » Sat Nov 29, 2008 6:33 pm

Just noticed the pics weren't working (had a little look in bilder.wibla.net-error.log). Boo at people for not saying - if my pictures don't work, just PM me :)

This server now resides below the desk here. It's a tad noisier than I'd like - the Samsung F1s vibrate more than the T166s - but it's definitely not annoying.

Link to gallery thread

bgiddins
Posts: 175
Joined: Sun Sep 14, 2008 1:04 am
Location: Australia

Post by bgiddins » Sat Nov 29, 2008 11:43 pm

Lliam wrote: Forgive the question, as it's for my own education, but is software RAID (mdadm) on Linux a no-go for this array size?
This question was asked back in May... anyone got an answer? What are some practical size upper limits on mdadm arrays?

I'm using mdadm for a RAID 1 array of 2x1TB disks, and I'm planning on eventually having 4 or 6 disks and going RAID 5 or RAID 10, depending on how much space I need. I didn't realise until now that RAID 6 was also an option with mdadm.

Wibla
Friend of SPCR
Posts: 779
Joined: Sun Jun 03, 2007 12:03 am
Location: Norway

Post by Wibla » Sun Nov 30, 2008 4:50 am

bgiddins wrote:
Lliam wrote: Forgive the question, as it's for my own education, but is software RAID (mdadm) on Linux a no-go for this array size?
This question was asked back in May... anyone got an answer? What are some practical size upper limits on mdadm arrays?

I'm using mdadm for a RAID 1 array of 2x1TB disks, and I'm planning on eventually having 4 or 6 disks and going RAID 5 or RAID 10, depending on how much space I need. I didn't realise until now that RAID 6 was also an option with mdadm.
You can use a lot of drives in mdadm with no problems, but be aware of dodgy PCI SATA controllers. I'd also use RAID6 if possible. mdadm supports expanding RAID arrays too :)
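Something like this, roughly (device names made up, and growing a RAID6 wants a reasonably recent kernel and mdadm):

    # Create a 6-drive RAID6 (four drives' worth of usable space):
    mdadm --create /dev/md0 --level=6 --raid-devices=6 /dev/sd[b-g]1

    # Later: add a seventh drive and reshape onto it:
    mdadm --add /dev/md0 /dev/sdh1
    mdadm --grow /dev/md0 --raid-devices=7

    # Keep an eye on the reshape:
    cat /proc/mdstat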
