Advice sought - home NAS configuration


Moderators: NeilBlanchard, Ralf Hutter, sthayashi, Lawrence Lee

Kaizen
Posts: 35
Joined: Sat Nov 08, 2003 1:17 am
Location: USA

Advice sought - home NAS configuration

Post by Kaizen » Thu Jul 28, 2005 1:52 am

Hi All

I want to build a machine to act as a RAID-5 home server and D/L box. I've been rolling this around in my mind again and again, as I was going to go down the Infrant ReadyNAS route or the EPIA path, but having a full server provides so much more flexibility.

I'm trying to keep things as cheap and quiet as possible, but I sadly don't have many spare parts to re-use, so this is what I'm proposing:

Antec SLK3000B
Old WD drive for the OS
Samsung SP120 x 4
Highpoint Rocket 1540
Asus K8S-MX with onboard VGA
AMD Athlon 64 2800
Zalman CNPS7000B-CU
Antec Phantom 500
512Mb RAM

Adopting the mantra of quiet rather than silent, I could have gone down the path of using a Sempron processor, but I've only found a motherboard with onboard VGA for the Socket 754 processor.

You'll note I've not included fans, as I'd welcome any recommendations here too - but remember that I'm in the UK, so product availability is not always the same.

As always - your input is welcome.

Cheers

PS The ANTEC P180 is beautiful but I just can't justify the cost and I'm not sure it'll give me much of a performance return given my requirements.

bobo5195
Posts: 54
Joined: Thu Apr 15, 2004 2:45 pm

Post by bobo5195 » Thu Jul 28, 2005 3:42 am

You forgot the gigabit Ethernet, which is about the only thing that could boost performance of a NAS and would give a large performance improvement. For a NAS, a Pentium 2 with gigabit Ethernet would beat an A64 without it.

If I were you I would consider the cost of a server vs a £30 PC. A NAS box running even a moderately fancy Linux, or something like ClarkConnect, will need at most a P2-class processor and might even get by on a 90MHz Pentium. Compared to a server, that's cheap. A Pentium 90 does not need a Zalman etc. The box won't be quiet anyway as it's got 4 HDs in it, but then you can just whack it in a cupboard or something.

RAID 5 is a bit OTT anyway. If stuff is backed up then you shouldn't need RAID. RAID doesn't stop anything if the PSU blows up and takes all the drives with it, or the RAID card fails, or someone spills beer in the computer. I have had a good old noisy-as-anything WD 60GB running 24/7 for 2 years straight with the occasional reboot, and it's not a problem.

If you still want a high-powered server, a second-hand Athlon XP class system will do - then underclock and add parts to your liking. But there's no way expensive stuff like the Phantom is needed, as the thing is never going to be quiet with 4 HDs hanging around.

nick705
Posts: 1162
Joined: Tue Mar 23, 2004 3:26 pm
Location: UK

Post by nick705 » Thu Jul 28, 2005 4:18 am

bobo5195 wrote:
RAID 5 is a bit OTT anyway. If stuff is backed up then you shouldn't need RAID....
Normally I'd agree with this 100%, but if you're on a budget, backing up hundreds of gigabytes of data on a media server becomes prohibitively expensive and/or impractical. Optical media are far too small and too slow, so unless you want to go for an expensive enterprise-type backup solution, your only other alternative is to back up to hard drives (effectively duplicating your server and doubling the cost).

RAID5 without backup isn't ideal and I certainly wouldn't recommend it for anything critical, but it could be OK for a media collection where a total loss would be painful but not disastrous (although you'd have to be the judge of that yourself of course). You're securing yourself to a good extent against drive failure, which would probably be the biggest single cause of data loss in this type of application, and you're only paying for one extra HDD (or maybe more if you want more redundancy).
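The arithmetic behind that "one extra HDD" is worth seeing once. A toy sketch in Python (the byte values are made up): RAID5 parity is just the XOR of the data blocks in a stripe, so any single lost block can be rebuilt from the survivors.

```python
from functools import reduce

# Toy RAID5 stripe: 3 data blocks + 1 parity block, one byte each.
# The byte values are arbitrary examples.
data = [0b10110010, 0b01101100, 0b11100001]   # "drives" 0-2
parity = reduce(lambda a, b: a ^ b, data)     # "drive" 3 holds the XOR

# Simulate losing drive 1: XOR the surviving data with the parity.
rebuilt = data[0] ^ data[2] ^ parity
assert rebuilt == data[1]   # the lost block comes back exactly
```

The same XOR trick works whichever single drive fails, which is why n drives give you (n-1) drives' worth of space with one-drive redundancy.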

I'd definitely agree that a cheap (secondhand?) PC acting as a NAS box is the way to go though if you're after saving the pennies, although with GigE you might need something with a bit more punch than a P2 to handle the traffic. Having said that, 100Mb ethernet should be more than fast enough for streaming even hi-res video, although GigE would be nicer when you're transferring files around.

Use Linux software RAID5 and you won't even need to buy a controller, although you might need to do a bit of homework to set it up properly... :wink:

xarope
Posts: 97
Joined: Sat May 03, 2003 8:16 pm

Post by xarope » Thu Jul 28, 2005 5:13 am

I think I answered this in another thread - I used an old PC and set up Gentoo with a RAID5 array. Relatively "easy". Actually I started with Debian, then switched to Gentoo (my raid setup survived the "switchover" - my testing to ensure that if HW not related to the disks died, I could migrate to a new linux server with no loss of data - within reason of course).

Ah, here it is:

http://forums.silentpcreview.com/viewto ... id5+debian

Some updates to that thread: as mentioned I've switched to Gentoo, I added another drive so I'm up to 750GB (give or take), and I modded the front fan and added a Fanmate to make it "quiet"ish - I found that with no fan the disks get very hot, but with the fan even at 5V they stay quite cool - amazing what a little breeze does!

Caveat: I still have problems with the power management on the K8T MB. It works intermittently (sometimes it can switch between 800MHz and 2GHz, sometimes it gets the dreaded pending mode stuck error and it sticks at 2GHz), and unfortunately the latest Gentoo kernel (2.6.12-r4) doesn't help, nor does the powernow-k8 patch I found on bugzilla.kernel.org.

nick705
Posts: 1162
Joined: Tue Mar 23, 2004 3:26 pm
Location: UK

Post by nick705 » Thu Jul 28, 2005 5:44 am

xarope wrote:Actually I started with Debian, then switched to Gentoo (my raid setup survived the "switchover" - my testing to ensure that if HW not related to the disks died, I could migrate to a new linux server with no loss of data - within reason of course).
I think that's another big advantage of software RAID over a hardware controller - as long as Linux can see the disk signatures, you can normally transfer the entire array to a completely different setup and it should be recognised just fine (although note my use of "normally" and "should".... :lol: )

If your hardware RAID controller dies, it's a bit of a lottery trying to recover the array unless you can find exactly the same model, chipset, drivers etc...

|Romeo|
Posts: 191
Joined: Tue Jan 18, 2005 6:36 pm
Location: UK

Post by |Romeo| » Thu Jul 28, 2005 7:17 am

nick705 wrote:
xarope wrote:Actually I started with Debian, then switched to Gentoo (my raid setup survived the "switchover" - my testing to ensure that if HW not related to the disks died, I could migrate to a new linux server with no loss of data - within reason of course).
I think that's another big advantage of software RAID over a hardware controller - as long as Linux can see the disk signatures, you can normally transfer the entire array to a completely different setup and it should be recognised just fine (although note my use of "normally" and "should".... :lol: )

If your hardware RAID controller dies, it's a bit of a lottery trying to recover the array unless you can find exactly the same model, chipset, drivers etc...
Not to mention how much a RAID controller with more than 4 ports seems to cost :shock:

Kaizen
Posts: 35
Joined: Sat Nov 08, 2003 1:17 am
Location: USA

Post by Kaizen » Thu Jul 28, 2005 8:16 am

Hi All

As always, thanks for the wealth of feedback! I'm still not sold on the use of software RAID. Compare the performance of the TeraStation with that of the ReadyNAS: the former uses software RAID and the latter is hardware-based.

Although the TeraStation has a GigE interface, reviews consistently mark it down on poor throughput, citing the RAID implementation as the cause.

I'm still sold on RAID 5 in hardware. It's just a bit pricey. :?

teknerd
Posts: 378
Joined: Sat Nov 13, 2004 5:33 pm

Post by teknerd » Thu Jul 28, 2005 8:42 am

The TeraStation doesn't use a good implementation of software RAID, hence the bad performance. However, if you are using a 2800+ A64, your processor will have plenty of overhead with which to do the RAID calculations.
The other option is to get a mobo that supports RAID 5, like the Asus SLI Premium board (the promise controller onboard supports raid 5). Plus it has dual onboard gigabit Ethernet, 8 SATA ports, and plenty of upgrade room for the future (a dual-core server in a couple of years when you want to serve media to multiple rooms at the same time).
Even if you are paying 180 for the board and 150 for the chip (Athlon 3000+ Venice), that's less than a hardware RAID card, and overall you will have better performance (because of the gigabit Ethernet).
Granted, this board doesn't have onboard video, but you can pick up a cheap, fanless video card for 40 bucks (since you don't need powerful video).

bobo5195
Posts: 54
Joined: Thu Apr 15, 2004 2:45 pm

Post by bobo5195 » Thu Jul 28, 2005 9:07 am

Since you're after RAID from a backup point of view, as opposed to performance or instantaneous fault tolerance (i.e. if everything does not need to be backed up as you go), why not just have two disks and mirror everything? If you're going the cheap computer route, get two cheap boxes with the same disk space and just copy between the two. A box goes down and you've always got the other one. The cost of two boxes vs hardware RAID is next to nothing - set them to back each other up at night or something by copying everything from A to B.
Basic cost: about £50 per old computer + HD (say 40p per GB), times two. Silence them by putting the boxes in the attic or something.

I'm also assuming you want a home NAS where it just streams a file over using a network drive, not something fancy like VideoLAN doing on-the-fly video decompression and re-encoding. For a mounted drive your limitations are random access on the disk (user A wants one file, user B wants another), which could make a hard disk slow to a crawl, or your network connection maxing out. You have about 6 megabytes/sec to play with on 100Mbit Ethernet, which is loads for media streaming but could get maxed out if things start going HD whiz-bang.
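For a rough sense of that bandwidth budget, here's a quick sketch (the 0.6 efficiency factor is my guessed allowance for framing/TCP/SMB overhead, not a measurement):

```python
# Usable streaming bandwidth after protocol overhead - ballpark only.
def usable_mb_per_s(link_mbit, efficiency=0.6):
    """Approximate MB/s left after framing/TCP/SMB overhead (guessed)."""
    return link_mbit * efficiency / 8

dvd_mb_per_s = 1.0   # roughly an 8 Mbit/s DVD-quality stream

print(usable_mb_per_s(100))    # 7.5 MB/s on Fast Ethernet
print(usable_mb_per_s(1000))   # 75.0 MB/s on GigE
print(usable_mb_per_s(100) / dvd_mb_per_s)   # ~7 simultaneous DVD streams
```

So Fast Ethernet comfortably covers a few standard-definition streams, but bulk file copies (and anything HD) are where GigE starts to pay off.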

sundevil_1997
Posts: 255
Joined: Wed Apr 06, 2005 2:48 pm
Location: When it gets unbearably hot...you're there.

Post by sundevil_1997 » Thu Jul 28, 2005 9:19 am

teknerd wrote:The other option is to get a mobo that supports RAID 5, like the Asus SLI Premium board (the promise controller onboard supports raid 5).
I thought the A8N had a Silicon Image RAID controller chip. Also, I believe it's been stated that the controller can only do 0/1 RAID in hardware, and otherwise runs the RAID 5 in software....I don't know if that's true or not.

My cousin got that board, and had a RAID 5 going on it. But he had performance problems with it (sometimes the computer would freeze and the HD would go crazy for a few seconds), so he switched to RAID 10. Just yesterday, he had to redo the entire array because something got messed up in it. He discovered in the process that the utilities provided with the RAID controller were abysmally unhelpful. He has cursed it many a time since then... and I've decided against going that route myself.

teknerd
Posts: 378
Joined: Sat Nov 13, 2004 5:33 pm

Post by teknerd » Thu Jul 28, 2005 10:01 am

sundevil_1997 wrote:
teknerd wrote:The other option is to get a mobo that supports RAID 5, like the Asus SLI Premium board (the promise controller onboard supports raid 5).
I thought the A8N had a Silicon Image RAID controller chip. Also, I believe it's been stated that the controller can only do 0/1 RAID in hardware, and otherwise runs the RAID 5 in software....I don't know if that's true or not.
You are correct - I checked the specs and it uses a Silicon Image 3114R controller, thanks for catching that.

Anyway, I believe that it uses a quasi hardware/software setup, but as I pointed out earlier, an Athlon 3000+ running as a server will leave plenty of overhead for the RAID calculations.

As far as problems with the controller go, I can't speak to that as I don't have that board (yet). Of course there is the chance (and likelihood) that your cousin got a bad board, but if it is a problem on another one, you can simply use the software RAID built into most Linux distros. The board's support for 8 SATA drives and 4 IDEs still makes it worth getting (since you won't have to add controller cards).

I would still say get the Asus board for several reasons:
1) Onboard gigabit Ethernet and SATA ports - if you get an older system that does not have these integrated into the chipset, they will be riding on the PCI bus. Both gigabit Ethernet and SATA have the ability to take up basically all of the PCI bus, meaning together you have the potential for bottlenecks and reduced performance.
2) Onboard RAID 5
3) Plenty of SATA and PATA ports
4) Upgrade room for the future (faster procs, dual core, who knows what else AMD will introduce for Socket 939)
5) Better performance per watt than almost any other setup (outside of Pentium M, but that is much more expensive and the boards for them aren't as good)
6) If you get bored of using it as a server, it's still a perfectly good system for anything else
7) Relatively cheap - 180 for the board, 150 for the chip, 100 for 1GB of RAM

sundevil_1997
Posts: 255
Joined: Wed Apr 06, 2005 2:48 pm
Location: When it gets unbearably hot...you're there.

Post by sundevil_1997 » Thu Jul 28, 2005 10:26 am

teknerd wrote: Anyway, I believe that it uses a quasi hardware/software setup, but as I pointed out earlier, an Athlon 3000+ running as a server will leave plenty of overhead for the RAID calculations.
He has an Athlon 64 running in it (dunno what speed), and had the freeze/HD access problem when running RAID 5. When he changed to RAID10, it went away. Admittedly, this freeze happened while playing Battlefield 2, so the processor was by no means freed up. If all it was doing was file serving, it'd probably have plenty of horsepower.

Just a note also....the 8 SATA ports is correct, however I do believe that they are reserved...4 for regular use, 4 for the RAID. Just so you don't get misled about having an 8 HD RAID, or having drives C through J in windows. Anyways, I BELIEVE that's true. Maybe that matters or not.

I won't deny that it's an awesome board just due to what it comes packaged with. It's easily one of ASUS' more popular boards right now. Seems to me a bit of overkill for a file server, but the Gig E and RAID controller are a good combo. Just don't get the SLI version. :lol:

teknerd
Posts: 378
Joined: Sat Nov 13, 2004 5:33 pm

Post by teknerd » Thu Jul 28, 2005 11:30 am

sundevil_1997 wrote:
teknerd wrote: Anyway, I believe that it uses a quasi hardware/software setup, but as I pointed out earlier, an Athlon 3000+ running as a server will leave plenty of overhead for the RAID calculations.
He has an Athlon 64 running in it (dunno what speed), and had the freeze/HD access problem when running RAID 5. When he changed to RAID10, it went away. Admittedly, this freeze happened while playing Battlefield 2, so the processor was by no means freed up. If all it was doing was file serving, it'd probably have plenty of horsepower.
Yah - as I also pointed out, it is possible he simply got a bad board.
sundevil_1997 wrote: Just a note also....the 8 SATA ports is correct, however I do believe that they are reserved...4 for regular use, 4 for the RAID. Just so you don't get misled about having an 8 HD RAID, or having drives C through J in windows. Anyways, I BELIEVE that's true. Maybe that matters or not.
Sort of.
4 of the ports are on the built-in nVidia controller. They provide 3Gb/s and can do RAID 0, 1, or 10.
The other 4 are on the Silicon Image controller. They can do RAID 0, 1, 10, or 5.
Additionally, they can also be run simply as normal SATA controllers, not RAID (so while you can't have an 8-drive array, you can have 8 single drives in Windows).
sundevil_1997 wrote: I won't deny that it's an awesome board just due to what it comes packaged with. It's easily one of ASUS' more popular boards right now. Seems to me a bit of overkill for a file server, but the Gig E and RAID controller are a good combo. Just don't get the SLI version. :lol:
I'd say it's still worth it for the SLI version, even if you don't use the SLI. The A8N-SLI Premium comes with the dual gigabit integrated and the extra RAID controller, something not present on the nForce4 Ultra chipset-based version.

xarope
Posts: 97
Joined: Sat May 03, 2003 8:16 pm

Post by xarope » Thu Jul 28, 2005 8:00 pm

I'd be dubious of any onboard RAID though, whether RAID 1 or RAID 5 (corruption of data is not unheard of in RAID 1 setups). If it dies, it's even harder to get a replacement MB after a couple of years than a new RAID card... hence my decision to go software RAID 5.

I'm happy with the performance of my RAID 5 array - my limitation is the 100Mbit router rather than the software XOR computation (each individual drive benchmarks at 50MB/s, and the array benchmarks at >100MB/s r/w). I'm even thinking of switching to the D-Link 4300 as a gigabit router (since the raid5 box - K8T Neo2 - has gigabit eth, as does my new MSI K8N Neo4), but I'm restraining myself, since I upgraded my D-Link 624 rev B (not the 624+) to firmware 1.32 (check ftp.dlink.de, yes the firmware is in English) and it's now "rock" solid - no more timeouts, slow downloads etc.

Kaizen
Posts: 35
Joined: Sat Nov 08, 2003 1:17 am
Location: USA

Post by Kaizen » Fri Jul 29, 2005 12:00 am

Hi

As usual, thanks for the stellar advice!

I'm becoming increasingly tempted to go down the software RAID route. :wink:

It was suggested that rather than RAID-5 I go for RAID-1, as it's simple and cheap. I'm not too sure about this for the storage space that I want to create. By my calculations, a 400GB RAID-1 volume is far more expensive than a similarly sized RAID-5 one.
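That calculation can be sketched quickly. A back-of-envelope comparison in Python (the per-drive size and price are my own placeholder figures, not quotes):

```python
import math

# Back-of-envelope drive counts for a target amount of usable space.
# Hypothetical 250GB drive at £90 - adjust to real prices.
drive_gb, drive_cost = 250, 90.0

def raid1_drives(target_gb):
    # Mirroring stores every usable GB twice.
    return 2 * math.ceil(target_gb / drive_gb)

def raid5_drives(target_gb):
    # n drives yield (n-1) drives' worth of usable space.
    return math.ceil(target_gb / drive_gb) + 1

target_gb = 750
print(raid1_drives(target_gb) * drive_cost)  # 6 drives -> 540.0
print(raid5_drives(target_gb) * drive_cost)  # 4 drives -> 360.0
```

The gap widens as the target grows: mirroring always costs 100% overhead, while RAID-5's overhead is a single drive regardless of array size.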

Also, going down the path of the cheap computer which is hidden in an attic or basement is certainly the best way of doing things, but it's not available to me at the moment. My home is pretty small, so the server is going to be seen and heard. So a machine which is both quiet and not visually offensive will earn me brownie points.

Finally, a point was made by bobo5195 that this rig will be far from quiet due to the number of drives. I thought that using SpinPoints and a decent case would help a lot. Any thoughts on this?

Thanks again.

xarope
Posts: 97
Joined: Sat May 03, 2003 8:16 pm

Post by xarope » Fri Jul 29, 2005 12:24 am

Well, unfortunately I can't silence the raid5 rig completely, but as I mentioned I replaced the ADDA fan with a Zalman OP1 (the ADDA is an 80x20mm, so I had to find a fan which was just as small, and the OP1 is 80x15mm), undervolted with a Fanmate (I kept turning it lower and lower until I could not hear it, at which point the whoosh from the two 80mm case intake fans was louder). As mentioned in a previous post, without a fan the drives just got way too hot for me to feel comfortable.

In terms of the noise of the drives themselves, with the Icy Dock SATA enclosure http://icydock.com.tw/mb018-sata.htm and the fan disconnected, I can't really hear the drives ticking over when I'm copying to/from them (maybe the fan mount in front muffles it, or maybe the intake case fans are just way too whooshy?). I'd already replaced the ATI 9800 card I had spare in there (previously the loudest thing) and put in a 9250 (which is passively cooled).

It's the typical SPCR vicious cycle: after I was happy with the P180, I started hearing noise from the other computers in the study... sigh. My next to-dos are now:
- replace the stock fan on the Barton 2500 (my wife's computer, which I usually don't do too much to, in case she yells at me when she sees it's in pieces! It's on the far side of the study so I don't usually hear it, but as I said, after the P180...) - maybe I'll just wimp out and Fanmate it as a quick, easy solution (until she yells at me to tell me the computer is melting.. heh)
- undervolt the intake fans on my raid5 box (maybe I'll even take one out); they are SilenX fans on silicone mounts, which I used to think were reasonably quiet... sigh again.
- oh, and just to show how bad this mania is, we bought a new fujitsu p7000 laptop for my wife (coz as every married man knows, the wife has to have the latest cutest laptop :-) ), and the fan in it BUZZES (grrrr I know it's hot in Singapore but come on), so I had to set it up for laptop mode even on AC (i.e. throttle as much as possible), so the fan only buzzes now when she's working on some large powerpoints with video (and I mean large). And now I'm wondering if I could mod that fan...

xarope
Posts: 97
Joined: Sat May 03, 2003 8:16 pm

Post by xarope » Wed Aug 10, 2005 8:21 pm

As an addendum (in case anybody else has the same problem as me), I fixed the PowerNow problem by recompiling a new ACPI DSDT file. See http://gentoo-wiki.com/HOWTO_Fix_Common_ACPI_Problems for more details.

Since I use an MSI K8T, I could download the DSDT.asl for the 6702 and recompile it (I had to change 0x04 to 0x26 on line 31, since there are 38 components in the Package defn, not 4).

I also had to do an echo -n "INITRDDSDT123DSDT123", not just a plain echo, as otherwise it could not find the DSDT module in the initrd file.

cpufreq-set -g ondemand works now: my CPU sits at 800MHz until I do cat /dev/urandom > /dev/null, at which point it goes to 2GHz, then transitions back to 800MHz when I Ctrl-C the cat.

Straker
Posts: 657
Joined: Fri Jul 23, 2004 11:10 pm
Location: AB, Canada
Contact:

Post by Straker » Sat Aug 13, 2005 4:58 pm

just another voice in favor of software RAID.

if you're doing this for a separate cheapo server, there's really no reason to drop $400+ on a decent card. only reasons I can think of for buying a hardware controller are for either a badass workstation (not that the checksumming is particularly taxing for any modern CPU, I just wouldn't be comfortable with sw here), or for a server with multiple gigabit NICs and PCI-X with like a 10-drive array.

i know most consumers using RAID 5 don't really use it for performance, but it still offers the best combined read/write performance of anything but RAID 0, so remember ethernet/PCI will almost always be the bottleneck. that's the one nice thing about the new onboard RAID 5 controllers. I believe AnandTech or SR have already done reviews/benchmarks on these if you wanna check. if it were me I'd at least consider onboard RAID 5 - you could just buy a spare motherboard and still save tons of money compared to PCI-X.

VERiON
Posts: 233
Joined: Wed Aug 11, 2004 5:42 am
Location: EU

Post by VERiON » Sun Aug 14, 2005 3:59 pm

I think even the "slowest" P2/P3 CPU software RAID (Linux) solution will be sufficient for home/small-business use.

For me, the biggest advantage of RAID is that you can get really big continuous disk space. Using RAID-5 can give you extra protection from ONE drive failure (but remember - you are not safe without regular backups anyway).

Two pieces of advice:

1.
Buy the BEST PSU you can get, to minimize the chance that all your RAID drives die when your PSU dies.

2.
Don't forget to buy a good UPS with a USB/serial connection to the server, and with server-side software (a watchdog) to shut the server down in critical situations (i.e. no power for a long time) - it is crucial for RAID servers to empty the write cache and shut down properly.

It is better to spend money on a good PSU and UPS and use software RAID than to buy an expensive hardware RAID card and live without a UPS.

Kaizen
Posts: 35
Joined: Sat Nov 08, 2003 1:17 am
Location: USA

Post by Kaizen » Wed Aug 17, 2005 1:50 am

Hi All

Okay, so I'm going to go ahead with building the NAS system rather than spring for the TeraStation or Infrant options, purely for the functionality gains that I can get with a full server implementation, and use software RAID.

So, cost is pretty important as I'm trying to get as close as possible to what I'd be charged for the pre-packaged solutions, with better functionality and quiet operation. It's not like I'm asking for too much. :lol:

So, this is my proposed bill of materials:
Motherboard: Asus K8N nForce3
Processor: AMD Athlon 64 3000
Memory: Corsair 512MB DDR PC3200
Graphics: XFX GeForce 6200 128MB (AGP) - passively cooled
Storage: Samsung SpinPoint P120 250GB SATA
CPU Cooler: Zalman CNPS7000B-CU CPU Cooler
PSU: Seasonic S12-430
Case: Antec SLK3000B

I have an existing Plextor optical drive and that's pretty much it.

So, any further thoughts or any suggested improvements would be welcomed.

Thanks again

Kaizen
Posts: 35
Joined: Sat Nov 08, 2003 1:17 am
Location: USA

Post by Kaizen » Thu Aug 18, 2005 1:53 am

Hi

Sorry - one other really dumb question. My intention is to use only SATA drives, but is an IDE drive still required for the OS?

Cheers

nick705
Posts: 1162
Joined: Tue Mar 23, 2004 3:26 pm
Location: UK

Post by nick705 » Thu Aug 18, 2005 3:06 am

I don't think there's any way of installing an OS on a software RAID5 array (I could be wrong). You could install Linux on a separate small partition on one of the disks and boot from that, but it seems a bit messy...better to have a separate drive for the OS, so the array can be completely given over to data.

AFAIK the K8N mobo you're considering only has two SATA connectors, incidentally, so a purely SATA RAID5 setup wouldn't be an option if that's your intention. If I were you I'd go for a basic mobo/CPU combination with four SATA connectors (an A64 system is overkill for this purpose anyway), set up the 4x SpinPoints as a RAID5 array for your data, and use a separate PATA drive for the OS. The boot drive needn't be anything spectacular... you could, for example, use one of those old 40GB single-platter Seagate 7200.7s, which are still available for around thirty quid in the UK and are as quiet as a church mouse...

Kaizen
Posts: 35
Joined: Sat Nov 08, 2003 1:17 am
Location: USA

Post by Kaizen » Thu Aug 18, 2005 6:09 am

Hi

Yes, you're completely right about the number of SATA ports. :oops:

My intention is to use 4 SATA drives in a raid configuration as you've described. Probably best to keep the OS on a separate drive but something I wasn't sure about so thanks for that!

What CPU / mobo combination would you recommend as an alternative?

Thanks again!

IsaacKuo
Posts: 1705
Joined: Fri Jan 23, 2004 7:50 am
Location: Baton Rouge, Louisiana

Post by IsaacKuo » Thu Aug 18, 2005 6:39 am

nick705 wrote:I don't think there's any way of installing an OS on a software RAID5 array (I could be wrong). You could install Linux on a separate small partition on one of the disks and boot from that, but it seems a bit messy...better to have a separate drive for the OS, so the array can be completely given over to data.
For the most part, you can't boot off of software RAID5, but you CAN boot off of software RAID1. The actual size of the boot partition can be very tiny; the bulk of the OS can be on the RAID5.

The usual practice, I think, is to partition each disk into one very small partition and the rest is for data. The small partitions are put in RAID1 for /boot, while the large partitions are put in RAID5 for everything else.

Personally, I like to have a somewhat larger "boot" partition for the entire OS. With Debian, I can have a full graphical-environment OS install comfortably fit within 2.5 gigs (with LOTS of room to spare), so it really doesn't eat into the overall data partition capacity very much.

I don't boot to RAID1, but rather just make a spare copy of my OS partition. It's only slightly more work to recover from in case of primary boot drive failure, and it protects against "I fubar'd my OS" mistakes. (RAID1 will mirror any "oops" to every copy instantly.)

xarope
Posts: 97
Joined: Sat May 03, 2003 8:16 pm

Post by xarope » Thu Aug 18, 2005 4:09 pm

The way Linux SW RAID works, the array is first created in degraded mode, then rebuilt with the extra drive(s), apparently for performance reasons.

So people have "exploited" this by creating a degraded RAID1 array (i.e. one drive), switching their boot drive to it, then rebuilding the array after a reboot - and voilà, boot disk on RAID 1.

personally I haven't tried this yet, but there are a lot of very well documented links on the web, e.g. a pretty recent one:

http://www.somedec.com/downloads/howto- ... raid1.html

I've also seen other how-tos where someone has built a 4-disk setup with 2 small partitions in RAID1 (for boot) + 2 small partitions in RAID1 (for swap) + 4 partitions in RAID5 (for the rest) to maximise space usage - i.e. a minimal boot on RAID1, plus say a 1GB RAID1 swap, then RAID5 on the rest of the space, rather than having 2x 40 (or 80) GB drives completely committed to boot+swap.
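For a rough sense of what a layout like that yields, here's a quick sketch (the slice sizes are my own illustrative assumptions, not from the how-to; it also assumes every disk gives up the same boot+swap space so the RAID5 members stay equal-sized):

```python
# Usable space for a 4-disk mixed RAID1/RAID5 layout: small mirrored
# /boot and swap slices, RAID5 across the rest. Sizes are illustrative.
disks, disk_gb = 4, 250.0
boot_gb, swap_gb = 0.5, 1.0    # per-disk slices reserved on every disk
                               # (RAID5 members must be equal-sized)

data_slice = disk_gb - boot_gb - swap_gb   # per-disk RAID5 partition
raid5_usable = (disks - 1) * data_slice    # one slice's worth holds parity
print(raid5_usable)   # 745.5 GB of data space from 1000 GB raw
```

Compare that to dedicating two whole 40GB (or 80GB) drives to boot+swap: the partition scheme above costs only a few GB of the array.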

Kaizen
Posts: 35
Joined: Sat Nov 08, 2003 1:17 am
Location: USA

Post by Kaizen » Thu Aug 18, 2005 11:31 pm

Hi All

Thanks again for the excellent advice.

Any final thoughts on the CPU / mobo combination before I part with my hard-earned money? :)

Cheers
