A quiet but affordable 1.5 Terabyte home file server

stevenblick
Posts: 2
Joined: Sun Mar 19, 2006 11:20 am

A quiet but affordable 1.5 Terabyte home file server

Post by stevenblick » Sun Mar 19, 2006 12:43 pm

OK, so here's the deal: I want to build a home server for media files (photos, music, movies, recorded TV shows, etc.).

Now I need about 1500 GB of space.

I was thinking of 6 x 250 GB SATA II (3 Gb/s, NCQ) 7200 RPM hard drives with 8 MB cache.

For cases, I was thinking about the Antec P180 or something else with as many internal 3.5" bays as possible.

I don't need a great video card (this machine will just serve files and share the internet connection; I have another computer for gaming, home media stuff, etc., and I just need the space), so I was thinking of a motherboard with onboard audio and video.

I want to set up RAID 5, as it seems the safest, most reliable, and most space-efficient option.

I don't know much beyond that, so whatever ideas you guys have would be awesome. I could be wrong about any of the above, so help me out. Thanks, guys.

Beyonder
Posts: 757
Joined: Wed Sep 11, 2002 11:56 pm
Location: EARTH.

Re: A quiet but affordable 1.5 Terabyte home file server

Post by Beyonder » Mon Mar 20, 2006 3:22 pm

stevenblick wrote: I want to build a home server for media files... I need about 1500 GB of space, and I want to set up RAID 5, as it seems the safest, most reliable, and most space-efficient option.
A few things:

1. RAID-5: definitely. For that many drives, and that much disk space, it's probably worthwhile to drop cash on a nice RAID card. I'd consult storagereview.com for that sort of information. You could reduce the need for the card by going with larger drives (400-500GB), but I bet the final cost would be roughly the same since 250GB is the sweet spot for disk space right now.

2. Antec P180: probably not a great case for this task. If the computer is going to be doubling as a workstation, then the Antec fits the bill. Otherwise, there are cheaper/better options.

I'd opt for a good Socket 754 motherboard with PCIe (x16 and x1 slots), onboard video, onboard gigabit Ethernet, onboard sound, and the cheapest Sempron you can find (no point in getting a fast processor, since you're going to be limited by the network). I'd also get a single 1 GB stick of RAM, since memory bandwidth isn't really going to be your bottleneck here.

lenny
Patron of SPCR
Posts: 1642
Joined: Wed May 28, 2003 10:50 am
Location: Somewhere out there

Post by lenny » Mon Mar 20, 2006 4:11 pm

The CM Stacker seems to be a good choice if you need loads of HDD space. Get an extra HDD cage. It may not be the quietest case around, but if it's just a server you're going to hide in a closet, that shouldn't make much of a difference.

If you want a larger case that may be easier to mod for less noise, take a look at the Yeong Yang cube server case.

darkdays
Posts: 4
Joined: Mon Mar 20, 2006 11:12 pm

Post by darkdays » Mon Mar 20, 2006 11:41 pm

I don't see any need for SATA II; the throughput limit has been the drives themselves, not the interface, for quite some time now. Anyway, you're going to use it as a file server, so your limits will be the network interface and other things I can't think of this early in the morning. Do a simple GBs-per-dollar calculation and the cheapest drive wins. That might turn out to be SATA II anyway; I'm not very up to date on where the most storage for the least money is right now.

What OS will you be running?

If I were to put up a 1.5 TB file server, I'd buy the cheapest disks I could find, all matching in size, fill up all the onboard disk controllers (both SATA and IDE, if your mobo has them), and run Linux or BSD with software RAID and Samba.
I haven't looked into it recently, but I think most decently priced RAID-capable disk controllers don't support RAID 5, and even those that do only support up to four drives or so. This might have changed, though.

Also, if this is the way you choose, play plenty with the software RAID so you know what to do when a drive goes down. It's a breeze if you've done it before, but the first time it's easy to panic and accidentally kill the whole thing. That happened to a friend running a 1.5 TB file server six months ago; he's still trying to fill it up again. :-D
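
In case it helps, on Linux md the whole drill boils down to a handful of mdadm commands. A rough sketch, assuming an array at /dev/md0 and a dying member at /dev/sdb1 (the device names are just examples):

Code: Select all

# mark the flaky disk as failed, then pull it out of the array
mdadm --manage /dev/md0 --fail /dev/sdb1
mdadm --manage /dev/md0 --remove /dev/sdb1
# after swapping in the replacement, add it back;
# the rebuild starts automatically
mdadm --manage /dev/md0 --add /dev/sdb1
# watch the rebuild progress
cat /proc/mdstat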


And like someone said, put it in a closet or storage room; this many drives doing seeks simultaneously will inevitably make a LOT of noise.

KnightRT
Posts: 100
Joined: Sun Nov 21, 2004 11:13 pm

Post by KnightRT » Mon Mar 20, 2006 11:46 pm

Personally, I'd use 5@400 GB instead of 7@250 GB. Specifically, a pile of Western Digital WD4000YR server drives.

The cost isn't quite the same; a 400 GB is $0.50/gigabyte, and 250 GB is $0.40/gigabyte.

There are two good reasons:

1. 5 drives offer 30% greater reliability than 7.

Because RAID-5 can only recover from a single drive failure, the more you use, the greater the possibility a second drive will fail while the first is rebuilding.

2. SATA RAID-5 cards in the less obscene price brackets tend to come with 4 or 6 ports.

You need 5 or 7 because the parity information for RAID 5 takes up the equivalent of one additional drive. The 6-port card is $300; if you want more ports, it'll cost double that.

EDIT:

I would not use software RAID.

A number of professional outfits consider the two equivalent from the reliability perspective, but that doesn't apply to weedy Windows 'dynamic disk' configurations. There are just too many things that can and do go wrong, usually at the least opportune time.

It takes a lot more vigilance to properly set up a software RAID array. If you're even remotely new to this flavor of RAID, buy the card.

DI

darkdays
Posts: 4
Joined: Mon Mar 20, 2006 11:12 pm

Post by darkdays » Tue Mar 21, 2006 12:29 am

KnightRT wrote: ...but that doesn't apply to weedy Windows 'dynamic disk' configurations.
Exactly why I said Linux/BSD for that one; I wouldn't trust Windows with a single byte of important stuff. :-D
KnightRT wrote: It takes a lot more vigilance to properly set up a software RAID array. If you're even remotely new to this flavor of RAID, buy the card.
Sure, but I'd prefer to put that $300 toward storage.
If he's new to *nix or BSD it will be quite time-consuming to set things up, but once you have, you've got as good a RAID as hardware. And whenever he feels like it, he can buy another disk controller and set up an additional array in the same case, or build a new huge one with additional disks if they're matching in size.

With the 6-disk $300 card he's stuck: to get more storage he needs to buy yet another card, and the next time another, and so on. Software RAID can handle just about as many disks as you can physically accommodate, and when you run out of connectors you just add another cheap disk controller.

If money's no issue, go buy a dedicated RAID cabinet and just plug in power and network.

KnightRT
Posts: 100
Joined: Sun Nov 21, 2004 11:13 pm

Post by KnightRT » Tue Mar 21, 2006 12:59 am

Given that the storage cost is already about $1000 USD, the additional cost of the card seems inconsequential.

My reservations largely concern software familiarity. Data is too important to trust to unreliable software (Windows), or to software with idiosyncrasies in function or GUI (*nix) that you tend to discover only after something nasty has happened. Playing with it in advance is a start, but I question whether it's worth the learning curve.

I'll concede the superior flexibility of software, although I'd probably just build a second array if I ran out of space.

DI

darkdays
Posts: 4
Joined: Mon Mar 20, 2006 11:12 pm

Post by darkdays » Tue Mar 21, 2006 1:44 am

KnightRT wrote: ...or software with idiosyncrasies in function or GUI (*nix) that you tend to discover only after something nasty has happened.
Agreed on that one... :-D

The problem with *nix is the absurd desire to tweak that inevitably shows up a couple of days after installing it for the first time; the biggest challenge in running a *nix system is leaving the various functions and services alone once they're working smoothly.
The distros should come with a warning: "Perfectionists stay away, you'll spend the rest of your life trying to make things even better." :-P


stevenblick: Basically, if you've got the time and a low price is crucial, look at a *nix solution. It will take weeks and weeks to learn, but once you get the hang of it you'll be able to do a lot of cool stuff, and soon you'll find you want that thing doing a hell of a lot more than serving files. And you get an extremely flexible RAID for $0.

arrikhan
Posts: 79
Joined: Thu Dec 29, 2005 3:51 am
Location: Australia

Post by arrikhan » Tue Mar 21, 2006 3:48 am

Beyonder mentioned storagereview.com ..

I'm interested in setting up a 4 x 250 GB RAID 5 array and came across this discussion in the forums that might be useful.

I'm leaning towards software RAID under Linux, since I don't want to invest in proprietary controller cards that become more of a single point of failure than what you remove by RAIDing to begin with. I'm laying bets that the MTBF of a controller card is shorter than that of the hard drives themselves these days. :)

Secondly, this is purely for storage. I'm not looking for speed here, and I plan to run it on the slowest Pentium that can hack it!

Arrikhan

IsaacKuo
Posts: 1705
Joined: Fri Jan 23, 2004 7:50 am
Location: Baton Rouge, Louisiana

Post by IsaacKuo » Tue Mar 21, 2006 5:30 am

KnightRT wrote: Given that the storage cost is already about $1000 USD, the additional cost of the card seems inconsequential.
"THE" card?

If you're going to use hardware RAID, buy at least one spare RAID card. Otherwise, you've got a single point of failure: lose the card, and you lose your data. When you suffer the failure, you may or may not be able to acquire the exact identical model of RAID card, and without it you may or may not be able to access your data.

Beyonder
Posts: 757
Joined: Wed Sep 11, 2002 11:56 pm
Location: EARTH.

Post by Beyonder » Tue Mar 21, 2006 9:05 am

darkdays wrote: Exactly why I said Linux/BSD for that one; I wouldn't trust Windows with a single byte of important stuff.
At the risk of hijacking the thread, this is complete rubbish. Businesses use Windows OSes as servers all the time, and they work well and safely. I've run a Windows server for several years now (except that mine is a web, database, mail, FTP, and file server), with uptimes exceeding eight months. For a basic home file server, either OS will suffice.

Considering all factors, file servers are pretty brain-dead: nowhere near as complicated as setting up something like a database server, where storage and performance are critical issues.

Beyonder
Posts: 757
Joined: Wed Sep 11, 2002 11:56 pm
Location: EARTH.

Post by Beyonder » Tue Mar 21, 2006 9:26 am

arrikhan wrote: Secondly, this is purely for storage. I'm not looking for speed here, and I plan to run it on the slowest Pentium that can hack it!
The bottleneck with a slow Pentium is going to be the outdated motherboard. I found that out quite quickly when I tried to add a RAID card to an old P2-400 running an Intel 440BX motherboard. Those aging motherboards simply don't have the I/O to function as a modern file server (the PCI bus being a particularly limiting factor), especially when dealing with large media files. Additionally, gigabit Ethernet brings old chips to their knees.

YMMV, of course.

I think the best current solution for a basic file server is:

1. Gigabit Ethernet (preferably with jumbo frame support)
2. PCI Express

...with that combo, you'll get good life out of a file server. Beyond that, a Socket 754 Sempron paired with 512 MB of RAM would probably work out fine. Or, if Intel is the flavor of the day, a Celeron would work, so long as the motherboard supports PCIe and (preferably) has onboard gigabit Ethernet.

quikkie
Posts: 235
Joined: Tue Sep 20, 2005 5:21 am
Location: Soham, UK

Post by quikkie » Tue Mar 21, 2006 11:27 am

Regarding OS choice, go with what you know and are comfortable with. Why? Less faff and fewer headaches trying to get stuff to work.
As for uptimes, I admin several machines running IPSO (Nokia's rebadged version of FreeBSD) that have 500+ days of uptime, and they are running up-to-date versions of FireWall-1.
My Windows box has had to reboot a couple of times this month for patches to take effect (yes, I'm aware I didn't have to reboot, but I got nagged to do so).

As for needing gigabit network interfaces and jumbo frame support, I'd only consider that necessary if you had more than a couple of machines with gigabit interfaces and planned on moving several hundreds of MB between them; otherwise, what's the point? 12 MB/s (i.e., 100 Mbit full duplex) is enough to run four streaming videos, a couple of file transfers, and VoIP sessions without a hitch on a switched network.
I haven't checked what a gigabit switch, network cards, and cabling cost, but that's another expense to bear in mind.
I must really be behind the times to think that 100 Mbit full duplex is actually sufficient at home. Hell, I know of a very large international insurance company that still runs 16 Mbit Token Ring on its LAN.

stevenblick
Posts: 2
Joined: Sun Mar 19, 2006 11:20 am

Software RAID vs. Hardware RAID and OS Choice

Post by stevenblick » Tue Mar 21, 2006 12:31 pm

OK. As I have never used RAID before, I am really unsure whether to use software or hardware RAID. Everyone seems to be a little divided on this matter.

Is there a big difference? Which is riskier for data loss? Any other issues I need to consider when deciding on RAID?

Also, as far as the OS goes, I have never used Linux, so I'm not really sure whether this would be an issue. I am sure I could figure it out.

Would it be a real problem using a Windows OS? Which is the best Windows OS if I were to go that way?

If I used Linux, would it be an issue with other Windows machines on the network?

Thanks again for the help, guys.

- Steve

teknerd
Posts: 378
Joined: Sat Nov 13, 2004 5:33 pm

Re: Software RAID vs. Hardware RAID and OS Choice

Post by teknerd » Tue Mar 21, 2006 12:44 pm

stevenblick wrote: OK. As I have never used RAID before, I am really unsure whether to use software or hardware RAID. Everyone seems to be a little divided on this matter.
Is there a big difference? Which is riskier for data loss? Any other issues I need to consider when deciding on RAID?
If you can afford it, go for hardware RAID: it gives better performance and better reliability (since it isn't dependent on the OS). If the RAID card fails, you can simply buy a new one of the same model and it will automatically restore the array.
stevenblick wrote: Also, as far as the OS goes, I have never used Linux, so I'm not really sure whether this would be an issue. I am sure I could figure it out.

Would it be a real problem using a Windows OS? Which is the best Windows OS if I were to go that way?

If I used Linux, would it be an issue with other Windows machines on the network?
Linux will work fine with Windows machines on the rest of the network (it serves Windows file shares via Samba). Nevertheless, I'd say go with Windows, especially if you aren't familiar with Linux.
Everyone who says you can't run a stable server on Windows is generalizing or being a fanboy. Plenty of large enterprises use Windows servers, and I have run numerous home servers with absolutely no problems. For total stability, I'd recommend something from the Windows 2000 family.
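
That said, if you do end up on Linux, the Samba side is tiny. A minimal smb.conf sketch; the workgroup, share name, path, and user are made up for illustration:

Code: Select all

[global]
   workgroup = HOME
   server string = media file server
   security = user

[media]
   path = /video
   browseable = yes
   read only = no
   # only these accounts may connect to the share
   valid users = steve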

Ultimately, I'd recommend a setup something like this:
CPU: Athlon 64 3000+ Venice (more than enough horsepower for basic stuff, and it can easily be upgraded to dual core if needed)
Mobo: any good Socket 939 board with dual gigabit Ethernet and a PCIe x4 slot (the Asus A8N-SLI Premium comes to mind)
RAM: 1 GB of whatever is good (can be upgraded later)
Video card: something cheap (I'm guessing you'll run this system headless and use remote administration)
RAID card: check out reviews and find a PCIe x4 card that looks good
Hard drives: the Western Digital 400YRs that were recommended earlier. Five of them.

Beyonder
Posts: 757
Joined: Wed Sep 11, 2002 11:56 pm
Location: EARTH.

Post by Beyonder » Tue Mar 21, 2006 1:51 pm

Onboard gigabit NICs are standard fare these days, and I didn't say you "need" jumbo frames; they're a good idea, but definitely not a requirement.

When dealing with 200 GB of video (say, moving it from one computer to another, which doesn't seem like a big deal until you actually have to do it), 100 Mbit is going to take 6+ hours. Even a crappy gigabit network will halve that time. This guy is talking about 1.5 TB; IMO, it'd be nuts to go with 100 Mbit LAN, especially given that the price difference isn't really substantial (onboard NIC: free; switch: slightly more).
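
Back-of-the-envelope, assuming roughly 10 MB/s of real throughput on 100 Mbit and 25 MB/s on a cheap gigabit link (both figures are just illustrative):

Code: Select all

$ echo '200 * 1024 / 10 / 3600' | bc -l   # 200 GB at ~10 MB/s, in hours
5.68888888888888888888
$ echo '200 * 1024 / 25 / 3600' | bc -l   # the same transfer at ~25 MB/s
2.27555555555555555555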

ak
Posts: 34
Joined: Mon Jul 12, 2004 5:16 am

Post by ak » Tue Mar 21, 2006 2:54 pm

I have just purchased most of the bits for my home server. I have gone against popular opinion on the case and chosen the Akasa Eclipse 62, simply because it offers good cooling (120 mm fans), good space (about 7-8 HDD slots plus five 5.25" bays), and size (reviews indicate good internal room to work with). I hope I can make it suitably quiet.

I have gone for 3 x 400 GB SATA drives initially and will add more as money permits. I intend to attach these via a Promise SATA300 TX4 card; the OS will live on an IDE channel. I can't justify a full-blown real RAID card (one that does the XOR calculations), so I intend to use software RAID via Linux. Also, from what I understand, if your RAID card goes you need the same card to access your data, whereas with software RAID it's all in the software configuration, so in theory you can replace the connection to the drives and still access the array.

If you want cheap, I don't know how you can use Windows Server, as that's pretty expensive in itself; in the UK it's in the region of a few hundred GBP. Stability-wise, Windows is stable enough for the enterprise. I have consulted at various investment banks, and although most use a flavour of Unix for their enterprise apps, some do use Windows-based servers. I have chosen Linux myself simply for cost and resource reasons: Linux runs better on less hardware. Those running multiple Windows Servers at home must be spending a fortune on licence fees.

matt_garman
*Lifetime Patron*
Posts: 541
Joined: Sun Jan 04, 2004 11:35 am
Location: Chicago, Ill., USA
Contact:

Post by matt_garman » Tue Mar 21, 2006 3:58 pm

ak wrote: I have gone against popular opinion on the case and chosen the Akasa Eclipse 62, simply because it offers good cooling (120 mm fans), good space (about 7-8 HDD slots plus five 5.25" bays), and size (reviews indicate good internal room to work with).
D'oh, I had hoped to contribute to this thread before you spent money, but for what it's worth... I recommend the Chenbro SR107 case for a home file server. I have the SR10769 in black. If you search the General Gallery of this site, you'll find someone who built two machines using this case.
ak wrote: I hope I can make it suitably quiet.
That issue was surprisingly given little attention in this thread! :)

Fortunately, I have the luxury of putting my file server in the basement. That Chenbro case comes with three high-speed 120 mm fans; I also threw in two ultra-high-speed 92 mm fans, plus the CPU and PSU fans. You can't mention "quiet" and this beast in the same sentence.

Anyway, the reason the SR107 is so nice is that it is well designed for airflow. There are 120 mm fans behind each of the two four-drive compartments, and space for a 92 mm fan in front of each drive compartment. I run my fans at full speed (because the machine is in the basement), but you could easily lower the fan voltages and have a reasonably cool system at non-offensive noise levels.

You mentioned reliability several times in your posts, and I sympathize with that obsession. The fact is, all hard drives eventually die; the strategy is to have them die long after they become obsolete! :)

I don't have hard data, but a lot of second-hand knowledge says that cooling a hard drive by even five degrees increases its life expectancy dramatically. Again, another reason the basement is nice: the hottest drive in my server runs at only 19°C.
ak wrote: I have gone for 3 x 400 GB SATA drives initially and will add more as money permits. I intend to attach these via a Promise SATA300 TX4 card.
This is basically what I did. Which make and model of drive did you choose? I went with the Western Digital "RE2" series (newegg link), which is supposed to be "enterprise class" and built for 24/7 use, etc. All their marketing makes them sound like they were specifically designed for the kind of application you and I want them for.

So far the WD drives have been fine (but I've only had this array for a couple of weeks). If you're not a WD fan, the other drive I would consider for this job is Seagate's NL35 series (newegg link). They're significantly more expensive, though. (But that means they're better, right? :))
ak wrote: I can't justify a full-blown real RAID card (one that does the XOR calculations), so I intend to use software RAID via Linux. Also, from what I understand, if your RAID card goes you need the same card to access your data, whereas with software RAID it's all in the software configuration, so in theory you can replace the connection to the drives and still access the array.
Something I forgot about when I bought my drives: those who are truly paranoid about drive reliability buy each drive from a different vendor. Why? If you get one bad drive, chances are there's a whole batch that's bad. By buying from different vendors (and possibly staggering the purchases over time), you increase your likelihood of getting drives from different batches.

Keep in mind that RAID 5 is no substitute for backups. It protects against single drive failures, but if two drives fail, it's all for naught!

But then again, reliably backing up terabyte-plus systems costs as much as or more than the system itself!

I'm using Linux "md" for my RAID array as well. True hardware RAID cards seem to run from about $300 US (for a four-port SATA card) up to $500 (for an eight-port). The other thing about them is that every one I've seen uses the 64-bit PCI interface; these are cards designed for "true" server hardware. (Granted, most, if not all, can run at half speed in a 32-bit PCI slot, but then you're really starting to overpay.)

Having said that, I'd still like to have a hardware RAID card! I just can't justify the cost.
ak wrote: I have chosen Linux myself simply for cost and resource reasons. Linux runs better on less hardware. Those running multiple Windows Servers at home must be spending a fortune on licence fees.
People who run Windows Server at home probably pirated it. :)

Getting familiar with Linux gets easier every day. And the Linux md tool (mdadm) is almost laughably easy to use. It took me no more than 30 minutes to read the mdadm documentation and create the RAID array. (Actually building the array took longer, but that doesn't require any human interaction.)
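
For the curious, "laughably easy" is no exaggeration. A sketch of the whole process, assuming four drives at /dev/sda1 through /dev/sdd1 and an ext3 filesystem (adjust devices and counts to your own build):

Code: Select all

# build a RAID 5 array from four partitions
# (one drive's worth of space goes to parity)
mdadm --create /dev/md0 --level=5 --raid-devices=4 \
    /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
# the initial build runs in the background; watch it here
cat /proc/mdstat
# then put a filesystem on it and mount
mkfs.ext3 /dev/md0
mount /dev/md0 /video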

Hope some of this helps!

One last thing: for those of you familiar with Unix's "df" command, here's the relevant part of my "df -h" output:

Code: Select all

$ df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/md0              1.1T  279G  839G  25% /video
Regards,
Matt

nick705
Posts: 1162
Joined: Tue Mar 23, 2004 3:26 pm
Location: UK

Post by nick705 » Tue Mar 21, 2006 4:37 pm

matt_garman wrote: Keep in mind that RAID 5 is no substitute for backups. It protects against single drive failures, but if two drives fail, it's all for naught!

But then again, reliably backing up terabyte-plus systems costs as much as or more than the system itself!
Hmm... you'd have to be pretty unlucky for two or more drives to fail simultaneously. Hopefully you'd be alerted to the failure of a single drive and be able to replace it and rebuild the array before the law of averages caught up with you and another drive went pop.

I absolutely agree that RAID 5 (or RAID-anything) is no substitute for proper backups, but as you rightly point out, securely backing up terabyte-plus systems is horribly expensive whichever method you use, so with that much data it's probably the most practical option for home users to get at least some degree of security. I suppose media files can mostly be re-ripped or... erm... "re-acquired" somehow if the worst happens...

matt_garman
*Lifetime Patron*
Posts: 541
Joined: Sun Jan 04, 2004 11:35 am
Location: Chicago, Ill., USA
Contact:

Post by matt_garman » Tue Mar 21, 2006 6:13 pm

nick705 wrote: Hmm... you'd have to be pretty unlucky for two or more drives to fail simultaneously. Hopefully you'd be alerted to the failure of a single drive and be able to replace it and rebuild the array before the law of averages caught up with you and another drive went pop.
Indeed. After posting that, I realized I forgot to mention another incredibly useful tool: smartd, part of the smartmontools package. It's a small daemon that continuously checks your drives' SMART attributes. You can define actions for certain events, from emailing yourself all the way to running user-defined programs. (And like many Linux/open-source applications, it's totally configurable and can probably do just about anything you can think of. I haven't taken the time to get my smartd completely configured, but it's on my todo list.)
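
For anyone who wants to try it, a minimal /etc/smartd.conf sketch; the device names and e-mail address are placeholders:

Code: Select all

# -a enables all SMART monitoring, -m mails on trouble,
# -s schedules a short self-test every night at 2am
/dev/sda -a -m admin@example.com -s S/../.././02
/dev/sdb -a -m admin@example.com -s S/../.././02

With that in place, start the smartd daemon (most distros ship an init script for it) and it does the rest.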

Fortunately, most modern drives support SMART (Self-Monitoring, Analysis and Reporting Technology). The intent is that a drive should be able to detect when something bad is about to happen.

So, yeah, between RAID 5 and proper use of smartd, I agree: you'd have to be really unlucky to have two drives die at the same time! (I just hope Murphy isn't reading this! :))

Erssa
Posts: 1421
Joined: Sat Mar 12, 2005 9:26 pm
Location: Finland

Post by Erssa » Tue Mar 21, 2006 9:10 pm

KnightRT wrote: The cost isn't quite the same; a 400 GB is $0.50/gigabyte, and 250 GB is $0.40/gigabyte. ... 1. 5 drives offer 30% greater reliability than 7. ... 2. SATA RAID-5 cards in the less obscene price brackets tend to come with 4 or 6 ports.
Where is that 30% coming from? If you know something about probability, 30% is not the correct number. We are talking about the increased risk of a second drive breaking among the remaining three vs. the remaining five while the array rebuilds. You can't just compare 4 drives to 6 and conclude that, because one array has more drives, it is 30% less reliable. These things should be computed with a binomial formula, and because the risk of any one drive failing is low to begin with, the actual difference in reliability is pretty small.
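
To put rough numbers on it: suppose each surviving drive independently has a 2% chance of dying during the rebuild window (a made-up figure, purely for illustration). The chance of losing the array is then 1 - (1 - p)^n for n surviving drives:

Code: Select all

$ echo '1 - 0.98^3' | bc -l   # 4-drive array, 3 survivors
.058808
$ echo '1 - 0.98^5' | bc -l   # 6-drive array, 5 survivors
.0960792032

The bigger array does fare worse, but the absolute risk stays small either way.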

What you said about the RAID-5 controller card is correct: more ports = more money.
But on HD pricing there's some difference. Prices from Newegg:

400 GB WD: $195 x 5 (because of parity data) = $975
vs.
250 GB Samsung: $90 x 8 = $720

You would still build an (effectively) 1.5 TB array with the Samsungs and still have one extra drive in case one breaks, so you can immediately replace a failed drive and still end up about $250 cheaper. Spend it on a better controller.

Or, if 1.2 TB of space is enough, the math goes:

4 x $195 = $780 vs. 7 x $90 = $630; you get one extra Samsung drive in case one breaks, and end up $150 cheaper before a controller is bought. Even if you picked the 6-port $300 RAID-5 controller for the Samsungs, you would still come out cheaper and with a spare drive, since the cheapest 4-port controller is $224 at Newegg.

If the OP goes with software RAID, it's clear which alternative is cheaper.

But now the real question is: would 7 Samsung P120s at 21 dBA/1m be quieter than 5 WDs at 25 dBA/1m? Or would 6 Samsungs be quieter than 4 WDs? A question I can't answer; I think you'd need empirical evidence to settle it.

KnightRT
Posts: 100
Joined: Sun Nov 21, 2004 11:13 pm

Post by KnightRT » Tue Mar 21, 2006 9:48 pm

Nick:

The likelihood of two failing at once is greater than you would think, particularly if you were to buy all the drives at once from the same place. Wikipedia calls this the "bathtub effect": the gist is that drives built sequentially often fail around the same time.

Erssa:

The probability figure was an estimate; my point was that the failure rate increases in proportion to the number of drives. I suspect it's actually worse than that, judging by the MTBF figures: WD rates the 4000YR at 1.2 million hours, while the consumer 250 GB drives are rated at about half that.

Ideally it'd be a RAID 1 array with two 1.5 TB drives.

DI

nick705
Posts: 1162
Joined: Tue Mar 23, 2004 3:26 pm
Location: UK

Post by nick705 » Wed Mar 22, 2006 1:39 am

KnightRT wrote: The likelihood of two failing at once is greater than you would think, particularly if you were to buy all the drives at once from the same place. Wikipedia calls this the "bathtub effect": the gist is that drives built sequentially often fail around the same time.
Well... yes, Matt did in fact mention this above. I wasn't claiming that the probability is zero, and you could minimise the risk by staggering the purchases as he suggested, to guard against QC glitches or the possibility that some halfwit dockside forklift driver dropped the pallet containing all your drives. Even that wouldn't protect you, though, from something like a voltage spike killing all your drives, a fire burning down your house, an asteroid impact wiping out a hemisphere...

The point I was really making is that it's a cost/benefit calculation like everything else... if your server contained all your business data and you'd go bankrupt if you lost it, obviously you'd take whatever measures were necessary to secure it. Enterprise-level backup solutions aren't really an option for home users, though, at least with a terabyte or more of data. RAID 5 clearly isn't perfect, but it's a lot better than nothing. Granted, RAID 1 is more secure, but if you have 1 TB of storage in RAID 1, you might as well go the whole nine yards and build a completely independent second server for backups, at comparatively little extra cost.

We're "only" talking about a media collection here, which is ultimately replaceable. I think it's possible sometimes to lose a sense of perspective, and if you get too neurotic about the probability of losing data, you might also lose some of the pleasure you presumably get from having the collection in the first place. It's obviously a very subjective thing, though, and I suppose you just need to do whatever you feel most comfortable with...

JJ
Posts: 233
Joined: Sat Jul 10, 2004 12:24 pm
Location: US

Re: A quiet but affordable 1.5 Terabyte home file server

Post by JJ » Wed Mar 22, 2006 3:26 pm

Consider a NAS instead of rolling your own. Check out Infrant and Thecus, both of whom make four-drive NAS units; Thecus is also coming out with a five-drive model. Thecus NAS units are based on a storage platform that Intel is offering, and I think we'll soon see a lot more consumer storage products built on it.

Aim for the price/capacity sweet spot when buying hard drives; right now that's 300 and 320 GB drives. I have four Western Digital WD3200JD drives in an Infrant NAS, and I've been surprised at how quiet and cool they run. Four 320 GB drives in RAID 5 give me about 872 GB of storage.

Whatever you decide, this isn't likely to be a quiet system, and I think you'd waste a lot of money and effort trying to make it one. Multiple drives can be noisy. More importantly, they require adequate cooling, or they'll die an early death. There's just no getting around this.

sonofdbn
Posts: 62
Joined: Tue Aug 12, 2003 8:57 pm

unRAID

Post by sonofdbn » Fri Mar 24, 2006 4:05 am

It's worth taking a look at unRAID, which offers these features:
- Storage = N data drives + 1 parity drive
- Drives can be of different sizes (as long as the parity drive is at least as large as the biggest data drive)
- If one drive fails, you can rebuild it
- If two drives fail, the data on the remaining drives is still accessible (though in ReiserFS)
- You can easily add drives to the array

Downsides:
- no security (though a new release might add it)
- no spanning across drives
- needs gigabit Ethernet for best results
- currently supports only PATA drives (a new release should add SATA)

I'm happily using one of these systems. You don't have to buy the whole caboodle on the front page (I went with the starter kit). There's a thread about it in the HTPC forum at avsforum.

lenny
Patron of SPCR
Posts: 1642
Joined: Wed May 28, 2003 10:50 am
Location: Somewhere out there

Re: unRAID

Post by lenny » Fri Mar 24, 2006 10:23 pm

sonofdbn wrote: It's worth taking a look at unRAID, which offers these features:
That's the CM Stacker case! :-)

Interesting product. It should be sufficient for two simultaneous users, built from pretty low-end parts. I wonder if the software is open source...
