2TB storage in RAID 5 mode...

Silencing hard drives, optical drives and other storage devices

Moderators: NeilBlanchard, Ralf Hutter, sthayashi, Lawrence Lee

LAThierry
Posts: 95
Joined: Tue Nov 14, 2006 4:15 pm
Location: Los Angeles, California

Post by LAThierry » Wed Dec 05, 2007 1:57 pm

My somewhat rhetorical complaint was directed more at those who answered than at the original poster. As you mentioned, the original poster eventually did help as the discussion grew in the right direction; I just wish it had happened sooner in the conversation for everyone's sake. There should be some kind of automated forum reply or sticky whenever someone brings up a RAID question. Maybe something like:

"
We detected that you brought up the possibility of setting up a RAID array. To reduce the number of answers outside your context, please help us focus the discussion in the right direction by providing the following limiting factors:

(a) Your budget
(b) The maximum number of HDs your system can handle (due to case, motherboard or dedicated RAID card)
(c) The desired size of your RAID

Also be reminded that:
- RAID is no substitute for backup
- RAID protects against a HD failure
- You still need backup
- RAID won't protect your data from data corruption, deletion (accidental or malicious), viruses or total system meltdown
- YOU STILL NEED BACKUP!
"

sailorman
Posts: 34
Joined: Tue Nov 06, 2007 2:31 pm
Location: Germany

Post by sailorman » Wed Dec 05, 2007 4:12 pm

First of all I'd like to thank you all very much for your replies...

Finally, it was a good opportunity for a detailed conversation about RAID matters, even if the conversation went far away from my first post. But I think it was very, very useful, not only for me but also for anyone else who reads these posts.

Personally, I learnt some more, like the RAID 6 array. I have read about this somewhere but I don't remember exactly, so please let me know some more (just for general knowledge, since my M/B does not support this kind of RAID).

By the way, I recently read that Microsoft has developed a new OS (named Windows Home Server or something like this) based on Windows Server 2003. I didn't find many details about it, but it looks like a good solution for my needs; it also provides the capability to add/remove drives without problems, even drives of different sizes. The final storage capacity is a little bit worse than RAID 5 (this is exactly what I read...). If someone knows more, please let me know.

For the record, WD provides a 5-year warranty on its HDDs that are destined for RAID usage.

Back to my first post: I have to point out that I never got a straight answer... :D since my question was which specific drives are better to use.

After all this discussion here, and after many hours of thinking(!!), I still haven't decided on HDDs. I've got all the parts of the HTPC in my hands except (...guess...) the HDDs for the RAID array!!! Waiting for... I don't know what...!

Given my previous experience with a RAID 5 array (4x WD5000ABYS) at my office, I thought it would be very easy to build a system like this at home...
But after the whole discussion here I'm not sure anymore. I've heard many ideas, and I really thank you for them.

Until yesterday I had decided to use 4x Samsung HD103UJ in a RAID 5 array (using the onboard RAID controller, Intel ICH9R), since I found them at a very good price (~209 euro each) and I could also change them into a RAID 10 array if something went wrong. But today I'm not so sure. I saw some posts suggesting RAID 6 (?) with 5 or more drives, dedicated SATA/RAID controllers costing more than the 4 drives, and so on...

Guys, I'm a home user, not Deutsche Bank (or whatever)...!
Crazy enough to build a €3,500+ PC(!!), but still a home user... :)

Some answers from a few posts I still remember...
I think it's better to use an independent drive for the OS and applications, and the RAID array for file storage. I guess it's safer, and it makes better sense to me. In this case I would have 5 SATA HDDs in my system (4 for RAID and 1 for the OS), and the last SATA port of the mobo will be used by the BR/HD-DVD drive (LG GGW-20L).
There are still 2 eSATA ports on the back panel, with RAID 0 or 1 capability via another onboard controller (the JMicron JMB363 PATA/SATA controller), but I'm not sure if I could somehow use them for internal drives.

This PC will be an "all-in-one" PC for me: file server (for my music/video files), HTPC (needs to be silent), game PC (for my son), and a powerful machine (for video editing and other applications). So I don't have the money (or the space) to use a separate PC as a server...

Well, any more ideas might be helpful, since the ordering of the HDDs is a matter of, let's say, a week..!

Thank you all.

Terje
Posts: 86
Joined: Wed Sep 08, 2004 4:50 am

Post by Terje » Thu Dec 06, 2007 7:50 am

sailorman wrote:
Personally, I learnt some more, like the RAID 6 array. I have read about this somewhere but I don't remember exactly, so please let me know some more (just for general knowledge, since my M/B does not support this kind of RAID).
Very simply explained, RAID 6 is RAID 5 with a second, independent parity block, stored on a different drive in each stripe.

This allows the array to survive 2 disks failing, vs. 1 with a normal RAID 5.

That said, it also introduces extra complexity in the RAID software, so in theory you have a bigger chance of bugs in the RAID controller software that mess up the entire array (very rare on quality components, but unfortunately I have seen it happen a few times, with very, very nasty results).

An earlier post in this thread said that it is very common to have a second drive fail while you rebuild the RAID 5, causing full data loss.

I do not think this is entirely accurate. In my experience it is very rare to get a second disk _failing_ during data recovery.

Unfortunately, what is a bit more common (but still fairly rare) is that by the time an array has run long enough for a disk to fail, you might have bad blocks on other drives.

That is, not actually failed disks, just some bad blocks here and there. These bad blocks might not have been detected before, as they belong to files that have not changed or been accessed for a long time.

After a disk failure, as the array starts rebuilding, these bad blocks will eventually be read, and the RAID system will fail to recover that part of the array, as it is missing data from 2 disks for those blocks.

The effect is of course very similar to what the earlier poster stated. In the worst case, the RAID software hangs when this happens, as you might get command timeouts, and nothing more will happen.

In the best case, the RAID recovery continues, but you will be left with empty blocks.

Quality RAID cards have either automatic or manually triggered background scans of the data ("scrubbing") that test the array for bad blocks, so you can catch disks with bad blocks before a disk failure costs you data.

RAID 6 reduces the probability of data loss here significantly, as you essentially need a failed disk plus 2 more disks going bad on the exact same blocks before you lose data.
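To put rough numbers on this: the chance of hitting at least one unrecoverable read error (URE) during a rebuild grows with the amount of data the rebuild has to read. A quick sketch, assuming independent errors and the 10^-14 errors-per-bit rate that consumer-drive datasheets of this era typically quote (that rate is an assumption for illustration, not a figure from this thread):

```python
# Rough chance of at least one unrecoverable read error (URE) during a
# RAID 5 rebuild, which must read every surviving drive end to end.
# Assumes independent errors; the 1e-14 per-bit URE rate is a typical
# consumer-drive datasheet figure, used here purely for illustration.

def rebuild_ure_probability(drive_bytes, surviving_drives, ure_per_bit=1e-14):
    bits_read = drive_bytes * 8 * surviving_drives
    p_clean = (1.0 - ure_per_bit) ** bits_read   # every bit reads back fine
    return 1.0 - p_clean

# A 4x 750GB RAID 5 loses one drive: the rebuild reads the 3 survivors.
p = rebuild_ure_probability(750e9, surviving_drives=3)
print(f"Chance the rebuild trips over a bad block: {p:.0%}")
```

Under these assumptions the 4x 750GB case comes out in the mid-teens of percent, which is exactly why background scrubbing (and RAID 6's second parity) matters as arrays grow.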

Terje
Posts: 86
Joined: Wed Sep 08, 2004 4:50 am

Post by Terje » Thu Dec 06, 2007 8:01 am

sailorman wrote: Well, any more ideas might be helpful, since the ordering of the HDDs is a matter of, let's say, a week..!
It's of course more expensive, but in your case, where you want this to be your file server as well (with potentially important data, I assume), I would probably consider 2 RAID 1s made of 4x 1TB WD drives.

It gives you better data safety than RAID 5, and while it is quite a bit more expensive than 4x 500GB drives (they'll practically be throwing those at you for free soon :)), you actually get 2TB of usable space vs. 1.5TB from a RAID 5 of the 500GB drives.

If this is out of budget, I would have no concerns about using the 500GB WDs. The WD5000KS drives I have are as silent as things get, performance is very good, and they are dirt cheap now as well. I have had one out of 6 such disks fail after 16 months, which is neither good nor bad as I see it. Just in the middle.

andyb
Patron of SPCR
Posts: 3307
Joined: Wed Dec 15, 2004 12:00 pm
Location: Essex, England

Post by andyb » Thu Dec 06, 2007 8:48 am

Basically, when it comes to RAID, there are only a couple of things you need to know.

How much storage you get from how many drives.
How much redundancy you will get.

Here are a couple of examples for getting 2TB of storage, with the maximum number of drive failures before you actually lose data.

RAID-1, 4x 1TB drives arranged as 2x 1TB mirrored pairs: you would then see 2 separate drives, and you could have 1 failure per array before losing data. The main benefit here is that you could back up really important stuff from one array to the other; you would in essence have 4 copies.

RAID-10, a similar setup with a speed benefit, but a slightly higher risk.

RAID-5, 4x 1TB drives: you can afford the loss of one drive, and all 3TB would be presented as a single drive, which makes things easy. One benefit is that you only lose 1TB (25%) to redundancy; the drawback is that you have 4 drives and only 1 needs to fail before you panic. If you were using 5 drives you would run a higher risk, because you have more drives that can fail, but you would only lose 20% to redundancy.

RAID-5, 3x 1TB drives: just like the example above, but you lose more storage to redundancy; however, your chances of data loss are lower, 1 out of 3 vs 1 out of 4.

RAID-6, 4x 1TB drives: just like the RAID-5 example above with 4 drives, with one major difference: you would lose 50% to redundancy, and therefore only have 2TB of space. This gives you the same risk as the RAID-1 example, except that you would only have 1 array (and ideally 1 drive letter), which makes data management easier.

I think that about does it for examples; you can figure out your chances with other configurations and drive capacities yourself.

I would still recommend software over hardware RAID, for 2 reasons: the cost, and the ability to move the array to another motherboard regardless of make, model or chipset (though you would have to use the same OS version).

I have no idea whether you can do RAID-6 in software or not, and it is really overkill anyway. RAID should really be considered a useful way of making lots of smaller drives into one bigger drive while gaining some redundancy - it should not be considered a backup.
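The examples above boil down to two numbers per layout: usable space and tolerated failures. A small sketch, hard-coding the example configurations from this post (simplified: the RAID-1/10 figures are the guaranteed worst case, one failure):

```python
# Usable capacity and guaranteed tolerated drive failures for the example
# layouts above. Simplified: RAID-1/10 can sometimes survive more than one
# failure if the failures land in different mirror pairs.

def raid_summary(level, drives, size_tb=1.0):
    if level in ("RAID-1", "RAID-10"):      # half the drives are mirrors
        return drives // 2 * size_tb, 1     # guaranteed to survive 1 failure
    if level == "RAID-5":                   # one drive's worth of parity
        return (drives - 1) * size_tb, 1
    if level == "RAID-6":                   # two drives' worth of parity
        return (drives - 2) * size_tb, 2
    raise ValueError(f"unknown level: {level}")

for level, n in [("RAID-1", 4), ("RAID-10", 4), ("RAID-5", 4),
                 ("RAID-5", 3), ("RAID-6", 4)]:
    usable, failures = raid_summary(level, n)
    print(f"{level}, {n}x 1TB: {usable:.0f}TB usable, survives {failures} failure(s)")
```

This reproduces the figures above: 2TB for the mirrored layouts, 3TB for 4-drive RAID-5, 2TB for 3-drive RAID-5 and for 4-drive RAID-6.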


Andy

sailorman
Posts: 34
Joined: Tue Nov 06, 2007 2:31 pm
Location: Germany

Post by sailorman » Thu Dec 06, 2007 1:02 pm

Andy, thanks for the useful, detailed information. As I see it, there is no reason for RAID 6, and it's too risky to go for RAID 10, so (as I had thought) 4x 1TB seems to be the better solution... starting with RAID 5, and if something goes wrong I would be able to change to RAID 1... (of course, before loading my entire data onto it...)

andyb wrote: I would still recommend software over hardware, for 2 reasons, the cost, the ability to move the array to another motherboard regardless of make, model or chipset, but you would have to use the same OS version.

Andy

Could you explain a bit more what you mean by this?
Do you mean using Windows to build the RAID array? How could I do that without using the motherboard's onboard controller?

thx

andyb
Patron of SPCR
Posts: 3307
Joined: Wed Dec 15, 2004 12:00 pm
Location: Essex, England

Post by andyb » Thu Dec 06, 2007 4:47 pm

If you were to use Windows 2000/2003/2008 (when released), or various flavours of Linux, they have the ability to do RAID-5 in "software"; basically this means that the CPU takes over the XOR function, amongst others, and this hits the CPU hard.

"HARD" by today's CPUs is a walk in the park. Take for instance a server using a Socket 754 Athlon 64 3200+: a chip that spends most of its time at 800MHz but can and will bump up to 2.0GHz when needed. With 1GB of single-channel (64-bit) RAM and 4x 500GB Samsung HD501 drives running in software RAID, it has no issues writing data from a laptop HDD (over gigabit ethernet) at 26MB/s. The machine in question is running Windows 2003 Server edition; as far as I can tell, this is faster than in W2K Server, but slower than Linux.

As far as my testing goes: once the array was built, I tried installing the same version of W2K3 Server on a desktop PC and then attaching the drives........ it worked; I could "import" the whole array to a totally different "server". I then swapped a couple of cables and it still worked. I disconnected a drive and it gave me a suitable warning; I shut down, plugged the same drive back in (nuked), told it to use the drive, and it rebuilt. How much confidence do you need? I trust this; I would use it myself.

The main reason I like this is that, so long as you use the same OS (in my case W2K3 Server, service pack zero (0, nada, nil)), moving the whole 4-drive RAID-5 array from one piece of hardware to another was a piece of cake. This alone means that the risk of a company going bust and eBay failing to provide an identical RAID card is not a concern; likewise if the mobo dies..... the simple answer that you already know: it's portable, even if you get the cables mixed up. The bottom line is that you need the same OS, all of the mobo drivers installed, and a stable system....... done.

Did I mention NOT having to spend £300 on a RAID card?
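The XOR work that software RAID-5 pushes onto the CPU is conceptually simple; a toy sketch (one tiny "block" per drive, nothing like a real driver's stripe handling):

```python
from functools import reduce

# Toy illustration of RAID-5 parity: the parity block is the XOR of the
# data blocks, and any single missing block can be rebuilt by XOR-ing
# the parity block with the surviving data blocks.

def xor_blocks(*blocks):
    """XOR equal-length byte blocks together, byte by byte."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

d0, d1, d2 = b"spin", b"down", b"fans"   # data blocks on three drives
parity = xor_blocks(d0, d1, d2)          # parity block on a fourth drive

# The drive holding d1 dies: rebuild its block from parity + survivors.
rebuilt = xor_blocks(parity, d0, d2)
assert rebuilt == d1
print("rebuilt block:", rebuilt)          # rebuilt block: b'down'
```

A real implementation does this per stripe across the whole array, which is exactly the per-byte work that lands on the CPU instead of a dedicated XOR engine on a hardware card.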

Andy

sailorman
Posts: 34
Joined: Tue Nov 06, 2007 2:31 pm
Location: Germany

Post by sailorman » Thu Dec 06, 2007 4:58 pm

I know what you mean...

It's the same as what I read a few weeks ago... an array of hard drives based on MS Windows Home Server (the new release) or MS Server 2003.

I think it's not exactly a RAID array, and the final available storage space is less than with RAID 5, but not half, as in RAID 1.

Anyway, I will think about it over the next few days, because right now I'm facing serious problems with case space... my video editing card (Canopus) doesn't fit in the new case...! shit :evil:

By the way, is it better to use a 64-bit OS, or is 32-bit OK? What more would I get from a 64-bit OS (Vista or XP), apart from it recognising 4 modules of RAM (DDR3)?

thx anyway

andyb
Patron of SPCR
Posts: 3307
Joined: Wed Dec 15, 2004 12:00 pm
Location: Essex, England

Post by andyb » Thu Dec 06, 2007 6:18 pm

Cancelled, couldn't write a thing :( and I can't even remember what I was going to write either.....


Andy
Last edited by andyb on Fri Dec 07, 2007 8:42 am, edited 1 time in total.

seraphyn
Posts: 322
Joined: Wed Nov 28, 2007 1:26 pm
Location: Netherlands

Post by seraphyn » Thu Dec 06, 2007 6:50 pm

andyb wrote: shit, this is nuts, taking sleeping pills, night
Insomniac as well? :P

Terje
Posts: 86
Joined: Wed Sep 08, 2004 4:50 am

Post by Terje » Thu Dec 06, 2007 10:47 pm

I agree with Andy: a software RAID 5 is likely to be as stable as a cheap emulated RAID 5 card.

It will not be as robust as a proper RAID 5 card, as that is in reality a small computer on an expansion card, protected from software bugs on the main machine.

That is, if your OS crashes due to memory corruption (be it a bug in the OS or faulty physical memory), with bad luck you could corrupt the RAID 5 code in memory and destroy the array if you have a software/BIOS/driver-emulated RAID.

A proper RAID 5 card, however, will not be affected by this (unless it causes a power-off). You will especially notice this if you do a lot of "weird stuff" on your PC and so get frequent hangs. I do not do a lot of weird stuff myself (I generally have months between reboots of my PC), but I occasionally do some development or testing that causes trouble.

I used to run both BIOS-type RAID (in a RAID 1 config) and software RAID in Windows XP (hacked), and was quite annoyed, as I usually had to do a revalidation or rebuild of the array after such a crash. This has never happened to me on the Areca. The only times I have had to rebuild were when testing by pulling some cables, or when I actually had a disk fail a couple of weeks ago.

Obviously, you pay for that extra security and convenience.

Software RAID is only part of the server releases of Windows. There are some "hacks" out there for XP that allow you to enable it by modifying some strings in the volume manager binary, but it will probably give you hell to recover if you have a crash and need to use recovery CDs, or if you install a hotfix that replaces the wrong binary :)

sailorman
Posts: 34
Joined: Tue Nov 06, 2007 2:31 pm
Location: Germany

Post by sailorman » Sun Jan 13, 2008 3:37 am

hi again...

Finally I bought four WD7500AAYS drives (RAID edition, 5-year warranty) in order to build my RAID 5 array, but...

I'm facing a big problem building the RAID array in my system...

As I said before, my motherboard is an ASUS P5E3 Deluxe/WiFi@n with the Intel ICH9R RAID controller, the CPU is a Core 2 Duo E6850, and I use Windows Vista Ultimate 64-bit installed on a separate drive (WD5000ABYS).

I set the drives to [RAID] in the mobo's BIOS, pressed Ctrl+I to enter the RAID screen, and assigned the 4 drives (WD7500AAYS) to a RAID array, but... Windows does NOT see it at all!!

I installed all the latest updates and drivers, and even did a fresh installation of Windows from scratch, but NOTHING... Windows still cannot recognise the RAID array, nor even the 4 drives of the array!!

(PS: the 4 WD7500AAYS drives are only recognised by the OS when I set them to [IDE] in the BIOS.)

Does anybody know something more about this problem??? A trick or something that I missed here??????

It's the first time I'm facing a problem like this; I have used a RAID array before, but in a Windows XP environment...

Please let me know ASAP, because it has been driving me crazy for 3 days now...

thx

dhanson865
Posts: 2198
Joined: Thu Feb 10, 2005 11:20 am
Location: TN, USA

Post by dhanson865 » Sun Jan 13, 2008 9:41 am

andyb wrote: Basically, when it comes to RAID, there are only a couple of things you need to know.

How much storage you get from how many drives.
How much redundancy you will get.

Here are a couple of examples for getting 2TB of storage, with the maximum number of drive failures before you actually lose data.

RAID-1, 4x 1TB drives arranged as 2x 1TB mirrored pairs: you would then see 2 separate drives, and you could have 1 failure per array before losing data. The main benefit here is that you could back up really important stuff from one array to the other; you would in essence have 4 copies.

RAID-10, a similar setup with a speed benefit, but a slightly higher risk.

RAID-5, 4x 1TB drives: you can afford the loss of one drive, and all 3TB would be presented as a single drive, which makes things easy. One benefit is that you only lose 1TB (25%) to redundancy; the drawback is that you have 4 drives and only 1 needs to fail before you panic. If you were using 5 drives you would run a higher risk, because you have more drives that can fail, but you would only lose 20% to redundancy.

RAID-5, 3x 1TB drives: just like the example above, but you lose more storage to redundancy; however, your chances of data loss are lower, 1 out of 3 vs 1 out of 4.

RAID-6, 4x 1TB drives: just like the RAID-5 example above with 4 drives, with one major difference: you would lose 50% to redundancy, and therefore only have 2TB of space. This gives you the same risk as the RAID-1 example, except that you would only have 1 array (and ideally 1 drive letter), which makes data management easier.

I think that about does it for examples; you can figure out your chances with other configurations and drive capacities yourself.
You mentioned 5-disk RAID 5. I think you should mention this option as well:

RAID 1 + RAID 5. You can do 2x 1TB drives and 3x 500GB drives for 2TB of usable space at lower cost, or 2x 1TB drives and 3x 1TB drives for 3TB of usable space at higher cost. RAID 1 on the first two drives for the OS and any situation where write speed is more important (pagefile, database, etc.). RAID 5 on the remaining 3 drives for the space and the read-speed advantage. Doing this gives you 2 redundant drives without the hardware-controller complexity issues of RAID 6.

Oh, and this: "your chances of data loss are lower, 1 out of 3 vs 1 out of 4" is wrong. Literally, "1 out of 3" means a 33% chance of data loss and "1 out of 4" means a 25% chance, which contradicts the intention of your statement. The chances of data loss are similar. What changes in direct proportion to the number of drives is the amount of potential data lost, not the chance of losing it (which also changes, but is not 1/3 vs 1/4 as implied).
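This point can be made concrete with a crude independent-failure model (the 5% annual per-drive failure rate below is purely an illustrative assumption, not a measured figure):

```python
# Crude model of RAID 5 risk: data is lost when 2 or more of the n drives
# fail within the same year (ignoring rebuild windows and bad blocks).
# The 5% annual per-drive failure rate is an illustrative assumption.

def p_raid5_loss(n_drives, p_drive=0.05):
    p_none = (1 - p_drive) ** n_drives                          # no failures
    p_one = n_drives * p_drive * (1 - p_drive) ** (n_drives - 1)  # exactly one
    return 1 - p_none - p_one                                   # two or more

for n in (3, 4, 5):
    print(f"{n}-drive RAID 5: {p_raid5_loss(n):.2%} yearly chance of >=2 failures")
```

The output shows the risk rising with drive count, but it stays in the low single-digit percents either way: nowhere near the 33% vs 25% that a literal reading of "1 out of 3" would suggest.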

murtoz
Posts: 122
Joined: Tue Mar 13, 2007 12:24 pm
Location: Wiltshire, UK

Post by murtoz » Sun Jan 13, 2008 11:02 am

sailorman wrote: I installed all the latest updates and drivers, and even did a fresh installation of Windows from scratch, but NOTHING... Windows still cannot recognise the RAID array, nor even the 4 drives of the array!!
Are you sure you installed the right drivers? You should be using the ICH9R RAID drivers. Asus.com is down at the moment, otherwise I'd post the link, but have a look on your driver disk.
Also, if you are trying to install Windows onto the RAID array, you'll need (the Vista equivalent of) an F6 driver during setup.

sailorman
Posts: 34
Joined: Tue Nov 06, 2007 2:31 pm
Location: Germany

Post by sailorman » Mon Jan 14, 2008 3:48 pm

I finally found the problem... for some reason Windows Vista installed the wrong drivers for the ICH9R controller, something like Intel 82801/ICH9... actually it was not completely wrong, but it didn't work.

When I manually installed the drivers for the Intel ICH9/ICH9R, everything went fine...

One more thing... while my RAID 5 array (4x 750GB) has 2095.9 GB of available space, Windows automatically made two different partitions in the array: one of 2048GB and one with the rest of the space (47.9GB)...

The problem is that I cannot do anything with the second, smaller partition... no allocation, no format, nothing...

Does anybody know why this happened??

LAThierry
Posts: 95
Joined: Tue Nov 14, 2006 4:15 pm
Location: Los Angeles, California

Post by LAThierry » Mon Jan 14, 2008 4:04 pm

If I'm not mistaken, some 32-bit operating systems have a 2TB disk size limit. A RAID array is viewed as a single disk by the OS. Going beyond that may require a 64-bit version of your OS.

EDIT: Did a bit more research... The limitation could come from either the OS or the motherboard's SATA controller.

For Windows XP Home edition (32-bit) I'm nearly certain there is a 2TB drive limit. I believe XP Pro (32-bit) gets around this with "dynamic disks". If you have 32-bit Vista, I'm not sure which category it falls into... No 64-bit OS (XP Pro, Vista, Linux...) should have the 2TB size limit.

Now for the onboard controller part: many of them (aside from high-end / server motherboards) and their drivers only handle 32-bit block addressing and will hit that 2TB limit as well. Going beyond that requires 64-bit LBA support, typically not found in onboard SATA but available on dedicated RAID cards.

You might be able to access all of your available disk space by creating two partitions.
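For what it's worth, the 2048GB split reported above matches the MBR partition-table limit exactly: an MBR partition entry stores its start and length as 32-bit sector counts, so with 512-byte sectors a single partition tops out at 2^32 sectors. A quick check:

```python
# An MBR partition entry holds its start and length as 32-bit sector
# counts, so with 512-byte sectors one partition caps at 2^32 sectors.

SECTOR_BYTES = 512
mbr_limit = 2**32 * SECTOR_BYTES
print(f"MBR partition limit: {mbr_limit / 2**30:.0f} GiB")    # 2048 GiB

array_gib = 2095.9      # reported size of the 4x 750GB RAID 5 array
leftover = array_gib - mbr_limit / 2**30
print(f"Leftover past the limit: {leftover:.1f} GiB")         # 47.9 GiB
```

The 47.9GB of unusable space falls out of the arithmetic exactly, which points at the partition scheme rather than the OS or the drivers.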

JazzJackRabbit
Posts: 1386
Joined: Fri Jun 18, 2004 6:53 pm

Post by JazzJackRabbit » Mon Jan 14, 2008 9:46 pm

I've been reading up on the 2TB limit as well, and to add to what LAThierry already said: XP 64-bit and Vista 64-bit can have partitions greater than 2TB, but they cannot be bootable. If you've got to boot, you'll have to boot from a <2TB partition. Not that big of a problem.

Another thing I found is that apparently you can also join two basic disks into one dynamic or spanned disk (not sure about the terminology here) to create the appearance of a single partition >2TB, even if the OS/hardware does not natively support it. Essentially you will be software-spanning several 2TB disks into one.

Nick Geraedts
SPCR Reviewer
Posts: 561
Joined: Tue May 30, 2006 8:22 pm
Location: Vancouver, BC

Post by Nick Geraedts » Tue Jan 15, 2008 3:09 am

Just so there's no confusion...

When using Windows, you'll need to use a GPT disk for arrays that are larger than 2TB. This disk is not bootable. Windows x64 and Server 2003 (32- and 64-bit) support this type of disk.

For more information, see here.

Spanning disks is just a bad idea in my eyes. Under very few circumstances would you ever want to use JBOD.

sailorman
Posts: 34
Joined: Tue Nov 06, 2007 2:31 pm
Location: Germany

Post by sailorman » Tue Jan 15, 2008 11:39 am

LAThierry wrote: If I'm not mistaken, some 32-bit operating systems have a 2TB disk size limit. A RAID array is viewed as a single disk by the OS. Going beyond that may require a 64-bit version of your OS.
But I have a 64-bit OS!!! Windows Vista Ultimate 64-bit!

Probably the problem comes from the SATA controller, as you said...

But my problem is also that I can NOT do anything with the rest of my available space (47.9GB)... no partition, no format, nothing....
It's just there... unusable!

It's no big deal next to 2TB, but I could use it for backups of some program files, for example... I don't know why this happened... does someone else??

KnightRT
Posts: 100
Joined: Sun Nov 21, 2004 11:13 pm

Post by KnightRT » Thu Jan 17, 2008 6:37 pm

Don't bother with RAID-5 if you aren't going to buy a proper RAID card. The write speeds are garbage, and it has nothing to do with CPU power. If you're fine with 4 MB/s, by all means go ahead. Otherwise, read here:

http://episteme.arstechnica.com/eve/for ... 9008479831

RAID-5 via Windows Server 2003 or a hacked version of XP sucks far less, provided you're willing to tie the health of your RAID array to your main install. It doesn't support any form of capacity expansion, though.

Post Reply