RAID5 using the motherboard only?

Silencing hard drives, optical drives and other storage devices

Moderators: NeilBlanchard, Ralf Hutter, sthayashi, Lawrence Lee

shunx
Posts: 341
Joined: Sun Oct 06, 2002 1:20 pm
Location: Vancouver

RAID5 using the motherboard only?

Post by shunx » Sat Feb 24, 2007 5:07 pm

Some new motherboards support RAID 5, e.g. the Asus P5N32-E SLI/SLI Plus. Presumably you just need to plug in a few SATA hard drives to get this feature. Has anyone tried this out, and how well does it work?

P.S. Name some other boards that support RAID 5 if you know them, thanks.

angelkiller
Posts: 871
Joined: Fri Jan 05, 2007 11:37 am
Location: North Carolina

Post by angelkiller » Sat Feb 24, 2007 8:17 pm

DON'T DO IT!!

I've done it, and still regret it. :x

Sorry for the enthusiasm, but I've tried it and had terrible results. I had four 80GB drives in a RAID 0+1 array. It worked pretty well (though it was extremely noisy!). I was looking for a performance gain, so I decided to migrate to RAID 5. See my sig for details about my system. Anyway, I went to RAID 5.

I messed up in three places. First, I put all four of my drives in the RAID 5 array. This includes the page file (what your system uses when memory gets full). So every time Windows writes to the PF, parity data must be calculated. (Also note that your CPU does the parity calculations.) Since the PF is already slower than the RAM, calculating parity data slows it down even more. My second mistake was setting the stripe size to 32KB. I don't fully understand how this affects things, but from what I've heard, it slows things down for gaming and general usage. My third mistake was getting out of a RAID 5. Look at this thread, and you will see how much trouble it was to get out.
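The parity overhead described above can be sketched in a few lines of Python. This is a toy model with made-up 4-byte blocks, not how any real controller is implemented, but XOR parity is what single-parity RAID5 actually computes:

```python
# Toy RAID5 parity: three data disks plus one parity disk per stripe.
# This XOR is the work a motherboard "RAID" offloads to your CPU on
# every write, page file included.

def parity(blocks):
    """XOR equal-length data blocks together to get the parity block."""
    result = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            result[i] ^= byte
    return bytes(result)

# One stripe: data on disks 0-2, parity on disk 3.
d0, d1, d2 = b"AAAA", b"BBBB", b"CCCC"
p = parity([d0, d1, d2])

# If any single disk dies, its block is rebuilt by XOR-ing the rest:
assert parity([d0, d2, p]) == d1
```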

Now I'm running a single drive because I'm SICK of the so-called "benefits" of RAID. Sorry for the rant. :oops: I don't want anyone else in the same situation I got myself into. I would advise against RAID 5 unless you have a hardware RAID card. The mobo RAID is FAR inferior to hardware-based cards, especially for RAID 5.

And most new mobos support RAID, and most Intel-based boards support RAID 5 as well.

shunx
Posts: 341
Joined: Sun Oct 06, 2002 1:20 pm
Location: Vancouver

Post by shunx » Sat Feb 24, 2007 10:42 pm

Thanks, the thread is convincing. I think I'll either use a separate controller or forget about the whole thing.

But apparently the Highpoint cards are also quite CPU intensive:
"On full speed transfers (either read or write), the cpu usage can go as high as 50% (average 30% throughout) on an Athlon 64 3000+. Idling cpu usage is a constant 5%."

Ethyriel
Posts: 93
Joined: Sun Dec 24, 2006 12:47 am
Location: Arizona

Post by Ethyriel » Sat Feb 24, 2007 11:52 pm

A lot of add-on cards do their RAID 5 in software too, and software RAID 5 will be slow. You still get the redundancy, but it's almost always going to be slower than a single drive. If you're in it for speed, set up a level 0 array. If you want redundancy with a little speed improvement on reads, then go for level 1. If you want both, then get a RAID controller which handles level 5 arrays in hardware. I like 3ware, and Areca is also excellent; Adaptec and LSI are always good as well, while Highpoint and Promise are pretty meh.

EsaT
Posts: 473
Joined: Sun Aug 13, 2006 1:53 am
Location: 61.6° N, 29.5° E - Finland

Post by EsaT » Sat Feb 24, 2007 11:56 pm

It isn't exactly a "state secret" that the RAID5 write performance of all integrated solutions is craptacular and can drop to 10-20% of a single disk's.

And here are speeds even with an at least half-hardware-based solution:
http://www.xbitlabs.com/articles/storag ... 20_12.html
(RAID5 is well slower than a single disk, even with big writes)

shunx wrote:P.S. Name some other boards that support RAID 5 if you know them, thanks.
Most motherboards using i965 and i975 chipsets.

shunx wrote:But apparently the Highpoint cards are also quite CPU intensive:
"On full speed transfers (either read or write), the cpu usage can go as high as 50% (average 30% throughout) on an Athlon 64 3000+. Idling cpu usage is a constant 5%."
Yep, that's what happens without a processor of its own.
Apparently many HighPoint cards don't have their own processor, and they're otherwise curious too in that they don't have their own buffer/cache either.
Also, at least the RR2300 can't interleave ("stripe") RAID1 reads, even between files.

With separate controllers the 33MHz PCI bus is quite cramped, so PCIe would be the way to go.

Instead of the motherboard's (i975/ICH7R) RAID, I'm myself looking towards the AMCC/3Ware 9650SE, because it's fully hardware based with its own buffer, it does both "intra-" and "inter-"file interleaved reads in RAID1, and its all-around performance appears strong in tests (Promise somehow slows down when mirroring is involved). Also, judging from the questions I've asked, their support is very good. (Their manuals are also the clearest.)

Here's about the only review with multiple cards:
http://www.tomshardware.com/2006/12/13/ ... index.html

EDIT:
Also, one other thing with integrated solutions: if you want to move the data on your HDs to the next computer with the least amount of hassle, RAID arrays made by different makers' solutions are incompatible.
Apparently RAID5 is especially problematic in that respect. A friend of mine once moved a RAID1 array to a different motherboard without losing data, but I suppose the motherboards having the same maker's chipset could explain that. Also, Intel's RAID offers an option for resetting the disks in a RAID1 array to non-RAID without losing data (not possible with other RAID forms), which would be a foolproof assurance of the data's movability.
Of course, with separate controllers there are none of these worries, because you can keep the controller when changing PC or motherboard.

Cerb
Posts: 391
Joined: Tue Apr 13, 2004 6:36 pm
Location: GA (US)

Post by Cerb » Sun Feb 25, 2007 12:23 am

First off, extra performance with RAID 5? Only for reads, and then only sometimes. In every other way it will be slower than 0+1, but gets you a little more space.

RAID other than 0 or 1 taxing your CPU will just not be fun. Good for cheap plain old storage, maybe, and the 0+1 thing on two drives might be neat, but...the grass ain't always greener.

Many fairly cheap controller cards are not 'real' hardware RAID, but just the normal CPU-draining type with a fancy little boot BIOS and such, and they tend to offload the tough parts (like parity checking) to the CPU.

In general, regular backups > fake RAID. It can be nice, but if you stick with it long enough, you'll get angry at it.

kakazza
Posts: 32
Joined: Fri Feb 23, 2007 1:27 am

Post by kakazza » Sun Feb 25, 2007 6:06 am

If you go for onboard RAID, then ICH8R. Intel RAID is by far the best onboard RAID out there.

alkolkin
Posts: 25
Joined: Tue Oct 17, 2006 9:34 am
Location: Florida
Contact:

Good experience with RAID 5 -- BACKUP is Best Practice

Post by alkolkin » Sun Feb 25, 2007 6:47 am

First of all, it is my FIRM belief that you need to have TWO physical RAID setups: one for the system, with RAID 0 on two drives, and one for data, on as many drives as you need, configured as RAID 5. I use four 74GB Raptors, and though they are fast as heck, they are noisy - I would use three Raptor 150s if I had the money to redo this and noise were not the main issue.

Now, it is important to back up for several reasons. First, if a RAID 0 system drive fails and you need to get back up quickly, the best way is using a product called Acronis Workstation or Acronis Home - http://www.acronis.com. You can back up to an external drive or to a partition on the RAID 5 data array. If you use the latter, then you can restore almost instantly to the single drive that remains from your RAID 0, and it even gives you the capability of starting to work WHILE the restore of your system drive is running. Check the reviews on this product line; they are great, and I know it works well.

For the data on the RAID 5, you have some protection because it will survive the failure of one drive due to the striping. Still, back up daily for maximum protection. I would NOT put my PF on the RAID 5; there is no need for striping that file! I would back up data daily and the system partition weekly for the fastest restore capabilities. Acronis will back up and restore to bare drives, to Linux, XP, Vista, and it is cheap! Only about $100 with all the bells and whistles, much less if you do not need them all.

Now, if you have a modern dual processor and set it up the way I suggest, you will NOT have any performance issues from the onboard RAID eating up too much CPU time. My AMD 4800+ NEVER uses more than 5% of its resources at idle and rarely goes over 80% when I am hitting it hard with work, defragmenting, virus scanning, etc. If you are using Vista, I find that I get no extra benefit from ReadyBoost because my RAID setup is so very, very fast. If you have a server situation, then there are advantages to hardware RAID, with a large cache and battery backup, but only if you have "a lot" of hits against it, such as a large database or application server.

I am in a rush to go out now, but PLEASE comment back and I will be happy to answer any questions. I have 40 years of IT experience, and I can tell you that for a desktop solution, and even many server solutions, RAID 5 works well; the extra expense of a hardware RAID solution is only necessary for special situations or old-style processors.

Hope this helps.
alkolkin

EsaT
Posts: 473
Joined: Sun Aug 13, 2006 1:53 am
Location: 61.6° N, 29.5° E - Finland

Re: Good experience with RAID 5 -- BACKUP is Best Practice

Post by EsaT » Sun Feb 25, 2007 8:43 am

alkolkin wrote:First, if a RAID 0 system drive fails, and you need to get back up quickly, best way is using a product called Acronis Workstation or Acronis Home - http://www.acronis.com. You can back up to an external drive or to a partition on the RAID 5 data drive. If you use the latter, then you can restore almost instantly to the single drive that remains from your RAID 0...
There's no way to restore the full content of a RAID0 array to a single disk of that array: it doesn't fit.

For pure performance RAID0 is best, because with a good implementation read/write speed increases in relation to the number of disks in the array... but every additional disk (after the first) halves the array's statistical reliability.
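That reliability point can be put as a one-line back-of-the-envelope model, assuming independent, identical drives; the numbers below are purely illustrative:

```python
# If each disk survives a given period with probability p, a RAID0
# array of n disks survives only if ALL of them do: p**n.

def raid0_survival(p, n):
    return p ** n

# With p = 0.5 over some (long) period, each extra disk halves the odds:
assert raid0_survival(0.5, 2) == 0.25
assert raid0_survival(0.5, 3) == 0.125
```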
alkolkin wrote:For the data on the RAID 5, you have some protection because it will survive the failure of one drive due to the striping.
Now, in what IT company have you worked?
It's precisely striping (spreading data between the disks in an array) which makes all the data on the array vulnerable to the failure of a single disk. RAID5's redundancy is achieved by using one disk's worth of space for parity data, which is also its Achilles heel, because it makes writes expensive/slow: there's always a need for a parity calculation, plus the extra write required for storing the result. (And a small write can require first doing two reads before the parity calculation is even possible.)
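The small-write penalty in that parenthesis can be traced step by step. This is a toy sketch with made-up 4-byte block contents; real controllers work on whole sectors, but the XOR algebra is the same:

```python
# Read-modify-write: updating ONE data block on a RAID5 stripe costs
# two reads and two writes, which is why small writes are so expensive.

def xor(a, b):
    """Byte-wise XOR of two equal-length blocks."""
    return bytes(x ^ y for x, y in zip(a, b))

old_data   = b"AAAA"              # the block being overwritten (disk 0)
old_parity = b"\x40\x40\x40\x40"  # parity of the stripe AAAA/BBBB/CCCC
new_data   = b"ZZZZ"

# Steps 1-2: READ the old data block and the old parity block.
# Step  3:   recompute parity without touching the other data disks:
new_parity = xor(xor(old_parity, old_data), new_data)
# Steps 4-5: WRITE the new data block and the new parity block.

# Sanity check: same result as XOR-ing the whole new stripe from scratch.
assert new_parity == xor(xor(b"ZZZZ", b"BBBB"), b"CCCC")
```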

With a good implementation RAID5's read speed can increase nearly like RAID0's (with one disk less in the array), but its write speed can exceed that of a single disk only with big writes; with small writes, performance very easily drops below a single disk.
If both good all-around performance and redundancy are needed, RAID10 is the best solution.


Some write speed comparison between hardware and software (CPU) based RAID5 solutions:
http://www.gamepc.com/labs/view_content ... 505&page=9
(the difference between ICH7R and ICH8 isn't big; both do the RAID5 calculations similarly in software, on the CPU)


Here's good page for visualizing performance differences between RAID modes with "entry level" hardware solution:
http://www.xbitlabs.com/articles/storag ... 20_10.html
Although this controller seems to be incapable of interleaving RAID1 reads. With a good implementation, RAID1 can give read speeds very similar to RAID0 with the same number of disks. (RAID1's write speed is always similar to a single disk's.)

alkolkin wrote:If you have a server situation, then there are advantages to hardware RAID, with a large cache and battery backup
I think a battery backup for the whole PC, aka a UPS, is more important than a battery backup for the RAID controller, regardless of whether it's a server or a home/workstation PC. :wink:

alkolkin
Posts: 25
Joined: Tue Oct 17, 2006 9:34 am
Location: Florida
Contact:

Response to EsaT

Post by alkolkin » Sun Feb 25, 2007 9:57 am

I was in IT at RCA, Chase Manhattan Bank, Wells Fargo Bank, and at QMS. Hope that I have answered the intent of your question. I have been "retired" for 5 years.

Well, you may be right about the RAID 0 restore; I have never had my RAID 0 fail with a faulty drive! This may be particularly true with Acronis, because it does a sector-by-sector backup rather than a pure file backup, even though you have the choice of doing a file backup instead. With regard to multiple drives in the RAID 0, you are right about speed and reliability. Most desktop users, however, will not invest in more than two drives, in my untested and unverified opinion.

With regard to RAID 5, yes, writes may be slower, but I have had a drive fail and have been able to keep working until a new one came in and I could replace it. With my Raptors, the responsiveness was reduced, but still quite adequate until the fourth drive was rebuilt. Again, with Raptors, a desktop computer will not see the kind of frequent writes to the data drive that would cause too noticeable a reduction. Clearly, some applications will, but one has to look carefully at his/her own needs. So, though statistically the probability of one drive failing increases with the number of disks, I do not understand how the data could be lost with the loss of one of the disks. Perhaps I misunderstand.

With regard to hardware RAID being faster, MOST desktop users will not experience much of a difference, even though benchmarks clearly show hardware RAID's superior speed. I can imagine situations such as digital imaging where a desktop is used for intensive writing, but I cannot imagine that the majority of users need that enough to justify the extra expense.

With regard to battery backup, yes I agree about a UPS attached to the system as a vital part of overall protection of data, hardware components, and your peace of mind. Redundancy of the cache backup is just an additional margin of protection.

So, what no site really talks about is my philosophy of separating the RAID functions across different physical drives, e.g. separating the data (perhaps even the digital imaging data from the WP and other output). In other words, RAID 0 applied to the system, with its frequent reads and writes, and RAID 5 for data, which requires an extra level of protection not required of the OS. Yes RAID 10 is beautiful but expensive. I have not done the total calculation of costs to determine whether my philosophy is more or less expensive than is RAID 10, but intuitively I believe it is less.

Thank you for the effort you put into this response. I am addicted to learning this stuff better, more so than I was when I worked in IT and was responsible for THEIR computers.

Alkolkin

EsaT
Posts: 473
Joined: Sun Aug 13, 2006 1:53 am
Location: 61.6° N, 29.5° E - Finland

Re: Response to EsaT

Post by EsaT » Sun Feb 25, 2007 11:29 am

alkolkin wrote:So, though statistically the probability of one drive failing increases with the number of disks, I do not understand how the data could be lost with the loss of one of the disks.
In RAID0 the content of every file, except maybe those smaller than the stripe's block size, is spread across the disks in the array. Now, how many files stay useful, or even partially readable, if part of them is missing?
(Text data could be readable, but it might be at the least inconvenient if half of the pages are missing, especially if those contain account data.)
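As a toy illustration of that striping (a hypothetical two-disk array with a made-up 4-byte stripe unit; real arrays use stripe units of tens of KB):

```python
STRIPE = 4  # made-up stripe unit, for illustration only

def stripe_to_disks(data, n_disks=2):
    """Distribute a file's bytes round-robin across the array's disks."""
    disks = [bytearray() for _ in range(n_disks)]
    for i in range(0, len(data), STRIPE):
        disks[(i // STRIPE) % n_disks] += data[i:i + STRIPE]
    return disks

disk0, disk1 = stripe_to_disks(b"ABCDEFGHIJKLMNOP")
assert bytes(disk0) == b"ABCDIJKL"  # every other chunk of the file...
assert bytes(disk1) == b"EFGHMNOP"  # ...so losing either disk guts it
```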
alkolkin wrote:Yes RAID 10 is beautiful but expensive. I have not done the total calculation of costs to determine whether my philosophy is more or less expensive than is RAID 10, but intuitively I believe it is less.
Despite the 50% capacity "waste", with current HD prices (500GB for ~130€) even RAID10 would be affordable if the amount of data isn't really big and consistent performance is wanted. Even most digicams produce quite small (too tightly compressed) images. Also, most people don't use a memory card whose backup during transfer to the PC fills a whole DVD.

And a single array is always easier to manage than multiple different arrays; having RAID0 and RAID5 arrays simultaneously might not even be possible on many (especially older or cheap) motherboards for lack of enough SATA ports. Now, there are jerry-rigs like Intel's Matrix RAID which allow dividing disks between multiple different RAID arrays (splitting two disks between RAID0 and RAID1, for example), but I would suspect the performance of those to be something undesirable if both arrays are operated simultaneously (reading something from one while Windows swaps memory content to the other).


alkolkin wrote:I was in IT at RCA, Chase Manhattan Bank, Wells Fargo Bank, and at QMS...
I have been "retired" for 5 years.
And they still keep bothering you when the current staff starts asking what the command line means? :lol:
Sorry... I got seriously twisted sense of humor.

Arvo
Posts: 294
Joined: Sat Jun 10, 2006 1:30 pm
Location: Estonia, EU :)
Contact:

Post by Arvo » Sun Feb 25, 2007 11:53 am

Motherboard- or cheap-controller-provided RAID5 can be usable in desktop PCs only if you don't use it for constant and/or speedy writing. What you can't do with such a RAID5:

- use it as a system drive with page file
- use it as fast streaming destination

If you need big storage space (1.5TB and more currently - three times the maximum available HDD size, not counting 750GB drives yet), redundancy (there's no cheap solution for backing up 1.5TB of data), and you can live with the slow writing of big files to storage, then RAID5 is cheaper than RAID10 (or 0+1).

For example, for 1.5TB of storage you need 4x500GB drives in RAID5, but 6x500GB drives for RAID10. Of course RAID10 will be faster for copying big files (over 1GB, or about half of system memory, using the large cache in Windows) into it, but you save two HDDs using RAID5.
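The disk-count arithmetic above can be sketched as follows (sizes in GB; the helper names are made up for illustration):

```python
import math

def raid5_disks(capacity_gb, disk_gb):
    # RAID5 spends one disk's worth of space on parity: n disks
    # give (n - 1) * disk_gb of usable capacity.
    return math.ceil(capacity_gb / disk_gb) + 1

def raid10_disks(capacity_gb, disk_gb):
    # RAID10 mirrors everything: n disks give (n / 2) * disk_gb.
    return 2 * math.ceil(capacity_gb / disk_gb)

assert raid5_disks(1500, 500) == 4   # as in the post
assert raid10_disks(1500, 500) == 6  # two extra drives for the mirror
```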

alkolkin
Posts: 25
Joined: Tue Oct 17, 2006 9:34 am
Location: Florida
Contact:

Raid 0 WILL restore to single drive

Post by alkolkin » Tue Feb 27, 2007 1:44 pm

Well, I did not mean to test this EsaT, BUT, my first system drive failed!! I should not have said it never failed in my email -- that is what I get for being such a big mouth.

As I said, you may be right about not being able to restore from a two-drive RAID 0 system to a one-drive system, because Acronis does a sector-by-sector backup, so of course restoring to a single drive from a 2-drive RAID 0 might not be possible. Fortunately, you can also do a file-by-file restore to bare metal, and that worked just fine, even though it would not have been possible with a sector-by-sector restore. Phew. I am REALLY glad you were wrong.

Also, a subsequent reply said that I would lose data if a RAID 0 failed, when I had said that I did not understand how we could lose data from a RAID 5. So I think there was a misreading of my intent.

The person who replied that large chunks of writes or streaming data would be better with a separate controller was also right. That is why, when someone asks which is better, a built-in RAID or a separate controller, the answer is ALWAYS "It depends." I guess that is why we get the big bucks. And now that I am retired and getting ever closer to 70 years old, the word "Depends" never comes out of my mouth! LOLOLOL

matt_garman
*Lifetime Patron*
Posts: 541
Joined: Sun Jan 04, 2004 11:35 am
Location: Chicago, Ill., USA
Contact:

Post by matt_garman » Tue Feb 27, 2007 2:37 pm

I have no first-hand experience with on-motherboard RAID implementations, but I've seen very few, if any, encouraging reports on them. Furthermore, I think it's very misleading to call these "hardware" RAID: the implementation is firmware at best, and the extra real-time work needed to manage the RAID is still done by your CPU.

The true hardware RAID cards have both the RAID algorithms and the management work all implemented in (often specialized) hardware. If you can't tell by the literature, you can tell by the price. :)

I've read over and over again that the 3ware RAID cards are top-notch. Most people say that if you can afford one, buy one. The downside to any hardware RAID solution, as has already been mentioned, is that if the card itself dies, you need to find an exact replacement.

Now, what I've personally found to be a great compromise is Linux software RAID. If you're at least semi-comfortable with Linux, you should be able to use its md (multi-disk, aka RAID) functionality. In my opinion, Linux md beats (or at least ties) on-board RAID implementations in every way:
  • Performance is the same (some would argue better)
  • It's free
  • Not hardware dependent. I've literally moved my RAID array from one machine to another, with no problems. And it's more flexible than that: you could conceivably replace, e.g. a PATA drive with a SATA drive in your RAID array.
  • The Linux md code is stable, mature, and well tested. Who knows anything about the code quality of these quasi-hardware RAID solutions?
Finally, many people (myself included) use RAID 5 as an excuse not to back up. As has already been said, backing up 1.5TB or more is quite expensive. But the intent of RAID5 is availability: if one of the drives that hosts your mission-critical database goes down during business hours, you don't have to close up shop for the day.

Just my thoughts!
Matt

kakazza
Posts: 32
Joined: Fri Feb 23, 2007 1:27 am

Post by kakazza » Wed Feb 28, 2007 4:31 am

matt, do you happen to know how md can warn you when there is a faulty drive?
Does it mail you? Does it play a sound? What are the options?
I wasn't able to really confirm any of these.

matt_garman
*Lifetime Patron*
Posts: 541
Joined: Sun Jan 04, 2004 11:35 am
Location: Chicago, Ill., USA
Contact:

Post by matt_garman » Wed Feb 28, 2007 5:29 am

kakazza wrote:matt, do you happen to know how md can warn you when there is a faulty drive?
Does it mail you? Does it play a sound? What are the options?
I wasn't able to really confirm any of these.
Yes. The tool for administering an md array, mdadm, has a "monitor" aka "follow" mode where it will act on state changes to the array. When a change occurs, it can either email you or run an arbitrary script (meaning you can have it do whatever you want).

I've seen this in action, too, as I had a drive "die" (it was actually a cabling problem, but the effect was the same).

EsaT
Posts: 473
Joined: Sun Aug 13, 2006 1:53 am
Location: 61.6° N, 29.5° E - Finland

Post by EsaT » Wed Feb 28, 2007 1:00 pm

matt_garman wrote:If you can't tell by the literature, you can tell by the price. :)
In my case the price told me that AMCC/3Ware prices in Finland are robbery... for 10% more I got the 8-port version from Germany...

matt_garman wrote:The downside to any hardware RAID solution, as has already been mentioned, is that if the card itself dies, you need to find an exact replacement.
That's apparently a problem of the SCSI department.
A friend of mine works in the IT department of a bigger company, and he said that even different firmware in the controller might make it unable to find the array.

In the case of SATA the situation is better; see page 131:
http://www.3ware.com/support/UserDocs/3 ... rGuide.pdf

matt_garman wrote:Finally, many people (myself included) use RAID 5 as an excuse to not backup. As has already been said, backing up 1.5TB or more is quite expensive.
And probably that won't change: Blu-ray is already cramped from the start for backing up current high-capacity HDs, and there's no question that the copyright mafia/nazis will delay the introduction of every new medium as they quarrel over which expensive use-limiting system (which will be cracked before the medium even becomes popular) they force into the standard.
