Four notebook drives in RAID 1+0 in a desktop?

Silencing hard drives, optical drives and other storage devices

Moderators: NeilBlanchard, Ralf Hutter, sthayashi, Lawrence Lee

Post Reply
colin2
Posts: 145
Joined: Wed Jul 11, 2007 2:40 pm
Location: Seattle

Four notebook drives in RAID 1+0 in a desktop?

Post by colin2 » Mon Aug 27, 2007 1:13 pm

I'm planning on using the BIOSTAR TA690G which can handle four SATA II drives. I was planning on using two of these

Western Digital Scorpio WD2500BEVS 250GB 5400 RPM 8MB Cache Serial ATA150

in RAID 1. $190 each so $380 total; I'm willing to pay for the redundancy. But then I remembered the "inexpensive" part in RAID and noticed that these

Western Digital Scorpio WD1200BEVS 120GB 5400 RPM 8MB Cache Serial ATA150 Notebook Hard Drive

sell for only $70 each. Four of them in RAID 1+0 would be only $280 total and provide almost as much space.

I have a feeling this is a bad idea but I'm not sure why. I'd double power and noise, but both are pretty low already.

lm
Friend of SPCR
Posts: 1251
Joined: Wed Dec 17, 2003 6:14 am
Location: Finland

Post by lm » Mon Aug 27, 2007 3:29 pm

What are you trying to achieve with this setup?

ddrueding1
Posts: 419
Joined: Sun Sep 19, 2004 1:05 pm
Location: Palo Alto, CA

Post by ddrueding1 » Mon Aug 27, 2007 4:14 pm

Doubling power and noise almost doesn't matter with 2.5" drives in a desktop; they are so quiet to begin with. By doing RAID10 you will get 60-90% more transfer rate, but will get an ~5% hit in access times. 5400RPM drives already have a pretty lame access time as it is, so this will not be fast. That said, the RAID10 setup will be faster than the RAID1 setup. Combine that with the lower cost and I think you have a winner.

How do you plan on mounting the drives? I've seen hot swap enclosures at Fry's that stick 4x2.5" SATA in a single 5.25" bay. Not sure how much quieting/cooling that gives, or how necessary it would be.

colin2
Posts: 145
Joined: Wed Jul 11, 2007 2:40 pm
Location: Seattle

Post by colin2 » Mon Aug 27, 2007 4:50 pm

lm: thanks. The list would be low power and noise, resiliency (the ability to carry on computing with one drive failure rather than having to do a time-consuming system restore from backup), and then GB/$.

(I guess the alternative would be to use an external usb drive to do a full backup plus some easy software that would automate a full system restore if the internal HD failed. Since you can get a 500GB external USB for $100 or so that might be more cost-effective; it's the software part I know nothing about.)

ddrueding1: Sadly the Biostar mobo only does RAIDs 0, 1, and 1+0, though I guess I could buy a PCI RAID card.

If I did this I'd try to elastic-mount them all in the drive cage of an Antec Solo. The Solo provides for elastic-mounting only three drives, but they're tiny things so it should be possible to get four in with good airflow.

ddrueding1
Posts: 419
Joined: Sun Sep 19, 2004 1:05 pm
Location: Palo Alto, CA

Post by ddrueding1 » Mon Aug 27, 2007 5:11 pm

Raid 1+0 == RAID10

RAID 0+1 is different (a mirror of stripes, as opposed to RAID 1+0's stripe of mirrors)

From Wikipedia:
http://en.wikipedia.org/wiki/RAID#Nested_RAID_levels
The key difference from RAID 0+1 is that RAID 1+0 creates a striped set from a series of mirrored drives. The array can sustain multiple drive losses as long as no two drives lost comprise a single pair of one mirror.
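That pairing rule is easy to check exhaustively. Here's a toy sketch in Python (the pairing of drives 0/1 and 2/3 into mirrors is my assumption for a generic 4-drive RAID 1+0):

```python
from itertools import combinations

# Hypothetical 4-drive RAID 1+0: two mirrored pairs, striped together.
# Assume pair 0 = drives {0, 1} and pair 1 = drives {2, 3}.
mirror_pairs = [{0, 1}, {2, 3}]

def survives(failed):
    """The array survives as long as no mirror pair loses both members."""
    return all(not pair <= set(failed) for pair in mirror_pairs)

# Enumerate every possible two-drive failure.
for failed in combinations(range(4), 2):
    print(failed, "OK" if survives(failed) else "DATA LOSS")
```

Running it shows that 4 of the 6 possible two-drive failures are survivable; only losing both halves of the same mirror kills the array.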

jessekopelman
Posts: 1406
Joined: Tue Feb 13, 2007 7:28 pm
Location: USA

Re: Four notebook drives in RAID 1+0 a desktop?

Post by jessekopelman » Mon Aug 27, 2007 7:53 pm

colin2 wrote:I'd double power and noise, but both are pretty low already.
The nice thing is that doubling noise does not mean doubling loudness. Generally, it takes a 10X increase in noise to perceive a 2X increase in loudness. Doubling noise should only produce about a 23% increase in loudness. Sometimes nonlinear relationships are your friend.
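That 23% figure checks out as a back-of-the-envelope calculation, using the common rule of thumb that +10 dB is perceived as roughly twice as loud:

```python
import math

# Doubling acoustic power adds 10*log10(2) ~= 3 dB.
db_increase = 10 * math.log10(2)

# Rule of thumb: +10 dB is perceived as about twice as loud,
# so perceived loudness scales as 2 ** (dB / 10).
loudness_ratio = 2 ** (db_increase / 10)
print(f"+{db_increase:.1f} dB -> about {100 * (loudness_ratio - 1):.0f}% louder")
```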

StanF
Posts: 58
Joined: Tue Oct 10, 2006 9:39 am
Location: Texas
Contact:

Post by StanF » Tue Aug 28, 2007 2:30 am

I like the idea of four 2.5" drives.

However, more drives = less reliable

You are more likely to have failures if you have four drives.

drees
Posts: 157
Joined: Thu Jul 13, 2006 10:59 pm

Post by drees » Tue Aug 28, 2007 10:48 am

ddrueding1 wrote:Doubling power and noise almost doesn't matter with 2.5" drives in a desktop; they are so quiet to begin with. By doing RAID10 you will get 60-90% more transfer rate, but will get an ~5% hit in access times. 5400RPM drives already have a pretty lame access time as it is, so this will not be fast. That said, the RAID10 setup will be faster than the RAID1 setup. Combine that with the lower cost and I think you have a winner.
Just to clarify, in theory, best-case performance for these RAID levels should look like this:

RAID 1 (2 drives):
Reads: 2x (good controller can submit read requests to both discs)
Writes: 1x

RAID 10 (4 drives):
Reads: 4x
Writes: 2x

Note that these numbers are only reachable assuming you have the perfect RAID controller. Latency for any single small read won't be any faster than a single drive, either. But any time you start large read/write operations the potential for increased performance is there. Random read loads will typically improve a good deal as well.

A good RAID controller is critical to get good performance. Preferably a good hardware RAID controller with battery backed memory.

Do note that streaming-read throughput, depending on the RAID controller, is often cut in half unless you have multiple applications performing reads.

And yes, the chance of a disc failure goes up linearly with every disc you add to the array.

colin2
Posts: 145
Joined: Wed Jul 11, 2007 2:40 pm
Location: Seattle

Post by colin2 » Tue Aug 28, 2007 10:50 am

Thanks ddrueding1! I'm slowly wrapping my head around this. The advantage, to pick up on StanF, is that while you certainly increase arithmetically the chance that you will *have* a drive failure each time you add a drive, the mirroring provided by RAID 1+0 (or RAID 10!) makes it much, much less likely (compared to a single drive) that you will experience the *simultaneous* failure of two drives that would cause you to lose data and functionality.
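To put rough numbers on that intuition, here's a toy calculation. The 5% per-drive failure probability is made up, and it assumes independent failures and instant rebuilds (a real rebuild window makes things somewhat worse):

```python
# Illustrative numbers only: assume each drive independently fails
# within some period with probability p.
p = 0.05  # hypothetical per-drive failure probability

# Single drive: you lose data whenever it fails.
single = p

# 4-drive RAID 1+0: data is lost only if both drives of the same
# mirror pair fail. Probability a given pair dies: p * p.
# Two pairs, so the array dies with probability:
pair_dies = p * p
raid10 = 1 - (1 - pair_dies) ** 2

print(f"single drive: {single:.4f}, RAID 1+0: {raid10:.6f}")
```

With these made-up numbers the four-drive array is roughly ten times less likely to lose data than the single drive, despite having four times as many drives that can fail.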

(I assume the software has a routine that tells you when one drive conks out and automatically rebuilds when you stick a new one in.)

The real cost, aside from the money, seems to be that it makes the PC a more complex machine, and mobo-provided RAID in particular has a poor reputation. So if I'm increasing the chances that it won't boot or will screw itself up in other ways, I might be better off with just one drive and some sort of external backup/restore solution.

colin2
Posts: 145
Joined: Wed Jul 11, 2007 2:40 pm
Location: Seattle

Post by colin2 » Tue Aug 28, 2007 10:53 am

drees, does "a good hardware RAID controller" mean having a dedicated PCI card do this and not the Biostar mobo? If so, can you suggest a card?

drees
Posts: 157
Joined: Thu Jul 13, 2006 10:59 pm

Post by drees » Tue Aug 28, 2007 11:30 am

colin2 wrote:The advantage, to pick up on StanF, is that while you certainly increase arithmetically the possibility that you will *have* a drive failure each time you add a drive, the mirroring provided by RAID 1+0 (or RAID 10!) makes it much much less likely (compared to a single drive) that you will experience the *simultanteous* joint failure of two drives that would cause you to lose data and functionality.
Exactly! Any time you experience a drive failure in a RAID array, you should try to replace the failed drive as soon as possible. Often drive failures happen in batches so in enterprise setups it's very common to have hot-spares which can go online automatically to replace the failed drive.
colin2 wrote:drees, does "a good hardware RAID controller" mean having a dedicated PCI card do this and not the Biostar mobo? If so, can you suggest a card?
Onboard RAID controllers typically aren't real hardware RAID controllers; they consist of a software stub and are actually software RAID.

I have never trusted these fake hardware RAID cards myself, so if I want to RAID drives using the onboard controller I will usually use software RAID (I'm a Linux guy).

As far as hardware RAID controllers go, I've had good experiences with 3ware cards. Adaptec, Areca, Promise, Highpoint all make some good cards, too. Keep in mind that any card less than $50 is very likely to be fake hardware RAID. Look for something that says it does the processing onboard and also supports RAID 5 for something that is likely to be a real RAID card.

colin2
Posts: 145
Joined: Wed Jul 11, 2007 2:40 pm
Location: Seattle

Post by colin2 » Tue Aug 28, 2007 12:33 pm

This is very useful. So I take it that without a real hardware RAID card, this scheme might not be a good idea.

Related question for anyone: I'm noticing that the TA690G's manual (a) assumes you're connecting a hard drive to the IDE header and (b) does not explicitly provide for booting from a SATA drive or array thereof. Is it possible that this board requires an IDE HD to boot?

Nick Geraedts
SPCR Reviewer
Posts: 561
Joined: Tue May 30, 2006 8:22 pm
Location: Vancouver, BC

Post by Nick Geraedts » Tue Aug 28, 2007 12:50 pm

drees wrote:As far as hardware RAID controllers go, I've had good experiences with 3ware cards. Adaptec, Areca, Promise, Highpoint all make some good cards, too. Keep in mind that any card less than $50 is very likely to be fake hardware RAID. Look for something that says it does the processing onboard and also supports RAID 5 for something that is likely to be a real RAID card.
I'd like to make a correction - I'd say anything below a $200 price point is "fakeRAID". For example:

http://www.newegg.com/Product/Product.a ... 6816115027

The HighPoint RocketRAID 2310 is a software RAID solution, but supports RAID5. For a list of what cards have dedicated RAID processors and which don't - look here.

For anything but RAID 5 or 6, the onboard controller or a software RAID card will be fine. RAID 0, 1, and 10 don't require any parity calculations, so you're not going to get any noticeable boost in performance by getting a $300 hardware RAID card.

ddrueding1
Posts: 419
Joined: Sun Sep 19, 2004 1:05 pm
Location: Palo Alto, CA

Post by ddrueding1 » Tue Aug 28, 2007 12:53 pm

drees is correct that all the benefits of RAID cannot be achieved with the onboard controller, but that doesn't make it a bad idea. You won't see the 4x read, 2x write benefits; it will be more like 1.8x write/read with ~3-5% CPU load, but that isn't really a big deal. It will still be faster and more reliable than a single drive.

I'm not familiar with the TA690, but SATA RAID has had a bit of an evolution. Here is the brief history:

1. In the beginning, the SATA ports on motherboards were nothing more than integrated PCI expansion cards, requiring setup outside of the BIOS and a boot disk ("press F6...") to get it to install.
2. Then SATA was integrated into the BIOS, so SATA drives could be seen; but setting up a SATA RAID array still required a floppy during install.
3. Finally, Vista (and Ubuntu) allow installation to BIOS-configured RAID arrays without any issue whatsoever.

I presently have Vista Ultimate installed to a RAID-0 of SATA Raptors using the onboard controller and no special tinkering was required. The array has even survived a motherboard/chipset change.

colin2
Posts: 145
Joined: Wed Jul 11, 2007 2:40 pm
Location: Seattle

Post by colin2 » Tue Aug 28, 2007 1:10 pm

Hmm. Thanks. I'd be using Windows XP.

colin2
Posts: 145
Joined: Wed Jul 11, 2007 2:40 pm
Location: Seattle

Post by colin2 » Tue Aug 28, 2007 2:08 pm

Ok, a little more searching including this

ftp://ftp.bookpool.com/sc/44/0789735644.pdf

says yes it's a Windows issue but I have a fighting chance of making it work at installation if I have a disk with the SATA drivers...

It's weird though. The Biostar manual has a nicely written section explaining RAID concepts, but nothing about what you actually do to make it happen. If you're booting off a RAID array, presumably the RAID must be set up first.

ddrueding1
Posts: 419
Joined: Sun Sep 19, 2004 1:05 pm
Location: Palo Alto, CA

Post by ddrueding1 » Tue Aug 28, 2007 2:08 pm

As far as I know, any RAID array in XP will require a floppy disk (yes it must be a 3.5" floppy), with the correct files on it. You then press F6 during the initial stages of the setup and it detects the controller for you. The same ports when configured as single drives don't require this step.

Nick Geraedts
SPCR Reviewer
Posts: 561
Joined: Tue May 30, 2006 8:22 pm
Location: Vancouver, BC

Post by Nick Geraedts » Tue Aug 28, 2007 4:49 pm

You can also use a program like nLite to integrate your RAID drivers into the install disc. That's what I've done with both of my P5B systems - just integrate the textmode ICH8R drivers, and all goes well. :)

colin2
Posts: 145
Joined: Wed Jul 11, 2007 2:40 pm
Location: Seattle

Post by colin2 » Thu Aug 30, 2007 8:43 am

Thanks to both of you for the advice and encouragement and for turning me on to nLite.

Been looking at notebook drives again -- 160GB may be the sweet spot right now in size and GB/$ terms, and the Hitachis look attractive for their ultra-low power consumption.

didi
Posts: 62
Joined: Tue Aug 28, 2007 7:44 am

Post by didi » Thu Aug 30, 2007 10:12 am

I did some testing a while ago using four Samsung HM120JI 120GB drives.
I tried all kinds of configurations. RAID5 proved to be too slow (especially write speed) with software RAID controllers. I even tried Windows XP software RAID5 (officially not supported), which surprisingly worked better than with a RAID controller.
RAID10 was a whole different story: very fast indeed (for notebook drives, that is; don't compare to 3.5" RAIDs). RAID10 doesn't need the computing power RAID5 does, so even with low-end controllers it should be OK.

Even if speed isn't a big advantage, the redundancy really is. The drives are so much quieter and run way cooler. Less cooling needed, thus fewer/slower fans.

You might be right about the 160GB drives being the sweet spot right now; a few months ago, the 120GB's were it. Let's hope the 200GB's go mainstream within a few months ;)

Post Reply