RAID arrays: Portable between chipsets of the same Family?

Silencing hard drives, optical drives and other storage devices

Moderators: NeilBlanchard, Ralf Hutter, sthayashi, Lawrence Lee

smilingcrow
*Lifetime Patron*
Posts: 1809
Joined: Sat Apr 24, 2004 1:45 am
Location: At Home

RAID arrays: Portable between chipsets of the same Family?

Post by smilingcrow » Wed Dec 21, 2005 2:34 pm

RAID arrays: are they portable between chipsets of the same family/generation?

I’m considering building a RAID5 array now with an Intel chipset using the ICH7R south bridge. It’s dawned on me that you may only be able to move an array between two PCs with the same RAID controller, but with Conroe motherboards using the ICH8 south bridge due out in Q3 ’06, I’m wondering if there is any chance of the array being portable between the two related platforms?
The same question applies to other motherboard chipset families, such as NVidia’s for the Athlon 64 etc.
If the answer is no, then it will be such a hassle to rebuild the array that I doubt I will bother. The main point of building the array is so that I don’t have to restore my data from 40+ DVD-Rs in the event of a drive failure.

I looked at the option of using a RAID card, as that would surely be easy to migrate between systems. SATA RAID5 controllers seem to be either PCI-X or PCI Express (x4!), which raises issues of motherboard support and rather high pricing. I’m not sure if RAID5 on vanilla PCI (32-bit/33 MHz) would be too bandwidth-constrained for RAID usage, if such cards are even available?
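
For reference, the bus arithmetic is easy to sketch; a rough back-of-envelope in Python, where the per-drive transfer rate is only an assumed figure for current SATA drives:

Code:
# Plain PCI is a 32-bit data path at 33 MHz, shared by every device on the bus
bus_mb_s = 33.33 * 4          # ~133 MB/s theoretical ceiling
drive_mb_s = 55               # assumed sustained rate of one current SATA drive
drives = 4                    # RAID5 members
print(f"array streams ~{drives * drive_mb_s} MB/s vs ~{bus_mb_s:.0f} MB/s bus")
# -> three or four drives already swamp plain PCI on sequential transfers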

MoJo-chan
Posts: 167
Joined: Wed Apr 30, 2003 3:49 pm

Post by MoJo-chan » Sat Dec 24, 2005 2:39 pm

With RAID cards, arrays are usually highly portable. You can upgrade to a new model without problems. On-board RAID controllers are a different matter though.

For example, Promise on-board arrays are not usually compatible with Promise PCI cards, or even with different Promise chipsets.

One alternative is software RAID. Otherwise, get a board with PCI-X or PCI-e and a RAID card. That's what I did :)

smilingcrow
*Lifetime Patron*
Posts: 1809
Joined: Sat Apr 24, 2004 1:45 am
Location: At Home

Post by smilingcrow » Mon Dec 26, 2005 4:50 am

MoJo-chan wrote:With RAID cards, arrays are usually highly portable. You can upgrade to a new model without problems. On-board RAID controllers are a different matter though.

For example, Promise on-board arrays are not usually compatible with Promise PCI cards, or even with different Promise chipsets.

One alternative is software RAID. Otherwise, get a board with PCI-X or PCI-e and a RAID card. That's what I did :)
Thanks. I’ve looked into RAID5 and decided not to implement it right now. I’m going to wait for 64-bit 65nm dual-cores, and when I build a new system with one of those I’ll look again at the RAID5 situation. I figure that when I build that system I won’t be looking to upgrade for quite some time, so the portability of the RAID5 array becomes less of an issue.

mongobilly
Posts: 58
Joined: Tue Mar 22, 2005 11:54 pm

Post by mongobilly » Mon Dec 26, 2005 1:38 pm

If you don't plan on switching OS platforms anytime soon, software RAID solves that issue. I've found software RAID5 to be more reliable than any of the (S)ATA cards I've tested, even high-end stuff like the 3ware Escalade (!).

Only thing better, in my experience, are high-end SCSI cards, if you have the money. But those drives aren't silent; far from it.

MoJo-chan
Posts: 167
Joined: Wed Apr 30, 2003 3:49 pm

Post by MoJo-chan » Mon Dec 26, 2005 2:45 pm

Smilingcrow, your situation sounds similar to mine. I have just built a new PC, and I want it to last. My current one is an XP2100 with 1GB RAM, which has lasted me 3.5 years.

The new one is a 4000+ single-core Athlon 64, with 2GB RAM and an Areca 1220 PCI-e SATA2 RAID controller. I hope it will last me another 3.5 years at least, and the RAID card should last even longer. I'm very happy with it; calculating MD5 sums over files on NTFS benchmarks at 252MB/sec :)

I did think about using software RAID, but there are two disadvantages to it. First, you can't put your boot drive on a RAIDed partition. Second, software RAID is more prone to software errors, such as those caused by bad RAM.

Actually, I have seen arrays on cheap "win-RAID" controllers like the Promise ones, which basically rely on software anyway, destroyed by RAM faults.

One advantage of software RAID is that you can have different RAID levels for different partitions. RAID 0 for a temp drive, RAID 5 for data.

grandpa_boris
Posts: 255
Joined: Thu Jun 05, 2003 9:45 am
Location: CA

Post by grandpa_boris » Mon Dec 26, 2005 5:18 pm

DELL, which continually changes the components they use to build their systems, has been feeling this pain on their own and on their customers' behalf. they have been pushing for a hardware RAID interoperability standard through SNIA (working group's page). it's highly unlikely they are going to succeed to any significant degree any time soon.

software RAID solutions are hardware-agnostic, but currently no major vendor offers significant OS portability, or offers it at consumer-friendly pricing.

the simplest and most portable solution may be to use a networked storage approach: pick a software RAID solution and a convenient OS, build a RAID box, and use it as an external storage device, accessed either as an iSCSI target or a SAMBA/CIFS mount. currently, using iSCSI would be technically more challenging than SAMBA/CIFS. there are many guides on the web to building linux SAMBA servers, and just about every consumer-level OS is able to access a CIFS/SAMBA mount.

this way you get hardware independence and application/user-level OS independence, but sacrifice some performance (your mileage on this will vary, of course).
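
a minimal sketch of the client side in python, assuming the share is already mounted (the mount point below is made up): once mounted, the RAID box is plain file i/o to every application.

Code:
# once /mnt/raidbox (hypothetical) is mounted, e.g. via mount -t cifs,
# the networked RAID5 box behaves like any local directory
import os, shutil

MOUNT = "/mnt/raidbox"        # made-up mount point; a drive letter on windows

def archive(src):
    """copy a file onto the RAID box, preserving timestamps"""
    dest = os.path.join(MOUNT, os.path.basename(src))
    shutil.copy2(src, dest)   # plain file i/o; samba handles the transport
    return dest

archive("important-data.zip")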

smilingcrow
*Lifetime Patron*
Posts: 1809
Joined: Sat Apr 24, 2004 1:45 am
Location: At Home

Post by smilingcrow » Tue Dec 27, 2005 2:56 am

Thanks for all the replies.
MoJo-chan wrote:I did think about using software RAID, but there are two disadvantages to it. First, you can't put your boot drive on a RAIDed partition. Second, software RAID is more prone to software errors, such as those caused by bad RAM.
Actually, I have seen arrays on cheap "win-RAID" controllers like the Promise ones, which basically rely on software anyway, destroyed by RAM faults.
Yikes, that’s a scary thought; using ECC RAM makes a lot of sense if your RAID solution is not fully in hardware.
MoJo-chan wrote:One advantage of software RAID is that you can have different RAID levels for different partitions. RAID 0 for a temp drive, RAID 5 for data.
I took a look at the Areca 1220 that was mentioned in this thread and that may well offer this feature also.

On doing some research and digesting your replies, I’ve concluded that RAID5 is too much pain for not enough gain for my needs. I’m going to simply add a third drive and use it to mirror my important data, without using RAID at all. The data is fairly static, so I’ll just schedule an hourly copy of any data that needs archiving to the third drive.
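
Something like that hourly copy is easy to script; a rough sketch in Python with hypothetical drive paths (Task Scheduler or cron would run it hourly):

Code:
# Mirror any new or changed file from the data drive to the third drive.
# SOURCE and DEST are hypothetical paths.
import os, shutil

SOURCE = "D:/data"       # drive holding the live data
DEST   = "E:/mirror"     # the third drive

for root, dirs, files in os.walk(SOURCE):
    target_dir = os.path.join(DEST, os.path.relpath(root, SOURCE))
    os.makedirs(target_dir, exist_ok=True)
    for name in files:
        src = os.path.join(root, name)
        dst = os.path.join(target_dir, name)
        # copy if missing on the mirror, or if the source is newer
        if (not os.path.exists(dst)
                or os.path.getmtime(src) > os.path.getmtime(dst)):
            shutil.copy2(src, dst)   # copy2 preserves timestamps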

stretch
Posts: 9
Joined: Wed Jun 15, 2005 10:24 am

Worked with Promise -> SiL

Post by stretch » Tue Dec 27, 2005 9:21 pm

I happened to try two drives that had been used for raid0 striping in a machine with a Promise controller, and stuck them into a machine with a Silicon Image controller. To my surprise, the array showed up as "OK".

Both were configured (IIRC) with the default hardware RAID0 block size.

I didn't do extensive testing, but I was able to read a large chunk of data, as well as write a bunch of stuff to the array, no errors of any kind.

xarope
Posts: 97
Joined: Sat May 03, 2003 8:16 pm

Post by xarope » Tue Dec 27, 2005 10:50 pm

Actually, you can put your boot drive on RAID, e.g. Linux allows you to build a degraded RAID1 partition (i.e. with one drive), then on reboot you rebuild the RAID1 partition fully. I haven't tried it, but there are at least a couple of coherent and comprehensible write-ups to be googled.

Personally I am using an old AMD socket 754 MB to run a linux-based RAID5 fileserver with 4xSATA disks. It started off as a debian install and then moved to gentoo, showing that at least it is distro-compatible, and I'm pretty confident that if you move the disks to another MB it should still pick up the RAID5 md array (not so sure about moving platforms though, e.g. from x86 to powerpc => endian change; maybe one of these days I'll try with qemu and powerpc/sparc/x86). I posted about my experiences in these forums, so you can search for it.
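
The endian worry is about raw integers in the on-disk metadata; a two-minute illustration in Python, with an arbitrary example value:

Code:
# metadata written as raw little-endian integers by an x86 box reads
# back wrong on a big-endian machine unless something byte-swaps it
import struct

value = 0x00040000                      # e.g. a chunk size in bytes
on_disk = struct.pack("<I", value)      # as the little-endian x86 wrote it

print(struct.unpack("<I", on_disk)[0])  # 262144 -- correct on x86
print(struct.unpack(">I", on_disk)[0])  # 1024   -- naive big-endian read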

(BTW for those who did find my post and my problems with Cool N Quiet on my MB, it is now solved by the latest gentoo kernel (with no need for ACPI/DSDT work), 2.6.14-rc5.)

Windows software RAID, on the other hand, has been shown to have terrible performance, so I'd stay away...!

Oh, and one of the reasons I ended up with a SW RAID solution was the same question you posed, or in my case: what if, after a few years, my RAID card dies and I can't source a replacement, or it's no longer compatible with my OS/MB/etc... so I went SW RAID!

MoJo-chan
Posts: 167
Joined: Wed Apr 30, 2003 3:49 pm

Post by MoJo-chan » Wed Dec 28, 2005 5:20 am

xarope: Sounds like a nice setup; I'd love to be able to do something like that myself. I don't really know enough about Linux, though, and I've never had much luck with Samba.

smilingcrow: Yep, the Areca cards use ECC RAM for that very reason.

Ultimately, RAID isn't really about security. You still need to back up. What it does offer is reliability (i.e., you can lose a disc and not have to restore from backups or even suffer significant downtime) and performance (722MB/sec in ATTO!)
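
As a toy illustration of the "lose a disc" point: RAID5 parity is just the XOR of the data blocks, so any one missing block can be rebuilt from the survivors. A sketch in Python (an illustration only, nothing like a real driver):

Code:
# Toy RAID5 stripe: parity = XOR of the data blocks, so any single
# lost block is recoverable from the remaining blocks plus parity.
import functools

def xor_blocks(blocks):
    return bytes(functools.reduce(lambda a, b: a ^ b, chunk)
                 for chunk in zip(*blocks))

d0, d1, d2 = b"AAAA", b"BBBB", b"CCCC"   # data blocks on three drives
parity = xor_blocks([d0, d1, d2])        # stored on the fourth drive

rebuilt = xor_blocks([d0, d2, parity])   # the drive holding d1 has died
assert rebuilt == d1
print("lost block recovered:", rebuilt)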

grandpa_boris
Posts: 255
Joined: Thu Jun 05, 2003 9:45 am
Location: CA

Post by grandpa_boris » Wed Dec 28, 2005 8:56 am

MoJo-chan wrote:I did think about using software RAID, but there are two disadvantages to it. First, you can't put your boot drive on a RAIDed partition.
this isn't a fundamental technological problem, but rather a limitation of insufficiently well-thought-out implementations. there are (enterprise-level) software RAID products that allow at least RAID-1 of a boot drive. it doesn't help you here, but i just want to make certain that people realize this is not a pathological limitation of software RAID in general.
MoJo-chan wrote:Second, software RAID is more prone to software errors, such as those caused by bad RAM.
i would argue that if a system has bad RAM, data corruption may occur whether RAID is in software or in hardware. after all, a program writing a corrupted memory block through a robust hardware RAID controller isn't going to leave data in any better shape than a program writing a perfectly good block of data through a RAID driver that ends up doing something very bad because it allocated memory in a bad RAM page. once you start considering memory corruption caused by bad RAM, all bets are off.

there is some academic work that is, in fact, striving to come up with ways to build "robust" computing systems that can survive disasters like RAM flaking out without corrupting data, but this research is still very young.

smilingcrow
*Lifetime Patron*
Posts: 1809
Joined: Sat Apr 24, 2004 1:45 am
Location: At Home

Post by smilingcrow » Wed Dec 28, 2005 9:31 am

MoJo-chan wrote:smilingcrow: Yep, the Areca cards use ECC RAM for that very reason.
I was thinking more in terms of the desirability of the main system RAM being ECC when using software RAID. Losing any files to data corruption caused by RAM errors would be bad enough, but losing a whole array would be a major hassle.
MoJo-chan wrote:Ultimately, RAID isn't really about security. You still need to back up. What it does offer is reliability (i.e., you can lose a disc and not have to restore from backups or even suffer significant downtime) and performance (722MB/sec in ATTO!)
I was looking at it from a similar perspective. I have a large amount of data that has recently had its metadata changed. It’s not a big enough change to the data to warrant backing it all up again to 45+ DVD discs, but enough of a change to warrant making a copy on a second hard drive. Since that means adding a third system disc, I thought I’d look at RAID5.
Ultimately, I will probably go for RAID5 on my next system, but only after backing up my data to one of the next-generation DVD formats. That will leave only about 10-15 discs’ worth of data, which is more manageable for archiving and restoration purposes. With that done, the lack of portability of the RAID array becomes much less of an issue. I’m quite sold on the idea of RAID5 now.

xarope
Posts: 97
Joined: Sat May 03, 2003 8:16 pm

Post by xarope » Wed Dec 28, 2005 6:34 pm

Correct, RAID is HA, not backup, so there are semantic distinctions... although with the cost of high-capacity tape drives, I end up using a multi-tier backup-to-disk solution instead (multi-point backup to local disk using rsync, w/ 4-hourly/daily/weekly snapshots, mirrored via samba to the RAID5 array).

With this setup, I have still found file corruption, but the multi-point (i.e. 4-hourly) backups allow me to at least get back a generation or two of that particular file (or if it's less than 4 hours old, chances are it can be regenerated, as it's still fresh in my and my wife's minds... I'm talking about 100+MB powerpoint presentations etc, not stuff you can download again from the internet).
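
The trick that keeps that many generations cheap on disk is hard-linking unchanged files between snapshots; roughly, each 4-hourly run does something like this sketch (paths and retention count below are just placeholders, and rsync must be installed):

Code:
# One snapshot rotation: expire the oldest generation, then rsync the
# live data into a new dated directory, hard-linking anything that is
# unchanged since the previous snapshot (so it costs almost no space).
import os, shutil, subprocess
from datetime import datetime

LIVE, SNAPDIR, KEEP = "/home/data", "/backup/4hourly", 6   # placeholders

os.makedirs(SNAPDIR, exist_ok=True)
snaps = sorted(os.listdir(SNAPDIR))
while len(snaps) >= KEEP:                      # expire oldest generations
    shutil.rmtree(os.path.join(SNAPDIR, snaps.pop(0)))

dest = os.path.join(SNAPDIR, datetime.now().strftime("%Y%m%d-%H%M"))
cmd = ["rsync", "-a", "--delete"]
if snaps:                                      # link against newest snapshot
    cmd += ["--link-dest", os.path.join(SNAPDIR, snaps[-1])]
subprocess.run(cmd + [LIVE + "/", dest + "/"], check=True)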

No off-site archive, no tape backup... there are limitations to doing this at home you know!

bexx
Posts: 75
Joined: Mon Dec 09, 2002 12:17 am

Post by bexx » Wed Dec 28, 2005 8:00 pm

Well, I just finished setting up my new computer, all done with Linux software RAID. I actually used EVMS to build and manage everything, and it was all pretty easy to do. Root and boot partitions are RAID1, and then I have a 4-drive RAID5 array for storage. I actually built it as a 3-drive array and used EVMS to expand it to 4 drives... a 19-hour process, but it worked. Even if the power goes out, on the next reboot the expand will be undone back to the original state... that's pretty impressive IMO. Right now I have 4x250GB drives, and in a few months I'll make it 6x250GB in RAID5... and eventually I'll move it to RAID6 when EVMS adds support.

This isn't 100% protection, but it is vastly safer than my previous setup with no backup/redundancy. If I had money I would love to play with more toys and keep copies elsewhere to rsync with, but I just don't. Bang-for-my-buck, software RAID5 is as good as it gets. Actually, DVD-Rs would probably be best, but I'm not going to burn 100 DVDs. Eventually I hope to get the energy to use DVDs, because it would make me feel better... I just haven't found the right program to help automate it. I've done it manually before and it sucked... in Windows I'd take ~4.3GB of files, put them in a zip file (no compression), then create a 100MB par2 file in case part of the DVD gets scratched or is burned with errors... and then burn the image and do it all again.
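
That manual routine is very scriptable; a rough sketch in Python of the same steps (assumes the par2 command-line tool is installed and on the PATH; paths and sizes are made up):

Code:
# Group files into ~4.3GB sets, store each set in an uncompressed zip,
# then generate ~5% par2 recovery data against scratches/burn errors.
import os, subprocess, zipfile

SRC, OUT = "D:/archive", "D:/dvd-sets"       # made-up paths
DVD_BYTES = int(4.3 * 10**9)

os.makedirs(OUT, exist_ok=True)

def make_set(batch, disc):
    name = os.path.join(OUT, f"disc{disc:03d}.zip")
    with zipfile.ZipFile(name, "w", zipfile.ZIP_STORED) as z:  # no compression
        for f in batch:
            z.write(f, os.path.basename(f))
    subprocess.run(["par2", "create", "-r5", name + ".par2", name], check=True)

batch, size, disc = [], 0, 1
for f in sorted(os.path.join(SRC, n) for n in os.listdir(SRC)):
    fsize = os.path.getsize(f)
    if batch and size + fsize > DVD_BYTES:   # current set is full
        make_set(batch, disc)
        batch, size, disc = [], 0, disc + 1
    batch.append(f)
    size += fsize
if batch:
    make_set(batch, disc)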

Anyways, yeah... my only problem with the high-end RAID controllers, the ones that are real hardware RAID, not proprietary software RAID ;P, is that they cost more than the drives. I'd rather just buy twice as many drives and build software RAID10 arrays. Actually, that would be even safer, because potentially you could survive multiple drive failures, vs. a max of one with RAID5.

xarope
Posts: 97
Joined: Sat May 03, 2003 8:16 pm

Post by xarope » Wed Dec 28, 2005 10:00 pm

bexx, for my curiosity, what distro did you use? debian/gentoo/centos/some other strange distro (puppy :-))?

bexx
Posts: 75
Joined: Mon Dec 09, 2002 12:17 am

Post by bexx » Thu Dec 29, 2005 12:29 am

Gentoo, of course ;)

MoJo-chan
Posts: 167
Joined: Wed Apr 30, 2003 3:49 pm

Post by MoJo-chan » Fri Dec 30, 2005 9:16 am

grandpa_boris wrote:i would argue that if a system has bad RAM, data corruption may occur whether RAID is in software or in hardware. after all, a program writing a corrupted memory block through a robust hardware RAID controller isn't going to leave data in any better shape than a program writing a perfectly good block of data through a RAID driver that ends up doing something very bad because it allocated memory in a bad RAM page. once you start considering memory corruption caused by bad RAM, all bets are off.
Sure, sure, but I think maybe I didn't make myself very clear. For example, I have seen Promise controllers lose RAID arrays with bad RAM. I mean they actually corrupt the array metadata on the drives. Once, I simply recreated the array (RAID0) in the same configuration as it had been and the machine booted perfectly, but it was still quite scary. On-board RAID tends to be quite basic and not very robust.

We are of course talking about home use here. All I can say is roll on 250GB holographic discs.
