A question on RAID
Moderators: NeilBlanchard, Ralf Hutter, sthayashi, Lawrence Lee
Hey guys, I was considering running RAID 0 on two of my hard drives off the motherboard. I was just wondering: if the motherboard fries, would there be any way for me to recover the data on the two hard drives without having to buy the exact same motherboard? Would it be possible to recover the data using a program of some sort? Thanks
RAID0 has such a simple structure (just interleaved stripes) that IMHO you should be able to recover the data easily, at least if your new motherboard has a similar chipset. I mean here onboard RAID provided by NVidia or Intel RAID-capable chipsets, not by 3rd-party chips or controllers.
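To illustrate how simple that structure is, here's a hypothetical sketch (my own illustration in Python, not any vendor's actual on-disk format) of how a logical block maps to a disk and offset in a two-disk RAID0 array; because the mapping is this mechanical, recovery tools can often reassemble the stripes:

```python
# Hypothetical sketch of RAID0 block mapping (not any vendor's on-disk format).
# Logical blocks are interleaved across disks in fixed-size stripes.

def raid0_locate(logical_block, num_disks=2, stripe_blocks=128):
    """Map a logical block number to (disk index, physical block on that disk)."""
    stripe = logical_block // stripe_blocks   # which stripe the block falls in
    offset = logical_block % stripe_blocks    # position within that stripe
    disk = stripe % num_disks                 # stripes rotate across the disks
    physical_stripe = stripe // num_disks     # stripe index on that one disk
    return disk, physical_stripe * stripe_blocks + offset

# Stripe 0 lands on disk 0, stripe 1 on disk 1, stripe 2 back on disk 0, and so on.
print(raid0_locate(0))     # (0, 0)
print(raid0_locate(128))   # (1, 0)
print(raid0_locate(256))   # (0, 128)
```

Given the array's stripe size and disk order, every logical block's location follows from this arithmetic alone, which is why no parity or metadata reconstruction is needed.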
And most probably your disk(s) will die long before the motherboard anyway.
But, as Matija already said, there's little reason to use RAID0 in a current desktop environment. You may gain some read/write speed for big files, but then you'd be better off using a separate disk for the system and swap file; otherwise Windows swapping and system access voids most of the performance benefit.
Well, if the motherboard RAID and driver can use the disks asynchronously and reorder seeks (like expensive server RAID controllers do), then things are much better; unfortunately I don't know those kinds of details. If someone has such data, I'd be very interested!
Oh? I did not know that. Is there any way to improve read/write speed a reasonable amount using two hard drives with another form of RAID? (I planned on using two Raptors.) And I'm not sure that I understand this article, but http://www.tomshardware.com/2007/03/12/ ... page7.html seems to indicate that two Raptors in RAID 0 read and write faster than one alone, no?
-
- SPCR Reviewer
- Posts: 561
- Joined: Tue May 30, 2006 8:22 pm
- Location: Vancouver, BC
Matija wrote:There are no real performance benefits from RAID-0, anyway.
How can people say this? RAID0 does provide a performance benefit when dealing with lots of disk access, but for most people, when they're typing documents, surfing the web, etc., RAID won't do anything, since those activities don't stress the hard drive subsystem.
To give you an idea of the performance benefit - my main system will load Photoshop CS3 from a cold boot in about 5 seconds. I have yet to see any single hard drive system match that "real-life" performance. You gain significant performance for reading and writing large files - when I'm transferring files over my gigabit network, I'm actually network limited - that's over 115MB/s. Here's a screenshot of my HDTach results for my RAID0 array.
RAID0 does provide a benefit, if your work is disk limited. Just like with any system upgrade/modification, you'll see the most benefit if you're upgrading the bottleneck.
RAID1 can potentially provide better read speeds if the controller handles reads from both drives, but write performance is slightly worse than a single drive. To get the best of both worlds, you'd need RAID10, which when properly implemented provides better overall performance than any other RAID level.
cowsled wrote:And I'm not sure that I understand this article but, http://www.tomshardware.com/2007/03/12/ ... page7.html seems to indicate that two raptors on RAID 0 seem to work faster read and write than one alone no?
This article is a bit misleading, or more precisely, there's some information missing.
Hard disk performance is largely determined by two parameters:
1. seek time
2. sequential read/write speed
RAID0-ing drives increases sequential read speed up to 2x, but doesn't decrease seek time (except via asynchronous access, which may or may not be possible with onboard RAID).
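A rough back-of-the-envelope model (my own simplification, with illustrative numbers, not measured figures) shows why striping helps sequential transfers far more than random access:

```python
# Toy model: access time = seek time + transfer size / sequential throughput.
# The numbers are assumptions, roughly plausible for a 2007-era 7200rpm drive.

SEEK_MS = 13.0        # average seek + rotational latency, ms (assumed)
MBPS_SINGLE = 70.0    # sequential throughput of one drive, MB/s (assumed)

def access_ms(size_mb, seek_ms, mbps):
    """Time in ms to seek once and then transfer size_mb megabytes."""
    return seek_ms + size_mb / mbps * 1000.0

# Two-drive RAID0: throughput roughly doubles, seek time does not change.
for size_mb in (100.0, 0.004):   # a big file copy vs. a 4KB random read
    single = access_ms(size_mb, SEEK_MS, MBPS_SINGLE)
    raid0 = access_ms(size_mb, SEEK_MS, 2 * MBPS_SINGLE)
    print(f"{size_mb} MB: single {single:.1f} ms, "
          f"RAID0 {raid0:.1f} ms ({single / raid0:.2f}x speedup)")
```

With these assumed numbers, the 100MB transfer comes out nearly 2x faster under RAID0, while the 4KB random read improves by well under 1%, because its time is almost all seek.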
Some additional notes:
a) Raptor drives seek almost twice as fast as ordinary consumer HDDs, including the newest ones.
b) Raptor drives have better sequential read/write speed than other drives with "small" platters (up to 166GB).
c) Raptor drives have about the same sequential read/write speed as newer HDDs with big platters (200GB and more).
Looking at the linked article (pages 7-8), it is clearly visible that sequential read/write operations are much faster for RAID and Raptors. In real life such sequential access occurs while copying large files (not within the same disk/array!), capturing/reading uncompressed video, etc. Reading lots of small files from nearby locations can be almost sequential; this may occur on a freshly installed system, but rarely.
On page 9 there are somewhat more interesting graphs, simulating real-world situations. Servers mostly read, and RAID has some advantage there; Raptors are clearly faster due to their low seek times too. The database and workstation graphs indicate that for slow-seeking drives, RAIDing doesn't yield any reasonable performance gain.
On page 10, the Windows bootup and SYSmark simulations are almost unaffected by HDD subsystem performance.
IMHO the workstation performance graph (p. 9) is closest to real-life disk-intensive usage (excluding some kinds of video processing, which are faster due to large sequential reads or writes).
Unfortunately there was no comparison with new, faster HDDs.
Disclaimer
All of the above is my subjective opinion only.
I run two PATA drives as RAID 0. More because I like the consolidated size than the increase in speed. Which is just as well as I noticed no increase in speed. Although there seems to be a view that the drives should be on different channels (ie, not just set up as master and slave on the same channel). However, (personally) having tried all combinations it made no real difference.
If you just want to speed up your HDD subsystem, another option is to go with SCSI drives.
Access times start at 8ms and go down, and bandwidth for a single drive is 70MB/s.
The drawback is that SCSI drives are spendy. You will also need to buy a controller, since most motherboards don't come with one. But in the end, you get what you pay for.
RAID0 has so simple structure (just interleaved stripes) that IMHO you should be able recover data easily, at least when your new motherboard has similar chipset. I mean here RAID, provided by NVidia or Intel RAID capable chipsets, not by 3rd party chips or controllers.
RAID 0 is for performance enhancement (to whatever degree) only. It provides no redundancy and no protection against data loss. With no parity data available, if one drive fails there is no sure way to reconstruct the data using only the second drive. And the chipset has nothing to do with the protocol, be it fake-RAID like nVidia and some Intel, or true hardware RAID controllers.
The lowest level RAID to use for data protection is RAID 1.
Re: A question on RAID
cowsled wrote:If the motherboard fries, would there be any way for me to recuperate the data the two harddrives without having to buy the exact same motherboard?
I wish I could find a good article I read on this. I believe it was Tom's Hardware that performed the experiment of trying to read RAID volumes across different South Bridges. Basically, cross-compatibility is pretty poor between manufacturers. If you build a RAID volume on an Intel ICH-based board, don't expect to read it with an nForce or VIA board.
To give a general answer to your question: you wouldn't need the exact same mobo, but you'd probably need one with the same brand/model of southbridge.
EDIT: Here it is.
follow wrote:I run two PATA drives as RAID 0. More because I like the consolidated size than the increase in speed. Which is just as well as I noticed no increase in speed. Although there seems to be a view that the drives should be on different channels (ie, not just set up as master and slave on the same channel). However, (personally) having tried all combinations it made no real difference.
If you're running two drives in RAID0 on the same PATA channel then you'll get zero performance gain, if not a performance hit. PATA controllers need to switch between the drives they're accessing, negating any advantage you'd get from a RAID0 setup. The beauty of RAID0 is the ability to use concurrent I/O on multiple drives.
Achieving Better Performance
If you really want to improve your HDD performance, you'd best address the issue the way enterprise-class devices do: implementing cache that can be dedicated to read-aheads and coalesced writes.
RAID5 and RAID0 will both help with reads; RAID0 will help with writes, but RAID5 carries a write penalty (every write also requires reading and updating the parity data). RAID implementations are primarily for data protection. The real problem is the inherent speed difference between silicon and mechanical spinning disks. Even with a little cache on the disk or controller, and multiple platters and heads, this differential is simply too great.
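To see where that write penalty comes from, here's a minimal sketch of RAID5-style XOR parity (purely illustrative, not any controller's actual implementation or layout):

```python
# Minimal sketch of RAID5-style XOR parity (illustrative, not a real controller).
from functools import reduce

def parity(blocks):
    """Parity block = byte-wise XOR of all the given blocks."""
    return bytes(reduce(lambda a, b: a ^ b, chunk) for chunk in zip(*blocks))

data = [b"AAAA", b"BBBB", b"CCCC"]   # data blocks on three disks
p = parity(data)                     # parity block on a fourth disk

# Reads stay cheap: any one lost block is recoverable by XOR-ing the survivors.
recovered = parity([data[0], data[2], p])
assert recovered == data[1]

# Small writes are expensive: updating one block means reading the old data
# and old parity, XOR-ing, then writing both (the read-modify-write penalty).
new = b"DDDD"
new_parity = parity([p, data[1], new])   # old parity ^ old data ^ new data
data[1] = new
assert new_parity == parity(data)
```

The extra read-then-write of the parity block on every small update is exactly why software or cheap onboard RAID5 writes so slowly, while a controller with dedicated cache can batch that work.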
That's why if you buy a storage sub-system from EMC, IBM, or HDS, they will have a dedicated controller with dedicated cache. Obviously out of range for most of us.
The good news is that a company called Datacore has a product called UpTempo which uses part of your system memory and dedicates it to I/O. It uses a predictive algorithm to read ahead and pull blocks off your HDD ahead of time so you only have to read out of memory as opposed to off the disk.
I use it and it really works, but *only* when the disk is the bottleneck (which in modern systems is a large percentage of the time). When there is no storage bottleneck, there is no gain, but also no penalty. (Although I guess you could argue that point since some of the system memory which would normally be available is no longer, since it has been dedicated to I/O.)
Anyway, in my system, it makes a dramatic difference in streaming applications, high transactional apps, and low level scans.
Hi, I've read all the articles etc. on RAID and had come to share the opinion that there was generally little benefit to RAID0 (and increased risk of data loss), but one time I tried it out.
When I built my current PC I did several trial runs before building the final config. The first try was a Core 2 E6600, 2GB RAM and a 500GB WD 5000AAKS, set up with XP etc. to test all the hardware. I also ran a kind of benchmark using a program called Igloo (I doubt you'll find a price on the site, but I believe it's around £100,000/seat!) that a friend works with for serious financial modelling. Using a big model with a small number of iterations gave a run time of, IIRC, ~8 minutes. Heavy overclocking (~50% more CPU MHz) would bring it down to ~6 minutes. The test was by turns CPU-limited and HDD-limited (and RAM-quantity-limited).
I also had a 300GB Maxtor DiamondMax 10 HDD to hand, so I set up a ~600GB RAID0 array across both (note these are significantly different disks from different manufacturers, but both 7200rpm with 16MB cache). This was on an Intel ICH8 southbridge. The first thing I noticed was that the XP install was quicker; it's generally acknowledged that things need to be >10% faster for humans to notice. When we re-ran the tests with Igloo, the RAID array knocked 1-1.5 minutes off the run times, quite an improvement on 6-8 minute runs. The upshot of the testing was that the new PCs he (and co-workers) got at work ended up specified with a Raptor as a 2nd HDD to use as an Igloo scratch disk.
XP was noticeably more snappy with the array too, vs the single disk. Of course most of the time the system isn't disk (or anything, these days) limited, but when it is disk-limited it's typically for seconds, and you can notice 3s vs 5s of disk activity, mostly loading Windows, large games or other large programs. I wouldn't say it's for everyone; the added cost, complexity, risk of data loss and noise (this is SPCR!) mean it's not worth it for the general public. But if you're an enthusiast and understand what you're taking on, or have some specific very disk-limited tasks, then there is a benefit to be had.
If only I had the space and money for three more 5000AAKS and SQDs…
Note RAID 5 has poor write speed unless you have a very serious (read: expensive) hardware RAID controller.
Regards, Seb