New build: all figured out, except system drive

Silencing hard drives, optical drives and other storage devices

__Miguel_
Posts: 140
Joined: Tue Apr 29, 2008 7:54 am
Location: Braga, Portugal

New build: all figured out, except system drive

Post by __Miguel_ » Tue Apr 29, 2008 1:08 pm

Hi guys.

Long-time SPCR reader, occasional forum scavenger when looking for info on silent gear. I finally have a problem I can't solve on my own just by searching the forums and the site, so I'm here to ask for some help (I finally grew some sense and registered, too... hehehe).

So, this is the deal: I'm building a full-blown NAS. Not your "BYOD HDD case", or a pre-assembled multi-drive "thing". Those are either too risky for data (single drive), too small, WAAAAY too noisy (I want to be able to WORK while the thing is on...), too slow, or so expensive (and still slow) I'd have to sell a couple of organs just to get one.

So, I started thinking about it (and saving money :lol:), and came up with these specs:

- Custom-made case (either wood or aluminum), to fit where I need it to fit;
- Full suspension on drives, to prevent case resonance;
- Motherboard: GA-G33M-DS2R (I need RAID-5, and I also want to be able to upgrade to a 10-drive config by using a RAIDCore card, if possible)
- CPU: E1200
- Cooler: probably severely undervolted stock, or an Alpine 7 (not too much space available, and I want low vibration);
- RAM: I'm starting with 2x2GB, later I'll probably try 4x2GB (depending on server load, I'll try reusing it for more than just file server duties);
- PSU: the quietest I can get my hands on;
- Storage HDDs: 753LJs (3, for starters) or 750GB GPs (depending on price and availability... sub-1TB GPs are hell to get hold of here in Portugal, and unlike the 753s, which are sub-€100, the 7500AACS goes for at least €115 each...)

My biggest problem will be the system drive. I want to leave the system running at all times if possible, which means the system drive will be running most of the time (or at least spinning up occasionally), maybe for maintenance routines or to download something.

So I'm torn on which system drive to choose (I won't install my OS on a software RAID-5 array; its performance is limited as it is...). My options are:

- 160GB SATA WD 5400rpm 2.5" drive (I recently had one installed in my laptop after the previous Seagate and Hitachi drives failed, and I can barely hear it compared to the other two...), the drawback being that it seems rather slow and chokes up really fast when multitasking - I'm afraid it will have VERY slow startup times; also, I'd lose a SATA port that could be used for another storage drive in the future...

- 160GB SATA/PATA WD 3.5" drive (1600AAJS or AABS, the second being the PATA version of the first), only I don't really know how they fare noise- and vibration-wise, and there's the SATA port issue on the AAJS again; on the other hand, they're dirt cheap right now (~€50, if memory serves), and I don't think I'd need more for a system drive.

- 320GB SATA/PATA Samsung 3.5" drive (321KJ or its PATA sibling), since I know they're THE quietest drives I've EVER owned (I currently have two); also very cheap, around €60 (actually, its price has tanked over the last two weeks...).

- 500GB GP (not sure if there is a PATA version of this one), with prices starting in the high €70s (usually low €80s) and VERY limited availability.

If possible, I'd like to leave as many SATA ports free as I can, so I can grow the RAID-5 array as I wish. However, with some configurations at least one of the ports is lost (and PATA drives are actually becoming more expensive than the SATA versions, not to mention that getting a 2.5" PATA drive onto a regular IDE port is just nuts... I can't find the adapter anywhere).

Also, I'd like to be able to sleep in the same room as the server... hehehe

So, there you have it. Which drive would you recommend? Thanks in advance for any help.

Cheers.

Miguel

dhanson865
Posts: 2198
Joined: Thu Feb 10, 2005 11:20 am
Location: TN, USA

Post by dhanson865 » Wed Apr 30, 2008 6:56 am

If performance is important for the system drive and you don't do RAID 1 or RAID 10 on the system volume, I'd recommend a WD6400AAKS if you go SATA, or a WD4000AAKB if you absolutely decide you need PATA.

If price is more important I see no reason to avoid the 160GB WD if you are going to suspend it.

If you have a few hours to kill I'd recommend reading the content and links in viewtopic.php?p=388987.

You should seriously avoid RAID 5 once you get up into the multi-terabyte range. Rebuild times can stretch into days with enough drives, and you could lose the whole array if a second drive fails during a rebuild.

Also consider that if you do RAID 1 or RAID 10, there is no reason not to use part of the array(s) for your boot volume, and you can then avoid using a PATA drive.

If you don't mind the forced partitioning, you can add Green Power drives in pairs as RAID 1 or in quads as RAID 10.

If you put them all in at one time, you can stripe up to 10 drives into a single RAID 10 array. If the motherboard won't do RAID 10, you just make 5 RAID 1 arrays in the RAID controller and then let the OS stripe them.

If you really don't want to do RAID 1 or RAID 10, you seriously need to look into a RAID controller that can do RAID 6. It just isn't worth messing with RAID 5 on large arrays nowadays.

Oh, and if for some reason you decide to do RAID 5 anyway, consider splitting it up into multiple RAID 5 arrays instead of growing into a single 10-disk RAID 5 array. With two 5-disk arrays you at least don't have to wait as long on a rebuild when one drive fails as you would with a 10-disk RAID 5. Heck, if you're doing it on the cheap you could put the boot volume on RAID 1 and have two 4-disk RAID 5 arrays.

__Miguel_
Posts: 140
Joined: Tue Apr 29, 2008 7:54 am
Location: Braga, Portugal

Post by __Miguel_ » Thu May 01, 2008 2:02 am

First of all, sorry for the delay. I was away for the whole afternoon yesterday, I got home late, and I didn't have the time (or clarity of mind) to go through all that stuff.

I must say it was a very good read. Some of it I already knew from my other endeavours (that is, a thread about RAID-5 back on XS, where I first started discussing how I would implement this thing), and some of it was new and scared me s***less... :( :? I mean, the more I read about RAID-5, the more afraid I am to use it... And I simply can't afford a controller card right now (I don't think I ever will), much less a hardware RAID-6 one (€400+ is like half to two-thirds of my budget...).

Unfortunately, I do have two concerns :? about RAID-10... First, the insane hit to usable disk space... a fixed 50% is rather harsh... Second, I don't know how it would be implemented...

Let me explain: since there are in fact two levels involved, a 4-drive system can be configured in two ways: a pair of RAID-1 arrays striped with RAID-0, or a pair of RAID-0 arrays mirrored with RAID-1. The difference is that the first should (at least theoretically) survive a two-drive loss (one on each RAID-1 array), while the second will only survive a single drive being lost, and has longer rebuild times.

I don't know how Intel implemented "RAID 10", so I worry about that (or whether a 6-drive RAID-10 array, or 3 RAID-1 arrays at the same time, is even possible, for that matter). Also, I don't know if it would be a good idea to go "single-array" with RAID-10 (meaning, just get 4 identical drives and put both the boot partition and the data partition on the same array).

RAID-10 would open up a few possibilities, though, since I could use a 780G-based board instead if the answers to the above questions are satisfactory (it's a file server, so I'd be fine with just one of the new low-power dual-core Athlons, which would probably cost me about the same as the Intel build and have lower power consumption).

So, more input on RAID-10 is appreciated. I think I'd be fine if I could simply have as many RAID-1 arrays as there are pairs of SATA ports on the controllers (meaning I'd leave one pair available for the boot RAID-1, or possibly an e-SATA drive). That's probably the best idea, too :P

Apart from that, performance on the system drive is not all that important. I just don't want the sluggishness I feel when KAV is running on my laptop (the "concede resources" option is a joke, I guess... lol). If the array can grow just by adding drives, I think I'll also be fine. Probably the best way to do this is to use Matrix Storage Manager to create RAID-1 arrays and then add them to a RAID-0 array (I'm not sure it's possible to go from a simple dynamic disk to a RAID-0 array on W2K3/W2K8, though I think it's possible to expand a RAID-0 array).

Now for the disk suggestions you made:

1) The only 320GB/platter drives available right now here in Portugal are the 750GB and 1TB Samsung F1s, so the 6400AAKS is a no go, unfortunately. I'd like to see how much one of those would cost, though... I might be tempted... :roll:

2) The 4000AAKB is also not available, apparently... Just great, every good drive is a no go... :evil:

Also, from a lot of googling (and checking the ISM help file) while writing this reply, it seems that 1) Matrix Storage only supports one RAID-1 array or a single RAID-10 array (which is weird...), and the RAID-10, btw, seems to handle a two-drive failure; and 2) RAID-10 is not available in Windows 2003, so I guess in a worst-case scenario I'd have to go AHCI-only on the motherboard controller (which would open up a few more options) and do multi-array software RAID in Windows 2K3 (not all that bad, I suppose, but I would lose some of the performance RAID-10 brings...).

Also, as a last note, I'll mostly need sequential reads out of the storage arrays, with occasional sequential writes, and very few random writes and/or reads.

Any additional help will be welcome.

Cheers.

Miguel

dhanson865
Posts: 2198
Joined: Thu Feb 10, 2005 11:20 am
Location: TN, USA

Post by dhanson865 » Thu May 01, 2008 6:07 am

__Miguel_ wrote:
I must say it was a very good read. Some of it I already knew from my other endeavours (that is, a thread about RAID-5 back on XS, where I first started discussing how I would implement this thing), and some of it was new and scared me s***less... :( :? I mean, the more I read about RAID-5, the more afraid I am to use it... And I simply can't afford a controller card right now (I don't think I ever will), much less a hardware RAID-6 one (€400+ is like half to two-thirds of my budget...).
Thanks for the compliment; honestly, the RAID 5 stuff is supposed to scare you. RAID 0 is a major increase in the risk of data loss that most people see up front. RAID 5 is an increased risk of data loss that most people ignore, because a single drive failure sounds easier to work around than it actually is when you have performance concerns or TBs of data.
Unfortunately, I do have two concerns :? about RAID-10... First, the insane hit to usable disk space... a fixed 50% is rather harsh... Second, I don't know how it would be implemented...

Let me explain: since there are in fact two levels involved, a 4-drive system can be configured in two ways: a pair of RAID-1 arrays striped with RAID-0, or a pair of RAID-0 arrays mirrored with RAID-1. The difference is that the first should (at least theoretically) survive a two-drive loss (one on each RAID-1 array), while the second will only survive a single drive being lost, and has longer rebuild times.
Most people build two RAID 1 arrays and then stripe them into RAID 10. When it's done the other way around it is usually called RAID 0+1, not RAID 10, to avoid confusion. There can be reasons to use RAID 0+1, but 90-odd percent of the time RAID 10 is better all around, so I usually don't even discuss the 0+1 configuration.
I don't know how Intel implemented "RAID 10", so I worry about that (or whether a 6-drive RAID-10 array, or 3 RAID-1 arrays at the same time, is even possible, for that matter). Also, I don't know if it would be a good idea to go "single-array" with RAID-10 (meaning, just get 4 identical drives and put both the boot partition and the data partition on the same array).
You have to assume "RAID 10" means mirrors first (for redundancy), with the stripe on top of that.

Let me tell you about a scenario I recently did and see if it clears it up for you.

I had a box with 6 hard drives: 2 medium-speed 73GB drives, 1 medium-speed 18GB drive, and 3 slow 9GB drives.

I tried setting it up with two 73GB drives in RAID 1 for drive C:, the 18GB drive and one 9GB drive in RAID 1 for Drive E:, and two 9GB drives in RAID 1 for a higher drive letter. Quick testing showed:

array 1
random access 8.5ms
AVG Read 39.6 MB/s
Burst 68.1 MB/s

array 2
random access 7.8ms
AVG Read 47.7 MB/s
Burst 63.9 MB/s

Too slow, so I rebooted and tried again.

Next I tried telling the RAID controller to make a RAID 10 using the 4 biggest drives. It automatically created the pairs and gave me something like a 25GB volume (I don't remember the exact size, but it wasn't 2x9 like I expected, and it wasn't 73+9 either). I'm assuming it paired a 73 and the 18, then paired a 73 and a 9, then striped the two pairs.

I tested that config, then rebooted and tried creating a RAID 10 using all 6 drives. It automatically gave me a 30+ GB volume, but the performance was identical to the 4-disk version:

Array 1
Random access 7.8ms
AVG Read 42.3 MB/s
Burst 60.4 MB/s

So the good news is you can do RAID 10 with a number of drives that isn't a power of two (4, 6, 8, and 10 drives are all valid counts for a RAID 10 array).
The RAID controller will do the thinking for you on making a RAID 10 if it knows how. If it doesn't, you can do this instead:

1. Go into the RAID controller and make all your RAID 1 arrays
2. Install your OS
3. Convert all your disks to dynamic disks
4. Create the stripe (RAID 0) in the OS across however many drives you see in Windows.
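In diskpart terms (run from a command prompt inside Windows), steps 3 and 4 would look roughly like the sketch below. The disk numbers are only placeholders for whatever your hardware RAID 1 arrays show up as, so check them with "list disk" first, and only run this against empty data disks.

rem sketch only: disk numbers are examples, verify with "list disk" first
list disk
select disk 1
convert dynamic
select disk 2
convert dynamic
rem stripe the two (already mirrored) arrays into one RAID 0 volume
create volume stripe disk=1,2
assign letter=D
rem format the new volume from Disk Management afterwards

The stripe created this way is an ordinary dynamic-disk volume, so Windows sees one big drive letter while the controller keeps doing the mirroring underneath.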

The downside is I don't think you can change the size of the array without wiping the drives and starting over, but I'm not sure. It'd be worth testing the issue before you put data on the drives. I suppose I could test it for you, considering I have a box with 6 drives.

Oh, and for the record, I figured out that the 9GB drives were dog slow, so my final configuration was:

Volume 1: RAID 1 using the two 73GB drives but only using the first 20GB
Volume 2: 4.2 GB volume on the 18GB drive
Volume 3: RAID 0 using the three 9GB drives to make a 4.2 GB volume.

When I got into Windows I did this:

C: = Volume 1 (using the 73GB drives)
E: = RAID 1 by way of mirroring Volume 2 and Volume 3.

So E: in this case isn't a traditional RAID 0+1; it's a Frankenstein RAID made up of a single drive plus a 3-drive RAID 0. It's exactly the kind of configuration you wouldn't want in normal usage, but it shows the flexibility of RAID, and this is the result:

array 1 (RAID 1)
random access 6.9ms
AVG Read 52.9 MB/s
Burst 69.1 MB/s

array 2+3 (RAID 1 containing (Drive + RAID0))
random access 5.8ms
AVG Read 53.2 MB/s
Burst 67.4 MB/s

It's not pretty, but it's very, very old equipment. Any drive you buy nowadays would blow it out of the water.
RAID-10 would open up a few possibilities, though, since I could use a 780G-based board instead if the answers to the above questions are satisfactory (it's a file server, so I'd be fine with just one of the new low-power dual-core Athlons, which would probably cost me about the same as the Intel build and have lower power consumption).

So, more input on RAID-10 is appreciated. I think I'd be fine if I could simply have as many RAID-1 arrays as there are pairs of SATA ports on the controllers (meaning I'd leave one pair available for the boot RAID-1, or possibly an e-SATA drive). That's probably the best idea, too :P

Apart from that, performance on the system drive is not all that important. I just don't want the sluggishness I feel when KAV is running on my laptop (the "concede resources" option is a joke, I guess... lol). If the array can grow just by adding drives, I think I'll also be fine. Probably the best way to do this is to use Matrix Storage Manager to create RAID-1 arrays and then add them to a RAID-0 array (I'm not sure it's possible to go from a simple dynamic disk to a RAID-0 array on W2K3/W2K8, though I think it's possible to expand a RAID-0 array).
You can most definitely use Windows to do RAID 1 or RAID 0 on top of your "hardware RAID", and you can do so in quite a few ways. If you think 50% space loss is harsh, try making two RAID 1 arrays in the RAID controller and then turning those two into a single RAID 1 array in Windows. 75% space loss at your service, but you could literally lose 3 of the 4 drives and not lose data.
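For what it's worth, the diskpart version of that quad-mirror idea would be something along these lines (again only a sketch, assuming the two hardware RAID 1 arrays show up in Windows as disks 1 and 2; check with "list disk" first, and both disks have to be converted to dynamic before the mirror can be created):

rem sketch: mirror the two hardware RAID 1 arrays against each other
select disk 1
convert dynamic
select disk 2
convert dynamic
create volume mirror disk=1,2
assign letter=E

Four drives, one drive's worth of usable space, but as noted above you could lose three of the four and still have the data.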

http://en.wikipedia.org/wiki/Nested_RAID_levels will give you some pics to help visualize it.

Just realize that you need to test before you trust irreplaceable data to your arrays. You still need to make backups and assume that any array may go down at some point during the year(s) that you plan to use it. Don't just make the array(s) and fill them with data assuming it will stay there forever.
Now for the disk suggestions you made:

1) The only 320GB/platter drives available right now here in Portugal are the 750GB and 1TB Samsung F1s, so the 6400AAKS is a no go, unfortunately. I'd like to see how much one of those would cost, though... I might be tempted... :roll:

2) The 4000AAKB is also not available, apparently... Just great, every good drive is a no go... :evil:
Actually, the WD4000AAKB was just a random drive I picked off the WD website; it isn't anything special. It's a 7200 RPM, 16MB cache PATA drive, which puts it in the mainstream. The Caviar SE16 WD3200AAKB listed on Newegg or other sites would be just as fine; it's simply a closeout model no longer listed on the WD marketing pages. There are probably 20 other drive models that would fit that rough price/performance category if you just have to have an IDE boot drive.
2) RAID-10 is not available in Windows 2003, so I guess in a worst-case scenario I'd have to go AHCI-only on the motherboard controller (which would open up a few more options) and do multi-array software RAID in Windows 2K3 (not all that bad, I suppose, but I would lose some of the performance RAID-10 brings...)
Windows won't do RAID 10 on its own, but a combination of hardware RAID + software RAID can.
Also, as a last note, I'll mostly need sequential reads out of the storage arrays, with occasional sequential writes, and very few random writes and/or reads.

Any additional help will be welcome.

Cheers.

Miguel
Luckily your projected usage patterns are favorable; most any RAID array will handle that type of use. In fact, 250GB-per-platter drives like the Green Power series might be ideal if the price is right. I'd love to use nothing but the latest tech, but let's be realistic: price is a factor for almost everyone.

dhanson865
Posts: 2198
Joined: Thu Feb 10, 2005 11:20 am
Location: TN, USA

Post by dhanson865 » Thu May 01, 2008 6:14 am

Oh, and for the record, I'm all for you using the 780G motherboard and GreenPower drives for the array.

I like AMD.

I doubt performance will be a concern if you have more than a couple of drives involved.

I know I've given you information overload. I hope you find most of it helpful.

__Miguel_
Posts: 140
Joined: Tue Apr 29, 2008 7:54 am
Location: Braga, Portugal

Post by __Miguel_ » Thu May 01, 2008 7:11 am

Well, for honesty's sake, I must say that I've been around for a while, and nested RAID levels are not really new to me (at least in theory, as with so much other computer-related stuff... lol). That's why many of the things you say aren't as much of an information overload as they could have been (thankfully!). They do still give me plenty to think about, though.

I know about the problems with RAID 0. Cool to think about, but unless you have a serious reason to use it, you're better off without it.

RAID 5 reliability, on the other hand, is news to me. I wouldn't be building insanely large arrays (10-disk arrays on consumer hardware are just asking for it, and besides, Windows doesn't like partitions above 2TB anyway...), and rebuild time would be more or less the time needed to write one full disk, plus parity calculations (or so I read in those links), meaning with 2TB (tops) arrays I shouldn't be too badly off, right? Nevertheless, it's still ugly, and RAID 1 does sound a lot faster to rebuild (a straight copy only...).

RAID 6 is WAAAAY out of my league, unfortunately. 4-port controllers are way too expensive, and 8-port ones would get me an eviction notice from my parents... lol

As for RAID 10 and 0+1, yes, that was exactly what I was referring to. Sometimes it's not clear which of the two a controller actually does... That's what I meant by "implementation".

From the Intel Storage Manager documentation, I found out that, at least apparently, only one RAID 1 array can be present in the system, unless you go RAID 10, which btw is limited to a 4-drive configuration and seems to be "real" RAID 10, not 0+1.

So that leaves me with three options:

1) Fork out another €120 for an additional 750GB GP (besides the system drive), go RAID 10 with the four, and use the two remaining SATA ports for a software RAID 1, or for the system drive plus an eSATA port (converter needed, and not a bad idea for the server, actually);

2) Fork out the same additional €120 for the 750GB GP, lose the system drive and go "single-array, two-partition" (not sure it's a wise choice because of noise and/or performance...);

3) Lose a drive (and eventually move to the 780G platform), keep the system drive, maybe get a couple of 1TB drives instead of 750GB ones (probably only F1s, given the insane ~€180 a pop the 1TB GP goes for), go software RAID 1 for each pair of drives I buy (easier migration in the future, at least...), and go multi-array (which can be nice, since I can power down unused arrays if need be).

As for availability, here in Portugal I can get any 750GB+ drive (F1 or GP), as well as last-generation sub-750GB drives. No 320GB/platter options available on the lower end yet :(.

Prices:
- 1TB GP - €180~€200 each
- 750GB GP - ~€120 each
- 1TB F1 - €140~€230 each (I know, crazy...)
- 750GB F1 (753LJ) - €95~€110 each
- 320GB Samsung (321KJ) - <€70
- 160GB WD 3.5" (1600AAJS) - <€55
- 160GB WD 2.5" - <€60

As for prices on the CPU+mobo combo, Intel is actually winning on price here - ~€85 + ~€40 (DS2R + E1200) versus ~€81 + ~€60 (MA78GM + 45W X2) - though probably not on power consumption...

With this in mind, how should I go about it? I've been really stumped since the whole "RAID 5 hammer" thing... :S

Cheers.

Miguel

dhanson865
Posts: 2198
Joined: Thu Feb 10, 2005 11:20 am
Location: TN, USA

Post by dhanson865 » Sun May 04, 2008 1:59 pm

I'm not against RAID 5 for all uses; I just highly recommend against it, especially Windows software-based RAID 5.

Considering your pricing, I see no reason to pay 50% more for a 1TB Green Power drive than you would for a 750GB Green Power. And considering the power draw is so much lower for a Green Power than for the Samsung 750GB F1, I think it should be obvious that the Western Digital drive is the better choice for a file server that won't be performance-bound.

780G or Intel, you'll have a choice on the first 4 drives between two RAID 1 arrays or one RAID 10 array. Either way, you'll only need about 50GB of the first array for the C: drive.

I'm not sure why you want 4GB of RAM on a file server. Are you running apps on the server or is it literally just a share point? What OS are we talking about here: Windows XP, Vista, Linux?

Assuming little or no app usage and 4GB of RAM, you could disable the swap file/virtual memory. If you want it on, you can create a small partition on one of the other RAID 1 arrays for the swap file.

If your swap file exists and is put on one of the first 4 drives, I'm not sure how often, if at all, those drives would spin down.

If you get past 4 drives, you'll want a hardware-based RAID controller, and at that point you can leave the first 4 drives on the motherboard RAID as RAID 1 or 10 and add a serious RAID 5.

Keep in mind the cheaper RC5210-08 won't do RAID 5. You have to go up to the RC5252-04 or RC5252-08 for that. At that point (with a hardware RAID solution, not a motherboard hardware/software combination or Windows-only software RAID 5) I have fewer negative points to harp on about RAID 5. I would still rather set up two 4-disk arrays than one 8-disk array if I had space for the extra disks. If you don't have space for 12 disks, you could create a RAID 5, transfer data from the RAID 1 or RAID 10 array, then remove two or four drives from the motherboard RAID and move them to the HW RAID (most likely reinstalling the OS and apps at that point). If so, you could end up with a 4-disk RAID 10 with multiple volumes and a 4-disk RAID 5 for extra storage space (most likely with larger drives as prices per GB come down). Whether you move away from booting from the integrated RAID comes down to your choice of HW RAID controller.

Given all that, you might as well go with a 780G of some sort and add the hardware RAID controller down the road, when you're ready to buy it along with no fewer than 3 additional hard drives at the same time.

There are just way too many choices to be made for me to give you a set-in-stone roadmap. You're going to have to figure this out a few pieces at a time and adjust accordingly.

__Miguel_
Posts: 140
Joined: Tue Apr 29, 2008 7:54 am
Location: Braga, Portugal

Post by __Miguel_ » Mon May 05, 2008 1:41 pm

Ok, first of all, sorry for the delay. VERY busy weekend (lawyers seldom have free time, and I'm learning that the hard way... :?).
dhanson865 wrote:Considering your pricing, I see no reason to pay 50% more for a 1TB Green Power drive than you would for a 750GB Green Power. And considering the power draw is so much lower for a Green Power than for the Samsung 750GB F1, I think it should be obvious that the Western Digital drive is the better choice for a file server that won't be performance-bound.
Yes, you're right about the 750GB GP, definitely a better option than the 1TB GP drives (unless I can get them with the same price gap as the 750GB->1TB Samsung F1s, of course...). However, those 1TB F1s are SOOO tempting... I mean, that is REALLY cheap for 1TB of storage.
dhanson865 wrote:780G or Intel, you'll have a choice on the first 4 drives between two RAID 1 arrays or one RAID 10 array.
Well, as far as I've read, it honestly seems the ICHxR southbridges support only one RAID 1 array or one RAID 10 array... something about drive backups, or something... I find it extremely weird, though, since RAID 10 assumes the existence of two RAID 1 arrays... :roll: Oh well, it's Intel, I don't even know why I still try to make sense of it... Also, RAID 10 doesn't allow a 6-drive configuration (that's reserved for RAID 5 configurations on ICH8R/9R/10R). I think at least RAID 1 to RAID 10 migration is possible, so I shouldn't be too badly off on that account...

Also, I don't know how the SB700 behaves in this regard... I've checked the 78GM (Gigabyte's mATX 780G mobo) manual, but I've only managed to conclude that there are two sets of SATA ports: four of them are just like the SB600 ports, and the extra two (in the case of Giga's board, one SATA and the eSATA port) are independent. The two sets can be configured as RAID/AHCI/IDE independently, so I don't really know what to expect from this southbridge... If I can migrate RAID 1 to RAID 10, or get two or three sets of RAID 1, I'll be fine with this board, too.
dhanson865 wrote:I'm not sure why you want 4GB of RAM on a file server. Are you running apps on the server or is it literally just a share point? What OS are we talking about here: Windows XP, Vista, Linux?
It will definitely run Windows... I find Linux WAAAAAY too complicated (a common occurrence among long-time, in-depth Windows users, it seems... hehe). As for the 4GB, it's mostly because of the extra stuff I want to get running, like WSUS and uTorrent (I have to keep it busy, right? :P)

The swap file has never been a problem for me. If I go any larger than 4GB, I'll drop it altogether. Until then, and since I've never experienced severe drawbacks from having the swap file on the system drive, I'll leave it there.

That leads me to the reason why I'll most likely keep the system drive apart from the data drives. If I go RAID 10, keeping the system partition on the RAID array will keep the drives spinning constantly, which is not a very good thing in my book... the extra power and noise are never good... And if I can manage two RAID 1 arrays, I'll lose the ability to go RAID 10 because of just that. So I'll probably go with a single system drive, even if that means one less pair of data drives: my ears and my parents' wallet will probably appreciate it... hehe
dhanson865 wrote:If you get past 4 drives, you'll want a hardware-based RAID controller, and at that point you can leave the first 4 drives on the motherboard RAID as RAID 1 or 10 and add a serious RAID 5.
Hmmm, more than 4 data drives seems a little too much right now. But if/when I get there, it will probably still be safer (and cheaper - and yes, I'm that cheap... lol) to stick with RAID 10, in groups of 4 drives. That seems much easier to maintain and manage.
dhanson865 wrote:Keep in mind the cheaper RC5210-08 won't do RAID 5. You have to go up to the RC5252-04 or RC5252-08 for that. At that point (with a hardware RAID solution, not a motherboard hardware/software combination or Windows-only software RAID 5) I have fewer negative points to harp on about RAID 5.
Yes, I know about the "lesser" RAIDCore card. However, do keep in mind that even if RAIDCore cards are very good controllers (the 4000 series kicked a lot of more expensive cards' a**es), they are still FakeRAID controllers... True, they use system resources much better (especially in RAID 5 scenarios), but they're still software-based... That's why you don't see RAID 6 (which actually seems good at offsetting RAID 5's disadvantages) on them; the performance would simply be abysmal (except maybe on a quad-core - or octa-core - system...). It still seems a good option as an extra RAID 10 controller. Shame the 780G mobo will kill that for me, since there's only a 1x PCI-E slot available (and I don't really know if non-PEG cards are accepted in the 16x slot on that motherboard...). The G33M was specifically chosen because of that (and the RAID controller, of course).

So, adding all that up, I still need to know how the ICH9R and SB700 behave regarding RAID 1 and RAID 10 arrays (multiple arrays and migration abilities), or else I'll have to go multi-array software RAID 1 (I don't think I can fork out the money for four 750GB drives right now to go straight to RAID 10). Apart from that, I only need to know whether the 780G accepts non-PEG cards. If it does, and depending on the migration abilities of the southbridges, I might well go 4+1 + 780G, or 2+1 + 780G... Seems a good choice, too :D Do you agree?

Oh, wait... But that way I STILL need to know what drive to get as the system drive... LOL Do you know anything in line with the price and (absence of) noise of the 321KJ (I don't really know the 1600AAJS...)?

Cheers.

Miguel



P.S.: It's curious how I've changed my mind on the RAID configuration... Not what I had in mind at first, but it probably won't be a bad idea.

Cryoburner
Posts: 160
Joined: Sat Dec 01, 2007 4:25 am

Post by Cryoburner » Mon May 05, 2008 6:37 pm

__Miguel_ wrote: - RAM: I'm starting with 2x2GB, later I'll probably try 4x2GB (depending on server load, I'll try reusing it for more than just file server duties);
What version of Windows are you planning on using? If you intend to use a 32 bit OS, be aware that 4GB will be the maximum amount of RAM that you'll be able to use. The amount of memory your system can access will actually be less than this, since the top of that 4GB address space will be reserved for your video card and other components. You might only be able to access around 3GB of installed memory. There's a pretty good explanation of this here, and a Microsoft help document about it here. You'll need to use a 64 bit version of Windows or Linux to access a full 4GB or more. You probably don't actually need that much memory anyway, unless you're going to be editing video or working with huge files in Photoshop or something.

__Miguel_
Posts: 140
Joined: Tue Apr 29, 2008 7:54 am
Location: Braga, Portugal

Post by __Miguel_ » Tue May 06, 2008 2:37 am

Cryoburner wrote:
__Miguel_ wrote: - RAM: I'm starting with 2x2GB, later I'll probably try 4x2GB (depending on server load, I'll try reusing it for more than just file server duties);
What version of Windows are you planning on using? If you intend to use a 32 bit OS, be aware that 4GB will be the maximum amount of RAM that you'll be able to use. The amount of memory your system can access will actually be less than this, since the top of that 4GB address space will be reserved for your video card and other components. You might only be able to access around 3GB of installed memory. There's a pretty good explanation of this here, and a Microsoft help document about it here. You'll need to use a 64 bit version of Windows or Linux to access a full 4GB or more. You probably don't actually need that much memory anyway, unless you're going to be editing video or working with huge files in Photoshop or something.
Thank you for the reminder about the 32-bit address space limitation. I'm going 64-bit, no doubt there (besides, XP x64 is even based on 2K3 x64, which is always a good thing when it comes to memory management and stability... hehehe).

As for the need for 4GB of RAM, it really depends on server usage patterns... I currently have a router/storage/WSUS server at another location which goes as high as 900MB of memory usage just sitting there idling - WSUS (well, more like the database server alongside it, and the IIS worker processes...) is an INSANE memory hog. uTorrent also tends to be a memory hog - that is, if you don't want your HDD constantly thrashing around reading and writing...

I do have to agree that 8GB seems a bit of an overkill, but 4GB, considering the experience I have with my other server, does not.

Cheers.

Miguel

dhanson865
Posts: 2198
Joined: Thu Feb 10, 2005 11:20 am
Location: TN, USA

Post by dhanson865 » Tue May 06, 2008 6:22 am

You mean they charge that much for a FakeRAID add-in card? Wow, I just glossed over it being host-based RAID.

__Miguel_
Posts: 140
Joined: Tue Apr 29, 2008 7:54 am
Location: Braga, Portugal

Post by __Miguel_ » Tue May 06, 2008 8:17 am

dhanson865 wrote:You mean they charge that much for a FakeRAID add-in card? Wow, I just glossed over it being host-based RAID.
Well, after conversion to €, and for the 4-port PCI-E 4x card, I can't find anything really better on the price front...

Also, the RAIDCore cards are SAS-ready, which other FakeRAID cards aren't, and that increases cost - as does the PCI-E interface.

The strong point of the RAIDCore cards really is the better (or at least less limited) XOR offloading to the host CPU (the only thing missing is cached writes, really... the performance is much like the "bigger" cards). Of course, CPU usage is high, though it's still manageable (25% on an Opti for the 4000 series; I assume the 5000 series is about the same).

Cheers.

Miguel

KnightRT
Posts: 100
Joined: Sun Nov 21, 2004 11:13 pm

Post by KnightRT » Fri May 09, 2008 2:08 pm

Start with a proper RAID card. Forget the motherboard RAID controller. The only thing I'd use it for is RAID-1 or RAID-0 by itself.

If you want to do RAIDCore on the cheap, pick up a BC4852 from EBay. They go for $90-$180. The only salient difference between that card and the $500 BC5252-08 is that the latter uses PCIe. You'd have to find a decent PCI-X motherboard for the BC4852, or be satisfied with a maximum array throughput of 75 MB/s.

There's nothing scary about RAID-5, provided you recognize that the purpose of RAID is to provide constant data availability, not a backup. Yes, there's a risk a drive could die during a rebuild. That's why you limit the array size to 8 drives or less and keep disconnected backups of the important data.

I'd be far less inclined to deal with any sort of RAID 0+1 configuration. You're only going to get 30-70 MB/s of bandwidth over a gigabit network connection; RAID-0 just adds another layer of complexity with no practical speed benefit. I'd far sooner span two RAID-1 arrays than stripe them. If one set of RAID-1 drives dies in an array that spans two RAID-1 sets, you'd still retain half your data. If the two RAID-1 arrays were striped, you'd lose everything.

There are ways around the 2 TB limit. Within 32-bit Windows, you need only convert to dynamic disks. For 64-bit Windows, you can convert to GPT, which has no size limit.
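For the GPT route, the conversion itself is only a few lines in diskpart; here's a sketch, assuming the array shows up as one big, empty data disk (the disk number is just an example, and clean wipes whatever is on the selected disk, so only use it on a new or empty array):

rem sketch: initialize an empty data disk as GPT so volumes over 2TB work
list disk
select disk 2
clean
convert gpt
create partition primary

The system/boot disk is a different story (booting Windows from GPT needs EFI firmware), but for a pure data array this is all there is to it.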

Finally:
Yes, I know about the "lesser" RAIDCore card. However, do keep in mind that even if RAIDCore cards are very good controllers (the 4000 series kicked a lot of more expensive cards' a**es), they are still FakeRAID controllers... True, they use system resources much better (especially in RAID 5 scenarios), but they're still software-based... That's why you don't see RAID 6 (which actually seems good at offsetting RAID 5's disadvantages) on them; the performance would simply be abysmal (except maybe on a quad-core - or octa-core - system...).

The strong point of the RAIDCore cards really is the better (or at least less limited) XOR offloading to the host CPU (the only thing missing is cached writes, really... the performance is much like the "bigger" cards). Of course, CPU usage is high, though it's still manageable (25% on an Opti for the 4000 series; I assume the 5000 series is about the same).
Have you even used one of these cards? The BC4852 supports cached writes; it'll use 4 GB of system RAM for write cache in my 5 GB system for sustained 500MB/s+ transfers. CPU use is 20% or less on an X2 4000.

"Fake RAID" was specifically in reference to the exact sort of motherboard controller you propose using. The advantage of the RAIDCore architecture is that it's a coupled to extremely flexible software controls. The documentation is the best I've seen, as is Ciprico's support. If you think a $150 780G is equivalent to a $500 dedicated card, you're welcome to waste your time with it.

__Miguel_
Posts: 140
Joined: Tue Apr 29, 2008 7:54 am
Location: Braga, Portugal

Post by __Miguel_ » Sat May 10, 2008 2:18 am

KnightRT wrote:Start with a proper RAID card. Forget the motherboard RAID controller. The only thing I'd use it for is RAID-1 or RAID-0 by itself.
As much as I'd like to do that, I simply can't. "Proper" RAID controllers are insanely expensive, and I just don't have the money for it...
KnightRT wrote:If you want to do RAIDCore on the cheap, pick up a BC4852 from EBay. They go for $90-$180. The only salient difference between that card and the $500 BC5252-08 is that the latter uses PCIe. You'd have to find a decent PCI-X motherboard for the BC4852, or be satisfied with a maximum array throughput of 75 MB/s.
That's exactly why I'd like the BC5252-04 (the -08 is major overkill for me; there's no way I'll have to handle more than 8 drives, and most motherboards have at least four ports for those non-critical, non-speed-dependent files...). It's relatively cheap (around $200, including the VST Pro software), and it seems to offer the same features as its bigger 8-port brother. Also, if I go with the DS2R, I'll have the perfect PCI-E 4x port for it, so no bandwidth limitations there...
KnightRT wrote:I'd be far less inclined to deal with any sort of RAID 0+1 configuration. You're only going to get 30-70 MB/s of bandwidth over a gigabit network connection; RAID-0 just adds another layer of complexity with no practical speed benefit. I'd far sooner span two RAID-1 arrays than stripe them. If one set of RAID-1 drives dies in an array that spans two RAID-1 sets, you'd still retain half your data. If the two RAID-1 arrays were striped, you'd lose everything.
Another thing to think about. I'm actually still deciding between RAID 10 and multiple RAID 1 arrays, especially because multiple RAID 1 arrays would let me power down drive pairs to save power - something not so easily done with a more complex array...
KnightRT wrote:There are ways around the 2 TB limit. Within 32-bit Windows, you need only convert to dynamic disks. For 64-bit Windows, you can convert to GPT, which has no size limit.
Good to know. Not sure if I'll need it anytime soon, but it's still good to know :D
KnightRT wrote:Have you even used one of these cards? The BC4852 supports cached writes; it'll use 4 GB of system RAM for write cache in my 5 GB system for sustained 500MB/s+ transfers. CPU use is 20% or less on an X2 4000.

"Fake RAID" was specifically in reference to the exact sort of motherboard controller you propose using. The advantage of the RAIDCore architecture is that it's a coupled to extremely flexible software controls.
While I haven't ever used a RAIDCore card, I really don't see where you're getting at. True, Fake RAID was especially coined to fit the kind of RAID controllers embedded on most of today's motherboards. But that term is also used for ANY controller without a dedicated parity engine on-board, and since RAIDCore cards, AFAIK (please do correct me if I'm wrong on this one), don't have on-board dedicated parity engines, the term still applies to it...

Either way, I never said the cards were lousy, on the contrary ("the 4000 series kicked a lot of more expensive cards' a**es"), and that's why I find them so appealing. It's software RAID done the right way, using all the system resourses to the max while still keeping the costs down because the lack of the on-board parity engine. You telling me it can actually use the system memory to support cached writes, something I didn't knew about and had never read about being available with RAIDCore cards (an UPS becomes an instant must for that system... wouldn't want to get caught in a power shortage with 4GB of important data on the RAM... lol) makes them even more of a great product. The 5000 series also supports cached writes, right?

I did, however, say probably RAID6 is rather harsh on today's CPUs... AFAIK, RAID6 parity calculations are extremely taxing on any CPU, even dedicated ones, so it rather is expectable for it not to be available on generic CPUs, which would probably be swamped with parity calculations. Unless, of course, you go the "quick hack" way... Just store two copies of the same RAID5 parity (which you wouldn't need re-calculating) on two different drives, and be done with it... hehehe
KnightRT wrote:If you think a $150 780G is equivalent to a $500 dedicated card, you're welcome to waste your time with it.
Again, I never said that. Simple logic dictates as much, and the reviews I've read would make sure I didn't miss that one.

It all comes down to how much performance I'm able to buy. Right now, not much more than integrated RAID - which instantly excludes RAID 5, unless I want random writes of ~5MB/s tops, which is not a pretty sight. I know I won't have stellar performance, but I don't really need more than two or three concurrent data streams, and those are only for video and audio duties, not some monstrous video rendering/production machine.

I'll try to leverage something better than integrated (either now or down the road), but there IS a limit to how much my budget can stretch... That's why I'm making so many compromises (believe me, if there's one place I don't like to be cheap, it's PC hardware...).

Anyway, thank you for the input.

Cheers.

Miguel

KnightRT
Posts: 100
Joined: Sun Nov 21, 2004 11:13 pm

Post by KnightRT » Sun May 11, 2008 11:49 am

While I've never used a RAIDCore card, I really don't see what you're getting at. True, "Fake RAID" was specifically coined for the kind of RAID controllers embedded on most of today's motherboards. But the term is also used for ANY controller without a dedicated on-board parity engine, and since RAIDCore cards, AFAIK (please do correct me if I'm wrong on this one), don't have dedicated on-board parity engines, the term still applies to them...
"Fake RAID" is as much a performance classification as it is a description of how the device works. The phrase is specifically associated with slow and unreliable motherboard controllers that add RAID functionality through a heavily bottlenecked driver addition. This is not the RAIDCore approach. Both depend on the main CPU for XOR calculations, but that's about the only common factor. Phrased otherwise, RAIDCore may be software-based, but it surely isn't "fake."
I did, however, say that RAID 6 is probably rather harsh on today's CPUs... AFAIK, RAID 6 parity calculations are extremely taxing on any CPU, even dedicated ones, so it's rather expectable that it isn't offered on generic CPUs, which would probably be swamped by the parity calculations.
I doubt it.

http://tweakers.net/reviews/557/27/comp ... na-27.html

The RAID-6 Areca cards have been around for years. The difference in write performance between that and RAID-5 is none too dramatic. I can only speculate why that functionality isn't already part of RAIDCore.

__Miguel_
Posts: 140
Joined: Tue Apr 29, 2008 7:54 am
Location: Braga, Portugal

Post by __Miguel_ » Sun May 11, 2008 12:46 pm

KnightRT wrote:"Fake RAID" is as much a performance classification as it is a description of how the device works. The phrase is specifically associated with slow and unreliable motherboard controllers that add RAID functionality through a heavily bottlenecked driver addition. This is not the RAIDCore approach. Both depend on the main CPU for XOR calculations, but that's about the only common factor. Phrased otherwise, RAIDCore may be software-based, but it surely isn't "fake."
OK, that settles the definitions and terminology. I was using a broader definition of "Fake RAID", which is the one I've always seen used for any kind of software-based RAID controller (including not only the southbridge-based ones, but also the ones on low-end add-on cards).

But you're right, RAIDCore is nothing like any other software-based RAID controller - to be honest, it's only been a short while since I first heard about them (back when VST 2008 was news on Tom's Hardware/AnandTech).
KnightRT wrote:I doubt it.

http://tweakers.net/reviews/557/27/comp ... na-27.html

The RAID-6 Areca cards have been around for years. The difference in write performance between that and RAID-5 is none too dramatic. I can only speculate why that functionality isn't already part of RAIDCore.
Well, while those results can be partially explained by the "loss" of an additional drive to parity (compared to RAID 5), some of the performance difference is still attributable to the increased parity complexity. RAID 6 NEEDS two parity calculations instead of one, and at least one more write cycle to complete, compared to RAID 5.

You can have dedicated processors optimized to calculate that parity (something I believe has been happening since RAID 6 was first implemented in hardware, and is known to have happened with GPUs, for one - remember some Radeon cards performing better with 32-bit color than with 16-bit?) to the point that the performance difference comes only from the extra write overhead. That's not so easy with generic CPUs, which is probably why it hasn't appeared in the RAIDCore drivers/software... If RAID 5 takes 20% CPU time on a dual-core machine and RAID 6 takes 30%, some people might think, "Will clients be happy knowing at least 30% of the machine's computing power is used just for the storage controller?" It might be wiser not to go there just now and wait for faster CPUs, or more cores...
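For reference, the standard P+Q scheme behind most RAID 6 implementations (a rough sketch of the textbook math, not necessarily what any particular vendor ships) is:

P = D0 xor D1 xor ... xor D(n-1)
Q = (g^0 * D0) xor (g^1 * D1) xor ... xor (g^(n-1) * D(n-1)), with the multiplications done in GF(2^8)

P is the same cheap XOR as RAID 5, but Q needs a Galois-field multiply per data block on every write, and recovering from a double failure means solving both equations, which is where the extra CPU cost comes in.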

That being said, I hope we'll see RAID 6 in RAIDCore controllers sooner rather than later, even if it comes with a warning stating "RAID 6 requires a quad-core or 3GHz+ dual-core machine for good performance".

Oh, one more thing... The new 5000 series (with the VST software) can bridge with the ICH8/9R controllers, and VST can also work on its own with those southbridges... Do you think the VST software will get more speed out of the standard southbridge? It would make sense, since it's software RAID (change the driver and, bam, better functionality and even speed); also, I don't think it would be very wise to sell an enterprise-level controller card that actually loses performance when coupled with standard hardware present in many enterprise-level motherboards (VST is also compatible with the server versions of the ICH8/9R). Any data on that?

Now for the more important stuff: how can I get a cheap PCI-E RAIDCore card? :P The more I talk (and read) about them, the more I want one... But the nearest reseller (Spain) doesn't deal in RAIDCore cards, only the other Ciprico products... :(

Again, thank you for the help you have been giving me. I'm trying to start getting quotes for the hardware. I'd still like to know your thoughts on a good and quiet system drive, if I go that way.

Cheers.

Miguel

KnightRT
Posts: 100
Joined: Sun Nov 21, 2004 11:13 pm

Post by KnightRT » Sun May 11, 2008 8:16 pm

__Miguel_ wrote: Oh, one more thing... The new 5000 series (with the VST software) can bridge with the ICH8/9R controllers, and VST can also work on its own with those southbridges... Do you think the VST software will get more speed out of the standard southbridge? ... Any data on that?
I wasn't aware of VST until you pointed it out. Ciprico's documentation gives the impression that there is no performance loss (as one would expect, given that the 4000 series were just SATA controllers), provided the motherboard SATA link is high-bandwidth.

It looks like an efficient way to save money. You'd have to be careful with your motherboard choice; 6-port ICHXR boards are easily obtainable, but 8-port boards and higher often use a different controller for the remaining ports that's incompatible with the VST software. Even so, 6 drives in RAID-5 is still an excellent array, and compatible motherboards are as little as $60.

The $200 5252-04 and VST package here is also competitive:

http://ciprico.biz/catalog.asp?PCA=442

I'd buy it now if I'd had the foresight to choose an appropriate motherboard. I have no idea what the relative prices are in Spain.

As to system drives, there are only three sources of noise:

1) Idle
2) Seek
3) Vibration that causes case resonance

Some drives that are excellent at the first two (Samsung HD501LJ 500 GB) aren't so hot for the last unless suspended, or at least soft-mounted. Drive suspension solves a lot of ills. I've been impressed with the aforementioned Samsungs, WD's GreenPower series, and my 74 GB Raptor. Not so impressed with the WD5000AAKS. A proper case (Antec Solo or equivalent suspension-modified steel) helps.

__Miguel_
Posts: 140
Joined: Tue Apr 29, 2008 7:54 am
Location: Braga, Portugal

Post by __Miguel_ » Mon May 12, 2008 12:45 am

KnightRT wrote:I wasn't aware of VST until you pointed it out. Ciprico's documentation gives the impression that there is no performance loss (as one would expect, given that the 4000 series were just SATA controllers), provided the motherboard SATA link is high-bandwidth.
That does make sense... I'm only worried about one thing: it seems the motherboard needs to support "boot from RAIDCore"... I don't know what that means... Is it a reference to "Int 13" support? Also, do the drives installed on the standard ports on the motherboard (with VST installed) become bootable only if the motherboard can boot from the RAIDCore card? That's what bothers me most about it...
KnightRT wrote:It looks like an efficient way to save money. You'd have to be careful with your motherboard choice; 6-port ICHXR boards are easily obtainable, but 8-port boards and higher often use a different controller for the remaining ports that's incompatible with the VST software. Even so, 6 drives in RAID-5 is still an excellent array, and compatible motherboards are as little as $60.
Exactly. That's why my first choice was the GA-G33M-DS2R, which btw seems compatible with booting from the card. It also has an extra 4x PCI-E port, so I'd be able to upgrade to a 10-port or 14-port machine in the future...
KnightRT wrote:The $200 5252-04 and VST package here is also competitive:

http://ciprico.biz/catalog.asp?PCA=442

I'd buy it now if I'd had the foresight to choose an appropriate motherboard. I have no idea what the relative prices are in Spain.
Yes, I know that price is competitive. But I can't find it anywhere in Portugal (Spain has the nearest Ciprico representative, but I'm not from there) :cry: Which sucks, btw... Maybe I need to ask Ciprico directly, because that REALLY seems like a great (and dirt cheap) option.
KnightRT wrote:As to system drives, there are only three sources of noise:

1) Idle
2) Seek
3) Vibration that causes case resonance

Some drives that are excellent at the first two (Samsung HD501LJ 500 GB) aren't so hot for the last unless suspended, or at least soft-mounted. Drive suspension solves a lot of ills. I've been impressed with the aforementioned Samsungs, WD's GreenPower series, and my 74 GB Raptor. Not so impressed with the WD5000AAKS. A proper case (Antec Solo or equivalent suspension-modified steel) helps.
Well, going without suspension is not an option. I'm not risking vibration noise anywhere. After all, I'll need to sleep in the same room as the server hehehe

Hearing about the 501LJ is interesting, because I have a great relationship with its younger sibling, the 321KJ. Since I basically can't hear any noise from the 321s apart from vibration, and the 501 is practically the same thing, that seems a good choice. Cool! :D

Now another thing: how will 5400rpm and 7200rpm drives working in tandem (not on the same array, of course, but at the same time) influence noise and vibration (even when suspended)? Can weird resonance effects be expected?

Cheers.

Miguel

KnightRT
Posts: 100
Joined: Sun Nov 21, 2004 11:13 pm

Post by KnightRT » Mon May 12, 2008 1:23 pm

__Miguel_ wrote: how will 5400rpm and 7200rpm drives working in tandem (not on the same array, of course, but at the same time) influence noise and vibration (even when suspended)?
When suspended, it doesn't matter how fast the drives spin. When mounted... it still doesn't matter, if my GP and HD501LJ drives are any indication.

__Miguel_
Posts: 140
Joined: Tue Apr 29, 2008 7:54 am
Location: Braga, Portugal

Post by __Miguel_ » Mon May 12, 2008 1:49 pm

KnightRT wrote:When suspended, it doesn't matter how fast the drives spin. When mounted... it still doesn't matter, if my GP and HD501LJ drives are any indication.
That's great news, thanks. I was actually worried about that one.

Also, I sent an e-mail to Ciprico's sales address today. Since the nearest Ciprico reseller is in Spain, and RAIDCore controllers aren't available there, I didn't have any other choice. I sure hope they can get me a quote for a 4-port card + VST.

In the meantime, I'll keep getting quotes (and actually start buying stuff... the G33M-DS2R is being discontinued, and I'll have to wait a few months until something that good comes along again if I can't get this one...).

More input is always appreciated, though.

Cheers.

Miguel

KnightRT
Posts: 100
Joined: Sun Nov 21, 2004 11:13 pm

Post by KnightRT » Thu May 15, 2008 12:01 pm

I sent this message to Ciprico support:

I also recently discovered your VST 2008 software package. This is highly interesting, but I wonder: is there any performance difference between VST 2008 and the BC4852 running the native 3.3 drivers? And could I transfer an array from BC4852 to the motherboard controllers and VST without losing the data?

Their response:

Since VST 2008 and RAIDCore use host CPU cycles to process, there should be no major differences in performance. VST may help the performance. A lot has to do with system board and CPU types.

As for your question about transferring data over to VST, there should be no issues if you are at V3.3 code for the BC4852. An important factor that could cause some issues, though, would be if the arrays were created with a code version older than V3.X. If this system has been in production for some time, this may be the case. It is always recommended that when making changes to your storage you take a valid backup of important data, in case you run into unexpected issues.
