64x2 Home Server recommendations?



plympton
Posts: 229
Joined: Sun Mar 14, 2004 11:40 am

64x2 Home Server recommendations?

Post by plympton » Tue Aug 30, 2005 9:16 am

I'm starting to do research on replacing my 2-year-old Dual Athlon server - it uses 400-500 watts at idle. Ick.

I'd like to get an Athlon64x2 machine, but keep the power as low as possible (no fancy graphics card - it's going to run Linux in the basement).

I'll probably replace my five 73 GB SCSI RAID-5 drives with three 250 GB SATA-II drives in a RAID-5, too, hopefully reducing power.

Any motherboard recommendations that do 64x2 with C'n'Q?
Any good case recommendations for a server - I seem to enter this thing more often than I should. :-).

Any other advice? I was thinking of possibly using one of those CoolerMaster 120mm Water Cooling things, since they're under $100 these days, but I suppose for a basement server, just a honkin' huge case with lots of airflow would work just as well...

(Oh, and I'm going to be a student again, so cost is a concern, of course. :-))

Thanks!

elg2001
Posts: 24
Joined: Fri Jul 22, 2005 4:54 pm

Post by elg2001 » Tue Aug 30, 2005 9:41 am

My guess would be an Asus board, because it supports both CnQ and Q-Fan to keep noise down (if it matters).

As for a case, I'll be building a server with 12 hard drives, so this was by far the best case I found:

http://www.newegg.com/Product/Product.a ... 6811123079

Newegg sells the exact same case, rebranded by other manufacturers, in the server case section for over $600. All hard drive bays are cooled by 120mm fans (one fan per 4 drives).

How about keeping the hard drives you have now and using Linux power management software to spin down the drives when they're not being used? With 12 drives I'll have to do that. I'll be using software RAID-5 because it's the fastest and one of the most well-tested solutions. Linux software RAID-5 is faster than hardware RAID-5 in every single review I've ever seen, assuming your CPU has a few cycles to spare. Calculating XOR is not hard for modern processors.
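
To illustrate the XOR point, here's a rough Python sketch (toy block sizes, illustration only, nothing to do with md's actual on-disk layout) of what RAID-5 parity amounts to: a byte-wise XOR across the data blocks, which any modern CPU can do without breaking a sweat.

```python
# Toy RAID-5 parity math (illustration only, not how md actually works):
# the parity block is just the byte-wise XOR of the data blocks.
def xor_parity(blocks):
    """Return the parity block for equal-sized data blocks."""
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            parity[i] ^= byte
    return bytes(parity)

# One stripe's worth of data from three "drives" (tiny 8-byte blocks here).
d1 = bytes([1, 2, 3, 4, 5, 6, 7, 8])
d2 = bytes([9, 9, 9, 9, 0, 0, 0, 0])
d3 = bytes([7, 7, 7, 7, 7, 7, 7, 7])
p = xor_parity([d1, d2, d3])

# Lose any one block, and XORing the parity with the survivors gets it back.
assert xor_parity([p, d2, d3]) == d1
print("parity:", p.hex())
```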

plympton
Posts: 229
Joined: Sun Mar 14, 2004 11:40 am

Did some more hunting...

Post by plympton » Tue Aug 30, 2005 10:14 am

The motherboard is a tough one - getting modern stuff, but not having to pay the "Gamer Doode" tax - I'd be really happy with Rage XL graphics!

But it looks like you can't get SATA-II *AND* on-board graphics. The closest I found is this Abit board, http://www.newegg.com/Product/Product.a ... 6813127222, but I'm a bit wary of Abit since my Dual Celeron days.. not particularly stable!

As for stopping my drives, I'm using hardware RAID-5, so that's a bit tough. I'd probably go software if I did it again - much easier to manage, ironically! (Two years ago that wasn't the case!) I can't really control the HW RAID from Linux too well, so I just let 'em rip. Not sure they'd get much chance to spin down anyway, as I serve quite a few websites at the moment and have lots of hourly automated backups running.

That Chenbro case looks nice - I was gandering at it. Seems solid, and I'm a fan of plastic thingies that help you pull fans out, route wires, etc.

We're talking a few $$$, though.. yikes! With the drives & the processor, it's starting to add up!

sthayashi
*Lifetime Patron*
Posts: 3214
Joined: Wed Nov 12, 2003 10:06 am
Location: Pittsburgh, PA

Post by sthayashi » Tue Aug 30, 2005 7:16 pm

The only advice I have to give is to settle for a cheap-ass graphics card. Built-in graphics will be difficult to find alongside decent server-level boards that take an Athlon X2 rather than an Opteron.

If you're willing to use Opteron or Intel processors, have a look at Tyan and Supermicro boards.

matt_garman
*Lifetime Patron*
Posts: 541
Joined: Sun Jan 04, 2004 11:35 am
Location: Chicago, Ill., USA
Contact:

Post by matt_garman » Tue Aug 30, 2005 7:56 pm

elg2001 wrote:My guess would be an Asus board, because it supports both CnQ and Q-Fan to keep noise down (if it matters).

As for a case, I'll be building a server with 12 hard drives, so this was by far the best case I found:

http://www.newegg.com/Product/Product.a ... 6811123079

Newegg sells the exact same case, rebranded by other manufacturers, in the server case section for over $600. All hard drive bays are cooled by 120mm fans (one fan per 4 drives).

How about keeping the hard drives you have now and using Linux power management software to spin down the drives when they're not being used? With 12 drives I'll have to do that. I'll be using software RAID-5 because it's the fastest and one of the most well-tested solutions. Linux software RAID-5 is faster than hardware RAID-5 in every single review I've ever seen, assuming your CPU has a few cycles to spare. Calculating XOR is not hard for modern processors.
Yes, the Chenbro SR107! I actually just bought one of those (for a basement server as well). For what it's worth, I bought the black one from serversdirect.com. The base price is a bit more than Newegg's, but it's got free shipping.

Anyway, that case only holds eight hard drives (unless you're going to use the optical drive bays, but even then, I think you could only get 11 drives in there).

For what it's worth, the case is nice. But despite its size, it's actually a bit cramped to work in, at least in the hard drive area.

And, unless you swap the fans, don't expect anything near quiet. It comes stock with three Delta (!) 120mm fans. Yup, these are jet engines! But, if you're going to pack the drive bays completely full, you'll need a lot of air movement, and I don't believe typical SPCR-friendly fans will work here---they just won't move enough air.

I think I might be coming off as negative, but that's certainly not my intent. I did a bit of server case research before buying that one. Unless you're willing to spend some really big bucks, I couldn't find a better case for anywhere near the price. (I definitely don't regret buying it.)

And, I remember reading about this, but forgot until the case actually showed up: there's room for a 92mm fan in front of each hard drive cage. You have to provide the 92mm fans yourself. But I think you almost need a high-speed 92mm fan pushing air, and the stock high-speed 120mm fan pulling, to get sufficient airflow across those hard drives. They're just packed in so close that there's not much room for airflow (that I can see).

Now, if you only planned to have a "typical" setup, i.e. one or two drives tops, this case would be a great silent case! Of course, you'd have to replace all the fans (I don't think undervolting would even save these beasts). I suggested that the case was a bit cramped above, but I should clarify by saying that it's only cramped when you have many drives in there. Again, with a "normal" workstation-type setup, you would have great airflow through this case (even with low-speed fans).

Oh, one more gotcha: see my earlier post. There is a plate that separates the PSU "chamber" from the motherboard area. The motherboard standoffs are too darn close to that plate to use a heatsink that overhangs the motherboard at all. The SPCR favorite Zalman CNPS7000 simply won't fit (at least not without some hacking/bending).

The other thing I should note: this is my first experience building with a "real" server case. The last several computers I've built have been for typical applications, and they've all gone in the Antec 3700 (or 3000B). So I've grown accustomed to having plenty of room around all components. I have a hunch that, for most server cases, getting as many components in the case as possible takes priority over copious amounts of free space for increased airflow.

Anyway... I'd definitely recommend the SR107. I doubt you'll find a "true" server case with the Chenbro's features at anywhere near that price point. (And if you do, don't tell me---I'll just have to kick myself! :) )

VERiON
Posts: 233
Joined: Wed Aug 11, 2004 5:42 am
Location: EU

Post by VERiON » Wed Aug 31, 2005 4:30 am

Check out the CoolerMaster STACKER case. It's a full-tower case with drive bays from bottom to top. Not very cheap, but also not overpriced.

nick705
Posts: 1162
Joined: Tue Mar 23, 2004 3:26 pm
Location: UK

Re: 64x2 Home Server recommendations?

Post by nick705 » Wed Aug 31, 2005 6:45 am

plympton wrote: just a honkin' huge case with lots of airflow
Another vote for the CM Stacker, that describes it perfectly in one line... :lol:

You can fit in up to twelve HDDs if you buy an extra two of the 4-into-3 thingies with the integrated 120mm fan, and still have room for optical drives. There's also oodles of space to work with inside, which you'd need with twelve drives plus all the cabling...

IsaacKuo
Posts: 1705
Joined: Fri Jan 23, 2004 7:50 am
Location: Baton Rouge, Louisiana

Post by IsaacKuo » Wed Aug 31, 2005 6:48 am

Just curious--what sort of server are you running, that you need more than one processor? For reducing power consumption, obviously one processor is better than two (if it is sufficient computing power).

My own closet server is only a file server, so I just use a cheap VIA mobo/processor. The "case" is just a crate. The four hard drives are placed in a simple DIY "wind tunnel" (two hard drives, then an 80mm fan, then two more hard drives). All components are easily accessible; I just lift the lid off of the crate.

The original poster needs only 3 hard drives, so he doesn't need to get fancy with a zillion drive bays.

One last note--watercooling? In a server?

Gholam
Posts: 155
Joined: Mon Apr 26, 2004 10:09 am
Location: Israel
Contact:

Post by Gholam » Wed Aug 31, 2005 11:12 am

At work, our main server (running Exchange 2003 along with two databases, plus fileserver) is an Athlon64 X2 4400+ on a Tyan Tomcat K8E board in a Compucase 6A19. Tomcat K8E is nForce4 Ultra based, ATI RageXL onboard graphics, dual GbE LAN (nForce4 and a Broadcom NetXtreme), 2 serial ports and a parallel, firewire, etc.

plympton
Posts: 229
Joined: Sun Mar 14, 2004 11:40 am

Post by plympton » Wed Aug 31, 2005 3:21 pm

IsaacKuo wrote:Just curious--what sort of server are you running, that you need more than one processor? For reducing power consumption, obviously one processor is better than two (if it is sufficient computing power).

One last note--watercooling? In a server?
The current server is a multi-use server:
Personal file server (400+ gigs on RAID-5, mainly photos)
Public Webserver (about 10 low-traffic sites)
Personal Webserver for Photos - dynamic scaling of files; that's what uses the CPU on and off, as well as memory bandwidth, which I don't have enough of
Mail server for my own and my friends' e-mail. I don't trust anyone.

I could go with a single, but having had a dual for so long, I'd hate to give it up! I might reconsider, though... doesn't look like Socket 939 Cool'n'Quiet is quite ready for prime time. I was about set to get an Asus A8V (?) Socket 754 board, since it had a lot of nice built-in stuff and C'n'Q, but then the whole Dual Core thing just exploded on the scene...

-Dan

plympton
Posts: 229
Joined: Sun Mar 14, 2004 11:40 am

Post by plympton » Wed Aug 31, 2005 3:30 pm

Gholam wrote:At work, our main server (running Exchange 2003 along with two databases, plus fileserver) is an Athlon64 X2 4400+ on a Tyan Tomcat K8E board in a Compucase 6A19. Tomcat K8E is nForce4 Ultra based, ATI RageXL onboard graphics, dual GbE LAN (nForce4 and a Broadcom NetXtreme), 2 serial ports and a parallel, firewire, etc.
Do you know what the idle power-use is on the Dual-Core processors? I want to enable Cool'n'Quiet to have lots of power "on-tap" but run as low as possible most of the time.

-Dan

matt_garman
*Lifetime Patron*
Posts: 541
Joined: Sun Jan 04, 2004 11:35 am
Location: Chicago, Ill., USA
Contact:

Post by matt_garman » Wed Aug 31, 2005 5:03 pm

Gholam wrote:At work, our main server (running Exchange 2003 along with two databases, plus fileserver) is an Athlon64 X2 4400+ on a Tyan Tomcat K8E board in a Compucase 6A19. Tomcat K8E is nForce4 Ultra based, ATI RageXL onboard graphics, dual GbE LAN (nForce4 and a Broadcom NetXtreme), 2 serial ports and a parallel, firewire, etc.
Hmmm... notice that the Tyan motherboard has an nForce4 chipset but comes stock with a passive cooler? (At least according to the pictures at Newegg.)

Why can't more mobo makers do this?

Matt

plympton
Posts: 229
Joined: Sun Mar 14, 2004 11:40 am

System so far (didn't get the X2... yet)

Post by plympton » Wed Sep 07, 2005 7:25 pm

Gholam wrote:At work, our main server (running Exchange 2003 along with two databases, plus fileserver) is an Athlon64 X2 4400+ on a Tyan Tomcat K8E board in a Compucase 6A19. Tomcat K8E is nForce4 Ultra based, ATI RageXL onboard graphics, dual GbE LAN (nForce4 and a Broadcom NetXtreme), 2 serial ports and a parallel, firewire, etc.
Gholam - any idea what those servers use in terms of power when idle? Do you enable Cool'n'Quiet?

I picked up an Asus A8V-Deluxe + a Venice 3000+ the other day. Fedora Core 4 has been a PITA to get fully installed - up2date just STOPS downloading after a while. AARGH!!! Anyway, after restart after restart after restart, there's a good chance I'll chuck the Asus board for the Tyan. I like Tyan (Tiger MP, Tiger MPX). Not the most featured, but very reliable.

Anyway, this Asus setup is very power frugal:
Asus A8V Deluxe
Venice 3000+ running just shy of 2 GHz (220 FSB)
2 GB PC3200 Crucial RAM
Ati 7000 PCI Video Card
Seagate 250 GB HD (7200.8)
LG DVD+/-RW drive
S12-430 Power Supply
Belkin 350va UPS I had laying around (!!)

All running between 70-80 watts while doing the up2date stuff and file copying. Cool'n'Quiet is enabled - when the CPU is going, it heads north of 99 watts on the meter. Haven't hit 100 yet. :-)
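
If anyone wants to double-check that Cool'n'Quiet is actually throttling under Linux, the cpufreq sysfs entries show the current clock and governor once powernow-k8 is loaded. A rough sketch (the paths are the usual 2.6-kernel ones; they may differ on your kernel or distro):

```python
# Quick check of CPU frequency scaling (Cool'n'Quiet) via sysfs.
# Assumes a 2.6-style kernel with the powernow-k8/cpufreq drivers loaded;
# the paths below may differ on other setups.
import glob

for cpu in sorted(glob.glob("/sys/devices/system/cpu/cpu[0-9]*/cpufreq")):
    with open(cpu + "/scaling_cur_freq") as f:
        cur_khz = int(f.read())
    with open(cpu + "/scaling_governor") as f:
        governor = f.read().strip()
    print(f"{cpu}: {cur_khz // 1000} MHz, governor = {governor}")
```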

-Dan

~El~Jefe~
Friend of SPCR
Posts: 2887
Joined: Mon Feb 28, 2005 4:21 pm
Location: New York City zzzz
Contact:

Post by ~El~Jefe~ » Wed Sep 07, 2005 8:34 pm

Can't beat Tyan. It's built for people who don't expect their computers to ever behave strangely.

I think they're pricey, though.

Opterons are inflated as well.

12 HDs is a tall order :)

Nice and quiet!!
:roll:

plympton
Posts: 229
Joined: Sun Mar 14, 2004 11:40 am

Update: It's only a single now. :-)

Post by plympton » Sat Sep 10, 2005 8:31 am

Here's an update for my low-power (not necessarily silent, but sure quieter than the previous one) home server.

I got the components locally at Enu (http://www.enuinc.com). Nice outfit, limited but not bad selection of parts, and prices ended up same as Newegg (PS was $22 less than Newegg).

I got:
Asus A8V Deluxe motherboard - uses AGP for low-power graphics card
- Socket 939, 4 RAM Slots, Cool'n'Quiet
- Passive NB Chip (VIA runs cooler than nForce?)
- VIA better supported in Linux (??)
Athlon64 3000+ - decided to see if a single was "good enough"
- lowest power, cheapest (as I'm "at liberty" these days.. :-))
- came with a cooler + fan.. will use for now
Seagate 7200.8 250 GB SATA HD
- Native SATA, has NCQ (though my MB SATA doesn't use it)
- Lowest noise from StorageReview.com
- Relatively low heat + power usage + decent performance
- 5 Year Warranty
Seasonic S12-430 PS ($78!)
- Best balance of efficiency + price ("Fanless with fan" was > $170)
- Really quiet & efficient at low power draws
- Has the 8-pin connector for the Tyan motherboard that I still could get..

I had:
4 512 MB PC3200 sticks from Crucial
DVD-ROM + Floppy Drive
250 GB Maxtor IDE drive ("/backups")
Belkin 350VA UPS (laying around.. "wots dis?")
A few 80 mm fans

I found:
A local group here does PC Recycling (Hey, I'm in Portland :-)), and I was able to rummage through the stacks of HP and Dell toss-aways and find a really nice, large(r), ATX(ish) steel case that was pretty clean and dent-free for a whopping $5!! Cleaned it out, and looks great! It's got a nice MB tray that slides out (necessary, I found... :-(), and enough space to mount fans, etc, and the cable management was awesome for what could have been a pretty cramped case! Price couldn't be beat, too! :-)

So, all told, after installing Fedora Core 4, a few headaches, and some migration issues, I've got a PC that idles at ~77 degrees *F* and draws a whopping 71 WATTS! It goes up to ~99 watts under load, and to 78 when the IDE drive spins up for the cron backups (kinda worried about HD life, however.. :-))

For comparison, my current server has five 10,000 RPM SCSI drives + two 120 GB drives + one 250 GB drive + 5-6 case fans + two 430-watt power supplies + two Athlon 2 GHz processors, all drawing **500 watts** from a HUGE APC SmartUPS.

And I don't think I really have much less performance for what I'm using it for (mail, photo, music, and file servers).
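
(Back-of-envelope on what that drop is worth over a year, using the wall readings above; the real figure depends on how much time the old box actually spent at idle, so treat it as a rough upper bound.)

```python
# Rough yearly energy difference, using the wall readings from this thread:
# old box ~500 W, new box ~71 W at idle.
old_watts, new_watts = 500, 71
hours_per_year = 24 * 365
saved_kwh = (old_watts - new_watts) * hours_per_year / 1000
print(f"~{saved_kwh:.0f} kWh saved per year")  # prints: ~3758 kWh saved per year
```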

I'm a pretty happy guy, despite being sleep deprived from a week of wrangling with this (Linux Migration Issues... ugh). Photos later!

-Dan

ronjohn
Posts: 17
Joined: Thu Sep 15, 2005 8:47 am

Re: 64x2 Home Server recommendations?

Post by ronjohn » Thu Sep 15, 2005 7:30 pm

plympton wrote:I'll probably replace my five 73 GB SCSI RAID-5 drives with three 250 GB SATA-II drives in a RAID-5, too, hopefully reducing power.
Gah!!!!

With a 5-drive RAID set, you are "only" giving up 20% of raw capacity to RAID overhead.

With a 3-drive set, you'll be wasting 33% of capacity :!: :!: :!: :!:

Not something I'd do.
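
For anyone following along, the overhead is just one drive's worth of parity out of n, so usable space is (n - 1) times the drive size. A quick sketch with the drive counts from this thread:

```python
# RAID-5 capacity/overhead for equal-sized drives: one drive's worth of
# space goes to parity, so usable capacity is (n - 1) * drive_size.
def raid5_usable(n_drives, drive_gb):
    usable_gb = (n_drives - 1) * drive_gb
    overhead = 1 / n_drives  # fraction of raw space spent on parity
    return usable_gb, overhead

for n, size in [(5, 73), (3, 250)]:
    usable_gb, overhead = raid5_usable(n, size)
    print(f"{n} x {size} GB: {usable_gb} GB usable, {overhead:.0%} overhead")
# 5 x 73 GB: 292 GB usable, 20% overhead
# 3 x 250 GB: 500 GB usable, 33% overhead
```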

matt_garman
*Lifetime Patron*
Posts: 541
Joined: Sun Jan 04, 2004 11:35 am
Location: Chicago, Ill., USA
Contact:

Re: 64x2 Home Server recommendations?

Post by matt_garman » Thu Sep 15, 2005 8:29 pm

ronjohn wrote:With a 5-drive RAID set, you are "only" giving up 20% of raw capacity to RAID overhead.

With a 3-drive set, you'll be wasting 33% of capacity.

Not something I'd do.
I've been planning to build a RAID-5 file server in the "as-soon-as-I-have-the-money" future.

With my limited understanding of RAID-5, the total capacity is (basically) the sum total of all drives minus one. That "minus one" drive is used for error checking.

Now I thought I saw somewhere that the "minus one" drive has to have at least the capacity of any of the other drives in the array.

Do I remember that correctly, and if so, why is that?

The point you make above is that the fewer drives you have with RAID-5, the lower your capacity "efficiency". And that is consistent with what I recall reading.

But why, then, isn't there a simple "RAID-5" factor? Why must the "minus one" drive be at least as big as any of the other drives?

In other words, why isn't that "minus one" drive's size proportional to the sum total of all other drives?

Anyone know the details here?

Thanks!
Matt

ronjohn
Posts: 17
Joined: Thu Sep 15, 2005 8:47 am

Re: 64x2 Home Server recommendations?

Post by ronjohn » Thu Sep 15, 2005 9:39 pm

matt_garman wrote:
ronjohn wrote:With a 5-drive RAID set, you are "only" giving up 20% of raw capacity to RAID overhead.

With a 3-drive set, you'll be wasting 33% of capacity.

Not something I'd do.
I've been planning to build a RAID-5 file server in the "as-soon-as-I-have-the-money" future.
Why?
  • Coolness factor? Understandable.
  • Need for scads of continuous disk space? Get a 400GB drive. The GB/$ is much better, and you only use 1 3.5" slot.
  • Feeling of safety? You're not as safe as you think, since the likelihood is much greater that you'll accidentally delete/overwrite a file than that a drive will fail.
matt_garman wrote:With my limited understanding of RAID-5, the total capacity is (basically) the sum total of all drives minus one. That "minus one" drive is used for error checking.

Now I thought I saw somewhere that the "minus one" drive has to have at least the capacity of any of the other drives in the array.

Do I remember that correctly, and if so, why is that?
Yes. See below for the answer.
matt_garman wrote:The point you make above is that the fewer drives you have with RAID-5, the lower your capacity "efficiency". And that is consistent with what I recall reading.

But why, then, isn't there a simple "RAID-5" factor? Why must the "minus one" drive be at least as big as any of the other drives?

In other words, why isn't that "minus one" drive's size proportional to the sum total of all other drives?

Anyone know the details here?
The mathematics of the error correction code (ECC) indicate that for all the corresponding bits of raw data (i.e., sector 200,000 on each of the 4 "data" disks in a 5-disk RAID-5 system), there must be some place to put the ECC bits. In this example, that would be sector 200,000 of disk 5.

Note that when all the ECC bits are stored on the last disk, that is RAID-4, which isn't used anymore. In RAID-5, the ECC sectors are interspersed among all the drives (there is data and ECC on all drives). The benefit is that the RAID overhead is spread across all the drives, instead of being focused on 1 drive.

Clear as mud?
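
Maybe a toy sketch helps: below is roughly how the parity rotates across a 5-disk array (just the idea, not md's exact layout or rotation order). In RAID-4 the parity column would always be the last disk; in RAID-5 it moves over by one each stripe, so every disk ends up holding a mix of data and parity.

```python
# Toy picture of RAID-4 vs RAID-5 parity placement on a 5-disk array.
# Illustration only; real implementations (e.g. md) pick their own rotation.
N_DISKS = 5

def parity_disk_raid4(stripe):
    return N_DISKS - 1  # parity always lives on the last disk

def parity_disk_raid5(stripe):
    return (N_DISKS - 1 - stripe) % N_DISKS  # parity rotates every stripe

for stripe in range(6):
    row = ["P" if d == parity_disk_raid5(stripe) else "D" for d in range(N_DISKS)]
    print(f"stripe {stripe}:  {' '.join(row)}   (RAID-4 would put P on disk {parity_disk_raid4(stripe)})")
```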

matt_garman
*Lifetime Patron*
Posts: 541
Joined: Sun Jan 04, 2004 11:35 am
Location: Chicago, Ill., USA
Contact:

Re: 64x2 Home Server recommendations?

Post by matt_garman » Fri Sep 16, 2005 6:30 am

ronjohn wrote:
matt_garman wrote:I've been planning to build a RAID-5 file server in the "as-soon-as-I-have-the-money" future.
Why?
  • Coolness factor? Understandable.
That's a little bit of it :)
ronjohn wrote:
  • Need for scads of continuous disk space? Get a 400GB drive. The GB/$ is much better, and you only use 1 3.5" slot.
  • Feeling of safety? You're not as safe as you think, since the likelihood is much greater that you'll accidentally delete/overwrite a file than that a drive will fail.
Yes and yes. I want such a vast amount of storage that I can't find an affordable backup system (I know, shame on me). So I'm thinking that the next best thing is to use RAID to minimize the chance of data loss.

For what it's worth, my "file server" will serve two main purposes: be a data store for a video-on-demand system, and be a live backup for my "real" data. My "real" data is small enough that I can back it up to (data) DVD. So the vast majority of the store will be for storing copies of my (video) DVDs.

Also, doesn't RAID-5 have a feature where you can add a drive, and it's automatically "assimilated" into the array collective? E.g., say your RAID-5 array is 3x250 GB drives, so you have 500 GB storage. Can't I just add in another 250 GB drive, and now be looking at 750 GB of storage?
ronjohn wrote:The mathematics of the error correction code (ECC) indicate that for all the corresponding bits of raw data (i.e., sector 200,000 on each of the 4 "data" disks in a 5-disk RAID-5 system), there must be some place to put the ECC bits. In this example, that would be sector 200,000 of disk 5.

Note that when all the ECC bits are stored on the last disk, that is RAID-4, which isn't used anymore. In RAID-5, the ECC sectors are interspersed among all the drives (there is data and ECC on all drives). The benefit is that the RAID overhead is spread across all the drives, instead of being focused on 1 drive.

Clear as mud?
More "mud" than "clear" unfortunately :) I think I need to read up on the mathematics of the ECC codes to get a better understanding. (My intuition says that the number of ECC bits would always be constant for a number of data bits, e.g. 20 bytes of data always takes 5 bytes of ECC. But it looks like it's not as simple as that!)

Take care,
Matt

ronjohn
Posts: 17
Joined: Thu Sep 15, 2005 8:47 am

Re: 64x2 Home Server recommendations?

Post by ronjohn » Fri Sep 16, 2005 10:28 am

matt_garman wrote:
ronjohn wrote:
  • Need for scads of continuous disk space? Get a 400GB drive. The GB/$ is much better, and you only use 1 3.5" slot.
  • Feeling of safety? You're not as safe as you think, since the likelihood is much greater that you'll accidentally delete/overwrite a file than that a drive will fail.
Yes and yes. I want such a vast amount of storage that I can't find an affordable backup system (I know, shame on me). So I'm thinking that the next best thing is to use RAID to minimize the chance of data loss.
That's called a manifestation of geek testosterone. Been there, done that. In different forms, of course. When I was in college, 30 MB drives were $1000.
matt_garman wrote:For what it's worth, my "file server" will serve two main purposes: be a data store for a video-on-demand system, and be a live backup for my "real" data. My "real" data is small enough that I can back it up to (data) DVD. So the vast majority of the store will be for storing copies of my (video) DVDs.
If the server will just contain copies of your DVDs, why need RAID-5?
matt_garman wrote:Also, doesn't RAID-5 have a feature where you can add a drive, and it's automatically "assimilated" into the array collective? E.g., say your RAID-5 array is 3x250 GB drives, so you have 500 GB storage. Can't I just add in another 250 GB drive, and now be looking at 750 GB of storage?
Not with the big-iron RAID systems I work with, anyway. Maybe PC systems are different. I'd be really leery about it, though, with my data.
matt_garman wrote:
ronjohn wrote:The mathematics of the error correction code (ECC) indicate that for all the corresponding bits of raw data (i.e., sector 200,000 on each of the 4 <snip>
all drives. The benefit is that the RAID overhead is spread across all the drives, instead of being focused on 1 drive.

Clear as mud?
More "mud" than "clear" unfortunately :)
Sorry. I'm better at explaining history than math.
matt_garman wrote:I think I need to read up on the mathematics of the ECC codes to get a better understanding. (My intuition says that the number of ECC bits would always be constant for a number of data bits, e.g. 20 bytes of data always takes 5 bytes of ECC. But it looks like it's not as simple as that!)
No need to know the math. Wanting to is, of course, different.
http://www.adaptec.com/worldwide/produc ... on_of_raid

plympton
Posts: 229
Joined: Sun Mar 14, 2004 11:40 am

Re: 64x2 Home Server recommendations?

Post by plympton » Fri Sep 16, 2005 10:51 am

matt_garman wrote:Also, doesn't RAID-5 have a feature where you can add a drive, and it's automatically "assimilated" into the array collective? E.g., say your RAID-5 array is 3x250 GB drives, so you have 500 GB storage. Can't I just add in another 250 GB drive, and now be looking at 750 GB of storage?
Nope, not with most RAIDs - you have to rebuild them. There might be some software to help, but I haven't seen any. Even rebuilding the RAID after you pull a drive isn't always automatic - you have to "pause" it from writing any more, and then you have to initiate a rebuild.

-Dan

ronjohn
Posts: 17
Joined: Thu Sep 15, 2005 8:47 am

Re: 64x2 Home Server recommendations?

Post by ronjohn » Fri Sep 16, 2005 12:00 pm

plympton wrote:Even rebuilding the RAID after you pull a drive isn't always automatic - you have to "pause" it from writing any more, and then you have to initiate a rebuild.
I've never had to do that before.

But then, our RAID controllers cost $50,000 and have a GB of cache RAM.

elg2001
Posts: 24
Joined: Fri Jul 22, 2005 4:54 pm

Post by elg2001 » Fri Sep 16, 2005 12:23 pm

LOL, $50,000!

Wow, that's impressive. Do they come with a free midget to maintain the card for you?

elg2001
Posts: 24
Joined: Fri Jul 22, 2005 4:54 pm

Post by elg2001 » Fri Sep 16, 2005 12:32 pm

By the way, you CAN fit 12 drives in the Chenbro case: 8 in the fan-cooled stock locations, one in each 5.25" bay, and one in the vertical floppy bay. Yes, it's a tight fit, but I think it would work.

Obviously, since you already have the case, if it doesn't work that way please let me know before I buy it :)

ronjohn
Posts: 17
Joined: Thu Sep 15, 2005 8:47 am

Post by ronjohn » Fri Sep 16, 2005 1:03 pm

elg2001 wrote:LOL $50,000

wow thats impressive. do they come with a free midget to maintain the card for you?
Not a card - they are 1U controllers: embedded computers dedicated to RAID. A SCSI cable then connects one to a SCSI card in the computer.

The SCSI card then sees it as one big device.

Hifriday
Patron of SPCR
Posts: 237
Joined: Thu Aug 05, 2004 3:32 pm

Post by Hifriday » Fri Sep 16, 2005 8:08 pm

Gholam wrote:At work, our main server (running Exchange 2003 along with two databases, plus fileserver) is an Athlon64 X2 4400+ on a Tyan Tomcat K8E board in a Compucase 6A19. Tomcat K8E is nForce4 Ultra based, ATI RageXL onboard graphics, dual GbE LAN (nForce4 and a Broadcom NetXtreme), 2 serial ports and a parallel, firewire, etc.
Hi Gholam,
Did you/your company put together the server or was it purchased prebuilt? Also how's the performance and roughly how many users is it serving?

MattHelm
Posts: 41
Joined: Fri Aug 27, 2004 5:38 pm
Location: Chicago, IL

Re: 64x2 Home Server recommendations?

Post by MattHelm » Fri Sep 23, 2005 10:33 am

Most of the high-end new cards (64-bit PCI and PCI-E) will let you add a drive with nothing more than jumping into the setup and telling it to add a drive, but at a guess, all that amounts to is some special rebuilding software. Linux software RAID will also let you do this.

plympton
Posts: 229
Joined: Sun Mar 14, 2004 11:40 am

Pictures of my new system (insides)

Post by plympton » Mon Sep 26, 2005 3:12 pm

Hey all,

I finally got the pictures of my new server posted - only 4 of the 6 fans are actually hooked up: PSU, CPU, side-panel CPU, and front intake. Running Cool'n'Quiet on Fedora Core 4, idling at 1000 MHz (I decided not to up the bus speed any, since it's a server), it runs at about 72 degrees F, and the entire setup (including the UPS, router, DSL modem, one Gig-E switch and one 10/100 switch) draws 95 watts total, down from about 500 watts before.

I'm pretty sure I could get by without at least one, if not two, of the fans, but I'm leaving them in for redundancy at the moment.

It was an awesome case for $5. Sure wish all the other components cost like that! :-)

http://www.freepdx.com/link.php?id=379

Pretty cool! (pun intended. :-))

-Dan
