Quiet Diskless Folding@Home

A forum just for SPCR's folding team... by request.

Moderators: NeilBlanchard, Ralf Hutter, sthayashi, Lawrence Lee

haysdb
Patron of SPCR
Posts: 2425
Joined: Fri Aug 29, 2003 11:09 pm
Location: Earth

Post by haysdb » Sat Jan 03, 2004 9:56 pm

Neil,

The Biostar is the first whose network adapter has given me any problems. My number one beef is with unstable/flaky BIOSes. My number two beef is with some board layout issues and some missing features that would have added only pennies to the price, like an LED showing the board has power, and a "buzzer" for signaling POST codes.

It's interesting that some of the NForce boards use Realtek network controllers. I did not know that. I wonder why?

David

haysdb
Patron of SPCR
Posts: 2425
Joined: Fri Aug 29, 2003 11:09 pm
Location: Earth

Post by haysdb » Sat Jan 03, 2004 10:32 pm

I just realized that my Shuttle XPC Linux server makes more noise than my four blades combined. The XPC is on the floor to my left, the four blades on a card table to my right. If anything, the blades are closer and nearer to ear level, so they are at a disadvantage, and yet the primary sound I hear is from the XPC. The "character" of the sound is definitely different. The sounds from the farm are "smooth" and even soothing. The sound from the XPC has an edge to it. I suspect it's (1) the sound of the old IBM DeskStar hard drive, and (2) turbulence from the heatsink fan which is blowing through that heatpipe "radiator".

David

haysdb
Patron of SPCR
Posts: 2425
Joined: Fri Aug 29, 2003 11:09 pm
Location: Earth

Post by haysdb » Wed Jan 07, 2004 2:06 pm

My 5th "blade" arrived today. I decided to buy everything from one vendor this time to minimize shipping charges, and except for the board and the cpu, I went ultra cheap on everything else.

Motherboard: Asus A7V8X-MX micro-ATX. I will have more to say after I get it hooked up tonight. Here are a few first impressions:
  • This is one PLAIN looking motherboard. I don't know what it means, but there are fewer capacitors (?) around the cpu than my other micro-ATX boards, and capacitors are scattered all over the rest of the board in a seemingly haphazard fashion. There is a ton of open space on this board, including space for a number of components not mounted (USB headers, COM header, SATA headers, a large IC, additional capacitors). If the board works well, this means less than nothing, but the appearance is of a stripped-down budget OEM board, which is not what I am used to seeing from ASUS.
  • Has heatsink mounting holes
  • Has an LED showing the board has power - Yeah. But still nothing to signal POST codes. :( The manual lists the "beep codes" but there is no "buzzer" onboard.
  • No 4-pin 12V ATX connector such as the Biostar and Abit boards have. Not a big deal, but with the higher-powered CPUs, I am more comfortable with a 12V ATX connector.
  • The manual appears to be pretty good. At least it has been edited by someone with a good command of the English language.
HSF: Speeze 5F286B. For $9, this looks like a pretty darn nice heatsink. The base is mushroom shaped to clear components around the socket, flaring out to accommodate a metal 80mm fan. The fan cable is sleeved, which is something you don't expect for $9. The fan is very quiet undervolted. Even at 12V it's not terrible; the specs state 2700 RPM. There is a round copper inlay in the base of the aluminum heatsink, about the size of a $1 coin, probably not very thick since it doesn't go all the way through the base. The surface of the base is a bit coarse. The clip requires a screwdriver to attach, but looks like it will do the job.

Power Supply: Generic (JGE brand) 200W micro-ATX. No "Noise Killer" but that's not a bad thing in my opinion. In return, the fan is connected via a 2-pin header, so swapping out the fan for an L1A will be trivial (no soldering). This is definitely a cheap power supply, a step down from the Fortrons, but I had to find out just how low I could go. As it turns out, not this far. This PSU has some serious coil whine. That's one thing about the Fortron PSUs - once the overzealous fan is dealt with, they are nice and quiet - no whine or buzz.

CPU: Athlon 2600+

Memory: Generic PC2700

David

haysdb
Patron of SPCR
Posts: 2425
Joined: Fri Aug 29, 2003 11:09 pm
Location: Earth

Post by haysdb » Wed Mar 17, 2004 12:47 pm

A lot has happened since I started my farm. I have been through at least 7 or 8 motherboards now, most of which I returned, and I have a lot more experience with cooling, overclocking, and issues which seem minor until you have to deal with them day after day, namely poor placement of ATX power connectors.

I have developed some quite strong opinions on what works and what doesn't work, and why. I have tried to update the initial post, so that people coming into this thread for the first time won't get obsolete recommendations, but it's not very cohesive and subsequent posts will probably be confusing.

Despite the often rocky path, diskless blade farms are a viable option, especially with the proper selection of components. The cost is dramatically less than complete systems, day to day maintenance is less, and the amount of space used is much less than individual machines. There are downsides too, of course, but I would absolutely not do it any other way.

David

AZBrandon
Friend of SPCR
Posts: 867
Joined: Sun Mar 21, 2004 5:47 pm
Location: Phoenix, AZ

Post by AZBrandon » Thu Mar 25, 2004 1:54 pm

haysdb wrote:Despite the often rocky path, diskless blade farms are a viable option, especially with the proper selection of components. The cost is dramatically less than complete systems, day to day maintenance is less, and the amount of space used is much less than individual machines. There are downsides too, of course, but I would absolutely not do it any other way.

David
So what would you call your true per-unit cost at this point, if you were to start from scratch and set up 4 blades? By true cost, I mean including whatever it takes to mount them all up and OS (if purchased, versus downloaded Linux), hubs, cables, etc. I'm just kind of curious. I've been playing with the idea too, but doing each blade actually in a case. It seems you can do one in a case for maybe $300 or so for a 2.6GHz super-basic diskless system, booting off CD. Then just add costs for a simple hub and whatever OS would be best for a diskless farm.

haysdb
Patron of SPCR
Posts: 2425
Joined: Fri Aug 29, 2003 11:09 pm
Location: Earth

Post by haysdb » Thu Mar 25, 2004 2:31 pm

AZBrandon wrote:So what would you call your true per-unit cost at this point, if you were to start from scratch and set up 4 blades? By true cost, I mean including whatever it takes to mount them all up and OS (if purchased, versus downloaded Linux), hubs, cables, etc. I'm just kind of curious. I've been playing with the idea too, but doing each blade actually in a case. It seems you can do one in a case for maybe $300 or so for a 2.6GHz super-basic diskless system, booting off CD. Then just add costs for a simple hub and whatever OS would be best for a diskless farm.
That's a tricky question since my "true cost" would have to include all the mistakes I made. :( And are we talking about my cost for a 7-blade farm or your cost for a 4-blade farm?

7 blade Athlon "farm in a box"

If I were starting from scratch, but know what I know now...
  • $200 per blade for the basic hardware (motherboard, cpu, power supply, and heatsink)
  • $500 for a complete PC to use as a server
  • $70 for "motherboard trays" (made from 1/8" aluminum sheet)
  • $30 for the "blade box" (materials only)
  • $50 for a hub and cables
  • $xx for a copy of Linux for the server
  • $30 for fans to provide additional cooling
  • $50 for a good surge suppressor
  • $xx for incidentals (resistors, heat shrink tubing, mounting hardware, etc.)
Total outlay: ~$2200
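As a sanity check, the priced line items sum like so (the two "$xx" items are left out, which is roughly what closes the gap to $2200):

```python
# Sum the priced line items for a 7-blade farm (figures from the list above)
blades = 7 * 200                       # basic hardware per blade
fixed = 500 + 70 + 30 + 50 + 30 + 50   # server, trays, box, hub+cables, fans, surge
print(blades + fixed)                  # 2130; Linux and incidentals make up the rest
```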

David

haysdb
Patron of SPCR
Posts: 2425
Joined: Fri Aug 29, 2003 11:09 pm
Location: Earth

Post by haysdb » Thu Mar 25, 2004 2:34 pm

It would also be possible to use an existing PC, even one running Windows, as the server for the diskless clients, which could cut $500 off the total cost.

I have only read about using a Windows server, so I don't know what this would involve. The blades would boot Linux from the Windows server. :D

David

AZBrandon
Friend of SPCR
Posts: 867
Joined: Sun Mar 21, 2004 5:47 pm
Location: Phoenix, AZ

Post by AZBrandon » Thu Mar 25, 2004 3:31 pm

That's very impressive... assuming you go the existing server route, it looks like that cuts $500 from your estimate, bringing the total cost for a 7-node blade cluster to $1700, or about $243 per node. Do you have any pictures, or measured dimensions of your blade enclosure? Also, have you measured its total power draw, such as with a Kill-A-Watt or similar device?

haysdb
Patron of SPCR
Posts: 2425
Joined: Fri Aug 29, 2003 11:09 pm
Location: Earth

Post by haysdb » Thu Mar 25, 2004 4:34 pm

AZBrandon wrote:That's very impressive... assuming you go the existing server route, it looks like that cuts $500 from your estimate, bringing the total cost for a 7-node blade cluster to $1700, or about $243 per node. Do you have any pictures, or measured dimensions of your blade enclosure? Also, have you measured its total power draw, such as with a Kill-A-Watt or similar device?
Ah, nice of you to ask! :lol:

I expect to finish "Enclosure II" tonight (but then I have been saying THAT for about a week now). I believe that anything worth doing is worth overdoing, so I decided to replace my original box with a new-and-improved version made from MDF instead of plywood, with improved cooling features. I kind of got carried away but it has been fun and has kept me out of trouble. I will take some pictures before I load the farm into it, so you can see my fine workmanship :P and then with all the equipment. Today or tomorrow or next week sometime. :oops:

I will also plug the power strip into the Kill-A-Watt, to give a total power draw for the 7 blades (not counting the server).

An alternative to a separate server would be to make one of the blades the server. I rather like this idea since (assuming identical hardware is used for each blade) ANY blade could serve as the server. This would provide some backup or redundancy for the server, which is the "weak link" inasmuch as a server failure brings the entire farm down.

David

eneuman
Posts: 3
Joined: Fri Mar 26, 2004 11:18 pm
Location: Overland Park, KS

Post by eneuman » Fri Mar 26, 2004 11:46 pm

Nice setup. Do you plan to do another after this one?

One solution for the server is to add a spare HDD to one of the nodes, and have the whole cluster, server included, in the case.

Also, have you considered running Linux? Jason at EOCF has a nice article here: http://www.extremeoverclocking.com/arti ... arm_1.html

Another idea for powering the nodes is to use one PSU for two nodes. Here: http://forums.extremeoverclocking.com/s ... adid=33884

My general notes on farm building:
- The Biostar M7NCG 400 is the best board.
- Barton, Thoroughbred, and Applebred cores fold almost identically.
- You only need 128MB of PC2700, and the M7NCG 400 doesn't support DDR400.
- Power supply here: http://store.yahoo.com/amamax/psspecial2fnlp6100e.html
- Use threaded rod and washers to mount motherboards.
- Stock cooling is adequate to run the CPU @ 2200MHz.
- RAID 0 on the server.
- A case/enclosure is not required (aesthetics only).

I have four individual nodes set up like this, except I have a number of 3.3GB HDDs, so I run Win2000 Pro instead of diskless.

-eneuman@eocf's

haysdb
Patron of SPCR
Posts: 2425
Joined: Fri Aug 29, 2003 11:09 pm
Location: Earth

Post by haysdb » Sat Mar 27, 2004 1:32 am

eneuman,

Welcome. I know your name from the EOC forum.

I agree with all of your farm building recommendations. The Biostar M7NCG's are excellent boards. I have four of them. As you point out, they do not support DDR 400 (when using onboard video), but this is a minor issue IMO.
eneuman wrote:one solution for the server is to add a spare hdd to one of the nodes, and have the whole cluster with the server in the case.
haysdb wrote:An alternative to a separate server would be to make one of the blades the server.
The blades are running Linux. LTSP. The server is running RedHat 9.
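For anyone curious what the diskless boot side looks like: with LTSP, the blades network-boot from the server, which hands out addresses, a kernel, and an NFS root path over DHCP. A minimal sketch of the relevant dhcpd.conf stanza follows - the subnet, addresses, and paths here are placeholders, not my actual config (LTSP ships sample configs that should be preferred):

```
# /etc/dhcpd.conf (sketch) - subnet, range, and paths are placeholders
subnet 192.168.0.0 netmask 255.255.255.0 {
    range 192.168.0.100 192.168.0.200;
    next-server 192.168.0.1;                        # TFTP server (the farm server)
    filename "/lts/vmlinuz-ltsp";                   # kernel the blades boot
    option root-path "192.168.0.1:/opt/ltsp/i386";  # NFS root for the blades
}
```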

David

haysdb
Patron of SPCR
Posts: 2425
Joined: Fri Aug 29, 2003 11:09 pm
Location: Earth

Post by haysdb » Sat Mar 27, 2004 2:55 am

BTW, Jason's article was one of my sources when I put my farm together.

Do I plan to do another one? I'd like to, but that's not likely to happen in the foreseeable future. No money.

The new box is done. I want to get some pictures of just the box before I load everything into it, and then pictures of the complete "farm".

The dimensions of the box are 33(W)x11(D)x15(H) inches - just over 3 cubic feet. If there is a smaller 7 node farm anywhere, I'd like to see it. :D

David

eneuman
Posts: 3
Joined: Fri Mar 26, 2004 11:18 pm
Location: Overland Park, KS

Post by eneuman » Sun Mar 28, 2004 12:01 am

I feel so stupid right now. I just now recognized your name over there and then remembered you saying you were from here. :FAH: (you know what i mean) :p

AZBrandon
Friend of SPCR
Posts: 867
Joined: Sun Mar 21, 2004 5:47 pm
Location: Phoenix, AZ

Post by AZBrandon » Mon Mar 29, 2004 12:26 pm

haysdb wrote:The dimensions of the box are 33(W)x11(D)x15(H) - just over 3 cubic feet. If there is a smaller 7 node farm anywhere, I'd like to see it. :D

David
That's quite impressive! So about how much of your PPW is accounted for by the farm?

haysdb
Patron of SPCR
Posts: 2425
Joined: Fri Aug 29, 2003 11:09 pm
Location: Earth

Post by haysdb » Mon Mar 29, 2004 12:54 pm

AZBrandon wrote:
haysdb wrote:The dimensions of the box are 33(W)x11(D)x15(H) - just over 3 cubic feet. If there is a smaller 7 node farm anywhere, I'd like to see it. :D

David
That's quite impressive! So about how much of your PPW is accounted for by the farm?
On average, the 7 blades plus the server (~17.2 GHz) contribute ~5K PPW. Three P4 systems (7.5 GHz) contribute ~2K PPW.
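Putting those two figures side by side, as a rough efficiency comparison and nothing more (using the rounded GHz and PPW numbers above):

```python
# Back-of-the-envelope PPW per aggregate GHz, from the rounded figures above
farm_ghz, farm_ppw = 17.2, 5000   # 7 blades + server
p4_ghz, p4_ppw = 7.5, 2000        # three P4 systems

print(f"farm: {farm_ppw / farm_ghz:.0f} PPW/GHz")  # ~291
print(f"P4s:  {p4_ppw / p4_ghz:.0f} PPW/GHz")      # ~267
```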

David
