Real-World file server CPU requirements

Our "pub" where you can post about things completely Off Topic or about non-silent PC issues.

Moderators: NeilBlanchard, Ralf Hutter, sthayashi, Lawrence Lee

washu
Posts: 571
Joined: Thu Nov 19, 2009 10:20 am
Location: Ottawa

Re: Real-World file server CPU requirements

Post by washu » Sat Apr 12, 2014 12:11 pm

andyb wrote:http://uk.startech.com/Cards-Adapters/H ... PEXSAT34RH

It uses a Marvell controller, and I have used a few "Startech" products in the past; they are cheap, but appear to be well-made, decent products with vanilla drivers, and none of them have failed.
It should do fine. Startech is a bit expensive over here, but otherwise fine. Just make sure your MB has a slot for it. An x1 slot will not work (unless you hack it, but it would still run at x1). I've usually put my RAID cards in the x16 slot normally used for video and never had an issue, but I've heard sometimes this does not work. Many current boards have a physical x16 / electrical x4 slot off the chipset which should always work.
I can and will buy it for £56 if it has your blessing. I can then get a socket 1150 motherboard and use the "fake" Intel RAID-5, with the 2 spare SATA ports and the 4 on this card for my backup drives - problem solved.

At current prices that would make my entire server re-build £710.

£56 for that controller card
£52 for motherboard
£28 for 4GB DDR3
£32 for a 2.7GHz Celeron
£540 for 4x 4TB WD Red HDD's
Sounds good, but just make sure the MB actually has RAID. The lower end Intel chipsets don't have it. Assuming we are talking 1150 chipsets (Haswell), the Z87, H87 and Q87 have it. The H81, Q85 and B85 do not.

washu
Posts: 571
Joined: Thu Nov 19, 2009 10:20 am
Location: Ottawa

Re: Real-World file server CPU requirements

Post by washu » Sat Apr 12, 2014 12:15 pm

Just to add, there is probably nothing wrong with using the Highpoint card as just an HBA in Linux/BSD with softraid if you keep your backup drives all on the MB. With most MBs this would give you 5 backup drives (6 - boot drive), which hopefully would be enough. Just don't put all your eggs in one basket (and keep your backups up to date :-) )

andyb
Patron of SPCR
Posts: 3307
Joined: Wed Dec 15, 2004 12:00 pm
Location: Essex, England

Re: Real-World file server CPU requirements

Post by andyb » Sat Apr 12, 2014 3:01 pm

Sounds good, but just make sure the MB actually has RAID. The lower end Intel chipsets don't have it. Assuming we are talking 1150 chipsets (Haswell), the Z87, H87 and Q87 have it. The H81, Q85 and B85 do not.
I had quoted a B85 board, so it's a good thing you mentioned that; I had made the mistake of assuming that all Intel boards above the entry-level one supported RAID.

That's not really an issue though, as the H87 boards are not much more expensive, and most have 2x x16 PCI-E slots. That will make life quite easy, as I can have the new controller AND my existing RAID card plugged in at the same time, which for the sake of interest means that I can benchmark the lot.

As for backups, I don't have any problems as I semi-automate them wherever possible.
Just to add, there is probably nothing wrong with using the Highpoint card as just an HBA in Linux/BSD with softraid if you keep your backup drives all on the MB.
I am confused. Do you mean that if I just have my 6x existing drives (old 2TB drives) attached to my existing RAID card, with all of them set to JBOD, I should be OK with my current controller? If so that will save me £56.


Many many thanks, Andy

washu
Posts: 571
Joined: Thu Nov 19, 2009 10:20 am
Location: Ottawa

Re: Real-World file server CPU requirements

Post by washu » Sat Apr 12, 2014 5:29 pm

andyb wrote:I am confused. Do you mean that if I just have my 6x existing drives (old 2TB drives) attached to my existing RAID card, with all of them set to JBOD, I should be OK with my current controller? If so that will save me £56.
What I mean is this: No matter which controller you end up using, make sure *ALL* your backup drives are on a different controller. If your controller takes a dump you still have your backups. As just a JBOD the Highpoint isn't too bad, as most of it is bypassed and you are mainly just using the Marvell chip. There are still a couple of Highpoint chips that could fail, which is why I say you should not run everything on the one card.

If you can, I highly suggest running a separate system for backups, but I do understand this is not always possible. With WOL you can leave the backup system off 99% of the time and still automate your backups.

I have my main arrays on my LSI card and local backups on Intel and Marvell controllers (MB has an extra Marvell). Then I have a separate system that gets turned on automatically by WOL every couple of days and backed up to.
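If anyone wants to script the wake-up side, the magic packet is trivial to generate: 6 bytes of 0xFF followed by the target MAC repeated 16 times, sent as a UDP broadcast. A minimal Python sketch - the MAC and broadcast address are placeholders, not my actual setup:

Code: Select all

import socket

def wake(mac, broadcast="255.255.255.255", port=9):
    """Send a Wake-on-LAN magic packet for the given MAC address."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("MAC address must be 6 bytes")
    packet = b"\xff" * 6 + mac_bytes * 16
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(packet, (broadcast, port))

if __name__ == "__main__":
    wake("00:11:22:33:44:55")   # placeholder MAC of the backup box

Cron that just before the backup job runs and have the backup box shut itself down when it finishes.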

andyb
Patron of SPCR
Posts: 3307
Joined: Wed Dec 15, 2004 12:00 pm
Location: Essex, England

Re: Real-World file server CPU requirements

Post by andyb » Sun Apr 13, 2014 4:22 am

What I mean is this: No matter which controller you end up using, make sure *ALL* your backup drives are on a different controller. If your controller takes a dump you still have your backups. As just a JBOD the Highpoint isn't too bad, as most of it is bypassed and you are mainly just using the Marvell chip. There are still a couple of Highpoint chips that could fail, which is why I say you should not run everything on the one card.
Well, that's certainly good news for my bank balance, and I shall keep my current RAID card as a simple 8-port controller card running 6x individual drives, with the 6-port Intel chipset running my RAID. If anything should fail, I will turn the server off until I have a replacement controller, swap it over and carry on without risking any (unlikely) 2nd controller failure, so as to have a complete data set at all times.

I certainly appreciate the idea of having 2 separate machines, ideally with the 2nd being off-site, but I certainly cannot stretch to that.

The next thing to happen is for my house to be sold; then I will have the cash available to buy my new setup, unless the WD 5TB drives are about to come out, as they will push down the price of the 4TB Red drives by a sizable amount. Seagate's 5TB HDD is in the wild as we speak, but is currently only being shipped in LaCie external drives (LaCie are now owned by Seagate), and I hope for WD's 5TB drive to launch sometime in the next 4-6 weeks.


Thank you so much for your fantastic help, regards Andy

Jay_S
*Lifetime Patron*
Posts: 715
Joined: Fri Feb 10, 2006 2:50 pm
Location: Milwaukee, WI

Re: Real-World file server CPU requirements

Post by Jay_S » Sun Apr 13, 2014 6:12 pm

@ andyb: are you using raid5 purely to achieve 100+ MB/s network I/O? Or uptime? How long are rebuilds @ 12TB?

We moved to RAID10 at work for our db server after reading Oracle's SAME paper and the unrelated but more entertaining BAARF. I don't know what your applications are, so this may not be appropriate for your needs.

andyb
Patron of SPCR
Posts: 3307
Joined: Wed Dec 15, 2004 12:00 pm
Location: Essex, England

Re: Real-World file server CPU requirements

Post by andyb » Mon Apr 14, 2014 2:50 pm

I originally went for RAID-5 a number of years ago (originally with 4x 1TB Drives) for the following reasons (in no specific order).

A single massive volume rather than several smaller volumes.
Redundancy without losing as much HDD space as RAID 1, or 10.
Better performance than a single drive.
Probably other reasons that I cannot recall ATM.

As for re-build time, that's why I bought a UPS. I can't remember for sure, but it was something like 12 hours (estimated); I can't even remember whether that was with the original 4x 1TB drives, or after I extended it to 6x 1TB drives, or whether it was with the replacement 4x 2TB drives.... Either way, I could and did happily use it at the time whilst it was re-building, and I am very happy to say that it took less time to re-build than it first calculated (by about 1 hour). It didn't bat an eyelid when I had a 2nd power outage WHILST it was rebuilding (before I got my UPS) and I didn't lose a single scrap of data - it did exactly what it should have done, and for that I am thankful, as at the time I had absolutely no backup system at all (except for my most precious stuff).

Since then my data collection has spiraled out of control and I have far more data than I can back up. I have 2x 2TB backup drives, only one of which even has data backed up onto it; the other one is labelled "overflow 1", so essentially I have about 1/4 of my data backed up, some with no redundancy at all. I only turn my server on when I need to dump more data onto it, or get data off (less frequent ATM); this has as much to do with how I use it as with giving as much protection to my existing data set as possible until I can afford to do a serious upgrade. Fingers crossed for the 5TB WD Red being released soon, and for my house sale to go through ;)

As for your database, surely the best solution ATM would be to replace the RAID 10 array for your database with RAID 1 SSDs, as they are vastly faster than spinning discs and have a much lower rate of failure. I only suggest this because you mentioned it was a database server, and therefore your (relative) amount of data should be small..... but that's another topic altogether.


Andy

Jay_S
*Lifetime Patron*
Posts: 715
Joined: Fri Feb 10, 2006 2:50 pm
Location: Milwaukee, WI

Re: Real-World file server CPU requirements

Post by Jay_S » Tue Apr 15, 2014 9:30 am

andyb wrote:I originally went for RAID-5 a number of years ago (originally with 4x 1TB Drives) for the following reasons (in no specific order).

A single massive volume rather than several smaller volumes.
Redundancy without losing as much HDD space as RAID 1, or 10.
Better performance than a single drive.
I think those were its major selling points. Today LVM, drive pooling, etc. take care of point 1; and single-drive performance has improved (100MB/s sustained throughput for high-density platters last time I looked into it), taking care of point 3 on a 1Gb/s LAN.

Re: point 2 above, do you need any redundancy in a server that you only turn on to dump data to/from? RAID's redundancy maintains availability, which is sort of moot if you're turning the server off.
andyb wrote:Since then my data collection has spiraled out of control and I have far more data than I can backup. I have 2x 2TB backup drives, only one of which even has data backed up onto it, the other one is labelled "overflow 1", so essentially I have about 1/4 of my data backed up, some with no redundancy at all. I only turn my server on when I need to dump more data onto it, or get data off (less frequent ATM) this has as much to do with how I use it as to give as much protection to my existing data set as possible until I can afford to do a serious upgrade.
I've always understood RAID to be about up-time rather than safety ... the whole "RAID ≠ backup" meme someone trots out every time someone mentions RAID in a forum ;). Small Net Builder's classic: Smart SOHOs Don't Do RAID.

Only you can know how valuable your data is, but assuming it's all somewhat valuable, I'd rather spend on backup than network file copy performance. Seconding what's already been suggested: I'd retain your current server as a backup target and build a new primary file server. I appreciate the goal of 100+ MB/s network transfers, but at the expense of backups?

For important stuff, I like the 3-2-1 backup mantra: keep 3 copies, on at least 2 different forms of media, where at least 1 copy is off-site. I have two low-powered servers, the D510MO music server I wrote about previously and an unRAID bulk file server. Neither can saturate my gigabit LAN. The unRAID server comes closest, but only for sustained reads (80MB/s from my WD10EADS, if I remember). Both are fast enough for my wallet. I picked unRAID for a variety of reasons, and I've been generally happy with it but it's definitely not for everyone (really slow writes, for one). The bulk of the unRAID server's data are rips of movies I own; those aren't backed up because a) I don't have the space and b) although I could re-rip them, I probably wouldn't miss most of them! But the unRAID server is the backup target for my music server. I have all the original CDs, but re-ripping, tagging and organizing my collection would take ages and drive me to (more) drinking. I take a 1TB USB drive off site with all my important data. My data set is much smaller than yours; I'm not sure how you'd implement such a backup strategy. Lots of externals, I suppose.

Smallnetbuilder's recent series on 10GbE NAS was interesting reading:
http://www.smallnetbuilder.com/labels/10GbE

[off topic] I looked at SSDs for our db server when we upgraded a few years back. At that time, SSDs were too new / untested for my boss to approve of. Our servers come from Dell, whose 5-yr next-business-day replacement warranty was more valuable than peak performance. In reality, RAID10 with 15k RPM drives is more performance than we can use. Our ERP software's client-side application is quite possibly the worst piece of software I've ever used. It's mind-bogglingly stupid, but we're stuck with it. It's a bigger bottleneck than our db server's storage subsystem.
andyb wrote:Fingers crossed ... [snip] ... for my house sale to go through ;)
Good luck! My wife and I will soon be trying to sell our house. I'm not looking forward to it.

andyb
Patron of SPCR
Posts: 3307
Joined: Wed Dec 15, 2004 12:00 pm
Location: Essex, England

Re: Real-World file server CPU requirements

Post by andyb » Tue Apr 15, 2014 11:49 am

Re: point 2 above, do you need any redundancy in a server that you only turn on to dump data to/from? RAID's redundancy maintains availability, which is sort of moot if you're turning the server off.
Having witnessed catastrophic HDD failure personally more than once, and having worked as a computer engineer for over a decade, I consider RAID-5's level of redundancy to be vital; specifically, it's one step above a simple backup solution, which I fully intend on keeping up to date.

As for the other points, you are right, there is a much lesser need now than there was, and I will be looking into what happens if you have a single volume spread over multiple physical HDDs and one of the drives fails - this is something for me to consider regarding my backups; life would be much easier if I could have a single 12TB backup partition to back up onto.
I've always understood RAID to be about up-time rather than safety ... the whole "RAID ≠ backup" meme someone trots out every time someone mentions RAID in a forum ;). Small Net Builder's classic: Smart SOHOs Don't Do RAID.
No form of RAID is as good as a backup, even RAID 1, which will simply duplicate corrupt files and viruses. RAID 1, 10, 5 or 6 are all better than no backup at all, but should not be considered a backup in and of themselves. One of my main concerns, which RAID-5 handles pretty well, is not so much the "up-time" and availability, but the ability to back up anything that has not yet been backed up from the RAID-5 array to other drives. This is a concern for me because, as many people have discovered after their house has been burgled, it often takes a while to figure out what's actually missing - that, and the simple ability to replace the faulty drive and carry on with little inconvenience.

If I had the money, I would have 2x separate machines, one with the live data and one with backups, and even more ideal would be for that backup machine to be off-site to protect against theft or catastrophic damage (fire, water, volcano etc).


Andy

Jay_S
*Lifetime Patron*
Posts: 715
Joined: Fri Feb 10, 2006 2:50 pm
Location: Milwaukee, WI

Re: Real-World file server CPU requirements

Post by Jay_S » Wed Apr 16, 2014 7:48 am

Thanks for starting this thread. I've learned (am still learning) a lot. But you lost me a bit in that last post. RAID and backup are not comparable, interchangeable or mutually exclusive - they serve different purposes. I think we agree on that.

There are risks of data loss / corruption with any scheme; raid 5 is not exempt. The probability of HDD errors during rebuilds is not trivial, especially with gigantic HDDs. This calculator will do the math:
http://www.raid-failure.com/raid5-failure.aspx

WD Red datasheet lists read error stats:
http://www.wdc.com/wdproducts/library/S ... 800002.pdf

Using the calculator and your proposed array:
Number of disks: 4
Disk size, GB: 4096
Error probability: 1 in 10^14 bits
Result: The probability of successfully completing a rebuild is 27.0%

Theoretical, sure. Nonetheless, YIKES and worth thinking about.
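If you want to sanity-check that figure, the arithmetic behind these calculators is simple. A sketch in Python, assuming the quoted spec is treated as a flat, independent per-bit error rate and the rebuild reads every bit of every disk:

Code: Select all

# Naive RAID-5 rebuild math: treat "1 error in 10^14 bits" as a hard,
# independent per-bit probability and count every bit read during a rebuild.
disks = 4
disk_gb = 4096
ure = 1e-14                          # unrecoverable read errors per bit

bits_read = disks * disk_gb * 1e9 * 8
p_clean = (1 - ure) ** bits_read
print(f"bits read during rebuild: {bits_read:.3g}")
print(f"probability of a clean rebuild: {p_clean:.1%}")   # ~27%, matching the calculator
# Counting only the 3 surviving disks instead gives roughly 37%.

So even within the naive model, the answer swings noticeably depending on what you assume actually gets read.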

I came across the raid failure calculator on this Storage Review forum thread: Is RAID 5/6 dead due to large drive capacities? I'm going to read through the linked Sun paper (Triple-Parity RAID and Beyond) if time permits. I'd love to hear more experienced members' opinions.

--

Back on topic, I suspect your current CPU is sufficient for 100MB/s + network transfers. Have you had time to experiment with a different OS? If you have a Win7 DVD or ISO, you could install Windows to a spare HDD skipping activation just for testing.

HFat
Posts: 1753
Joined: Thu Jul 03, 2008 4:27 am
Location: Switzerland

Re: Real-World file server CPU requirements

Post by HFat » Wed Apr 16, 2014 8:56 am

OK, opinion...
Jay_S wrote:Error probability: 1 in 10^14 bits
This isn't what WD's spec says. They say <1, not 1.
The actual likelihood of error is supposed to be lower. The issue is, how much lower? It depends on how conservative WD was and on whether they're trying to sell more expensive drives as more reliable.
In my experience, most errors aren't random, and truly random errors are rarer than this spec might lead you to believe. If you have a high error rate, there's a problem which can be narrowed down and solved. In fairness, non-random errors should have a higher probability than that... but not always much higher (see the infamous CERN bit error rate on their RAID arrays caused by WD firmware).
Try it for yourself! Even if you find your drives to be rock-solid, you might be surprised by what else you find. I have identified much higher rates of errors, or even repeatable errors, with controllers that people assumed were working properly.
I suspect the problems caused by random bit errors in drives are much rarer than the problems caused by misbehaving RAID soft/hardware or their intolerance to drives whose behavior is at variance with the designers' assumptions. Random bit errors in RAID hardware might be more common as well. Don't trust RAID setups implicitly, especially the cheaper ones: test them!

But I only wrote the above because I like to hear myself talk.
The real issue is: how will your RAID system handle all manner of errors and unexpected events? There's no reason for a RAID array to simply die on the first bit error, even if this type of error were that common. If you want to estimate risk, this is something you need to be informed about. And ideally something you'd need to test.
Another potential concern, also made worse by increasing capacities, is the likelihood that a drive will fail during a rebuild.

Also: large capacities make RAID1 and other more reliable modes than RAID5 more affordable.

And I don't agree that backups are a completely different topic. Sometimes you need to put it simply for people but there is an overlap. You can even use RAID for backups.
More to the point: if random errors in drives were a problem, they would also affect backups. You would therefore need to take care not only of your RAID arrays but also of data integrity in general... and of errors in your backups in particular if you neglected to use RAID. Do you even have a process to test the integrity of your backups? If not, I recommend you start there. It's not rocket science. As I mentioned several times here, even BitTorrent has such a feature!
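To show how simple it is, here is a bare-bones sketch of the manifest idea in Python (the paths and the choice of SHA-256 are arbitrary): hash everything when you write the backup, then re-run it against the copy whenever you like.

Code: Select all

# checksum_manifest.py - write and verify a SHA-256 manifest for a directory
# tree, so a backup can be re-checked for silent corruption later.
import hashlib, os, sys

def file_hash(path, chunk=1 << 20):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

def build(root):
    for dirpath, _, names in os.walk(root):
        for name in sorted(names):
            path = os.path.join(dirpath, name)
            print(f"{file_hash(path)}  {os.path.relpath(path, root)}")

def verify(root, manifest):
    bad = 0
    for line in open(manifest, encoding="utf-8"):
        digest, rel = line.rstrip("\n").split("  ", 1)
        path = os.path.join(root, rel)
        if not os.path.exists(path) or file_hash(path) != digest:
            print(f"MISMATCH: {rel}")
            bad += 1
    print(f"{bad} problem(s) found")

if __name__ == "__main__":
    # usage: python checksum_manifest.py build /data > manifest.txt
    #        python checksum_manifest.py verify /backup manifest.txt
    if sys.argv[1] == "build":
        build(sys.argv[2])
    else:
        verify(sys.argv[2], sys.argv[3])

sha256sum or any number of existing tools do the same job; the point is to have some process and to actually run it.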

washu
Posts: 571
Joined: Thu Nov 19, 2009 10:20 am
Location: Ottawa

Re: Real-World file server CPU requirements

Post by washu » Wed Apr 16, 2014 9:15 am

HFat is exactly right: in the real world the error rates are much lower and not random.

For example, that calculator says that my 4 x 2 TB Greens in RAID 5 would have a crazy 52% chance of failing during a rebuild. However, I have the array on an LSI 9260 which does a patrol read once a week, essentially reading every sector just like a rebuild would. It has been doing that for over 1.5 years, so at least 78 times without a single error. It doesn't take a math whiz to figure out that the 52% claim isn't even close to reality.

I'd run the calculator on several other arrays I've managed that did patrol reads, but the site has gone down.
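You don't really need the site anyway; the same naive math is a few lines. A sketch, assuming a weekly patrol read touches every bit of all four 2 TB drives and the 1-in-10^14 spec really were a hard random rate:

Code: Select all

import math

# What the naive per-bit model predicts for ~78 clean weekly patrol reads
# of a 4 x 2 TB array.
bits_per_pass = 4 * 2e12 * 8                  # four 2 TB drives, read end to end
p_clean_pass = math.exp(-1e-14 * bits_per_pass)
print(f"per-pass success: {p_clean_pass:.1%}")          # roughly 53%
print(f"78 clean passes:  {p_clean_pass ** 78:.1e}")    # ~2e-22, effectively impossible

Real arrays sail through patrol reads all the time, which tells you the spec is a conservative ceiling, not a measured random error rate.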

Jay_S
*Lifetime Patron*
Posts: 715
Joined: Fri Feb 10, 2006 2:50 pm
Location: Milwaukee, WI

Re: Real-World file server CPU requirements

Post by Jay_S » Wed Apr 16, 2014 10:09 am

Thanks both of you for taking the time.
HFat wrote:
Jay_S wrote:Error probability: 1 in 10^14 bits
This isn't what WD's spec says. They say <1, not 1.
You're right, sloppy reading on my part.
HFat wrote:There's no reason for a RAID array to simply die on the first bit error, even if this type of error was so common.
I wondered about this. Pardon my ignorance, but my take was that read errors can kill the array during the rebuild, not during normal operation. Because all the data on non-failed drives has to be read to reconstruct the failed drive.
HFat wrote:Another potential concern also made worse by increasing capacities is the likelyhood that a drive will fail during a rebuild.
This is partly the point of the 2009 Sun paper. Summary: capacity has outpaced throughput, lengthening time-at-risk during rebuilds.

HFat
Posts: 1753
Joined: Thu Jul 03, 2008 4:27 am
Location: Switzerland

Re: Real-World file server CPU requirements

Post by HFat » Wed Apr 16, 2014 10:42 am

Jay_S wrote:Pardon my ignorance, but my take was that read errors can kill the array during the rebuild, not during normal operation. Because all the data on non-failed drives has to be read to reconstruct the failed drive.
If an error is reported, all the RAID array needs to do is return errors when attempting to read that location and in the rebuild report. It doesn't need to kill the whole array (though the error might render the array useless anyway in some fairly rare cases).
Some RAID soft/hardware might conceivably kill the whole array however, for instance because the drive with the error keeps retrying the sector instead of responding normally. This is why you should take care of the details or trust reliable vendors and use only the configurations and drives they actually support, not just anything which seems to work.

If on the other hand there's a stealth bit error, it's a different issue.

washu
Posts: 571
Joined: Thu Nov 19, 2009 10:20 am
Location: Ottawa

Re: Real-World file server CPU requirements

Post by washu » Wed Apr 16, 2014 10:46 am

Jay_S wrote:I wondered about this. Pardon my ignorance, but my take was that read errors can kill the array during the rebuild, not during normal operation. Because all the data on non-failed drives has to be read to reconstruct the failed drive.
Read errors can crop up during normal operation if that area of the drive is read. Read errors for the most part are not random, so it doesn't matter why a sector is being read. If the RAID is not in a degraded state then the read error can be worked around, but I don't know of any controller no matter how crappy that would not flag the problem.
This is partly the point of the 2009 Sun paper. Summary: capacity has outpaced throughput, lengthening time-at-risk during rebuilds.
It's a paper written by a company that had a vested interest in selling their OS/hardware combo with a solution to the "problem": ZFS. It's not 100% BS, just greatly exaggerated.

HFat
Posts: 1753
Joined: Thu Jul 03, 2008 4:27 am
Location: Switzerland

Re: Real-World file server CPU requirements

Post by HFat » Wed Apr 16, 2014 10:56 am

Does ZFS really have a solution besides dual parity, something you don't need ZFS for?

Jay_S
*Lifetime Patron*
Posts: 715
Joined: Fri Feb 10, 2006 2:50 pm
Location: Milwaukee, WI

Re: Real-World file server CPU requirements

Post by Jay_S » Wed Apr 16, 2014 11:27 am

I tried assembling a ZFS test system years ago and gave up. OpenSolaris was too picky about hardware and what I had on hand was unsupported. The big attraction for me then was end-to-end check summing. Supposedly bit rot is impossible.

I didn't catch the reek of ZFS in the Sun paper. Skimming it, my takeaways were 1) raid 6 is still adequate, and 2) SSDs have the throughput to alleviate the need for triple parity.

@ HFat - regarding the use of Bit Torrent for backup verification, I did a brief search through SPCR's forums and found this post of yours. The purpose is to use the app to generate and verify check sums? I have practically zero experience with bit torrent. Is checksumming baked into bit torrent? I'd almost think it would have to be, to know that all the disparate pieces of a file got reassembled correctly.

washu
Posts: 571
Joined: Thu Nov 19, 2009 10:20 am
Location: Ottawa

Re: Real-World file server CPU requirements

Post by washu » Wed Apr 16, 2014 11:36 am

HFat wrote:Does ZFS really have a solution besides dual parity, something you don't need ZFS for?
ZFS does have triple parity (RAID-Z3) with newer versions, which does seem fairly unique. Not sure if it would be particularly useful outside of huge arrays.

washu
Posts: 571
Joined: Thu Nov 19, 2009 10:20 am
Location: Ottawa

Re: Real-World file server CPU requirements

Post by washu » Wed Apr 16, 2014 11:56 am

Jay_S wrote:I tried assembling a ZFS test system years ago and gave up. OpenSolaris was too picky about hardware and what I had on hand was unsupported.
FreeBSD is the way to go if you want ZFS. Much better hardware support.
The big attraction for me then was end-to-end check summing. Supposedly bit rot is impossible.
I'm not sure if you meant that bit rot is impossible or not. It's not "impossible" but it is extremely unlikely when talking about modern hard drives. Note this is not in any way the same as a hard drive failure. "Bit rot" in this context is a drive returning something different than what was written before without error.

ZFS check summing is not a bad idea in theory, but it is targeted at the wrong case (HD bit rot) and requires you to have decent hardware (ECC RAM at least) to trust it. It could help in a few edge cases like controller failure but it won't magically make your data corruption proof. Also, ZFS cannot work around any errors that would trip up a normal RAID with the same redundancy level. ZFS has error detection, not correction.

Fun fact: A basic WD Green has more checksum data per block than ZFS does.
@ HFat - regarding the use of Bit Torrent for backup verification, I did a brief search through SPCR's forums and found this post of yours. The purpose is to use the app to generate and verify check sums? I have practically zero experience with bit torrent. Is checksumming baked into bit torrent? I'd almost think it would have to be, to know that all the disparate pieces of a file got reassembled correctly.
Check summing is built into bittorrent.

If you are looking at check summing files and repairing errors take a look at PAR2. It can make recovery records for files that not only can verify them, but repair some damage. For example, you can have it create a 10% recovery record, which would allow up to 10% of the file to be damaged and successfully repaired.
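For the curious, the BitTorrent-style verification is nothing magic: the file is split into fixed-size pieces and each piece is hashed (BitTorrent uses SHA-1), so a check tells you exactly which pieces are damaged. A rough sketch of the idea in Python - the piece size is arbitrary:

Code: Select all

import hashlib

PIECE = 256 * 1024   # 256 KiB pieces, as an example

def piece_hashes(path):
    """Return a list of SHA-1 digests, one per fixed-size piece of the file."""
    hashes = []
    with open(path, "rb") as f:
        while piece := f.read(PIECE):
            hashes.append(hashlib.sha1(piece).hexdigest())
    return hashes

def damaged_pieces(path, expected):
    """Compare against a stored list of digests; return indices that differ."""
    current = piece_hashes(path)
    return [i for i, (a, b) in enumerate(zip(current, expected)) if a != b]

PAR2 goes one step further and stores enough extra data to actually rebuild the damaged pieces, not just find them.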

Jay_S
*Lifetime Patron*
Posts: 715
Joined: Fri Feb 10, 2006 2:50 pm
Location: Milwaukee, WI

Re: Real-World file server CPU requirements

Post by Jay_S » Thu Apr 17, 2014 10:40 am

Back to the original question on CPU requirements... Assuming
  • the network is up to the task (iperf to check, or see the quick sketch below), and
  • the storage is up to the task (ramdisk? to make sure), and
  • the OS is up to the task, then
A person could start with a too-powerful CPU and tweakable BIOS and work backwards, under-clocking and disabling cores until they could no longer saturate gigabit ethernet.
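To sanity-check the first assumption without installing anything, even a crude Python blaster will show whether the wire and NICs can move 110+ MB/s. Just a sketch (iperf is the proper tool; the port number and transfer size are arbitrary):

Code: Select all

# netblast.py - crude one-way TCP throughput check between two machines.
# Receiver:  python netblast.py recv
# Sender:    python netblast.py send <receiver-ip>
import socket, sys, time

PORT, CHUNK, TOTAL = 5201, 1 << 20, 2 * 1024**3   # 2 GiB test

if sys.argv[1] == "recv":
    with socket.create_server(("", PORT)) as srv:
        conn, _ = srv.accept()
        got, start = 0, time.time()
        while data := conn.recv(CHUNK):
            got += len(data)
        print(f"{got / (time.time() - start) / 1e6:.0f} MB/s received")
else:
    with socket.create_connection((sys.argv[2], PORT)) as s:
        buf, sent, start = b"\0" * CHUNK, 0, time.time()
        while sent < TOTAL:
            s.sendall(buf)
            sent += len(buf)
        print(f"{sent / (time.time() - start) / 1e6:.0f} MB/s sent")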

Washu has already stated that his D510MO could saturate gigabit ethernet, so I suspect the CPU requirements are lower than the Atom D510. My D510MO can't, but I suspect it's my slow notebook HDD. It doesn't have enough RAM to create a useful ramdisk on it. The next-slowest machine I have that underclocks is a C2D-era HTPC. It also has 8GB of RAM, so ... should I test with a 4GB ramdisk? 6GB?

I also learned that iozone cannot take advantage of one of SMB2's best features: multiple in-flight transfers. That may explain the discrepancies I'm seeing between iozone results and the transfer rates reported by Win7's file copy dialog. Clearly I need a newer tool!

washu
Posts: 571
Joined: Thu Nov 19, 2009 10:20 am
Location: Ottawa

Re: Real-World file server CPU requirements

Post by washu » Thu Apr 17, 2014 2:13 pm

Jay_S wrote:A person could start with a too-powerful CPU and tweakable BIOS and work backwards, under-clocking and disabling cores until they could no longer saturate gigabit ethernet.
This is actually pretty easy to test with the right system. My GA-C1007UN (Celeron 1007U, 2 X 1.5 GHz Ivy Bridge) can set the CPU to any speed in 100 MHz increments with a simple sysctl command.
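For anyone who wants to automate that kind of sweep, a rough sketch in Python using FreeBSD's cpufreq(4) sysctls (needs root, powerd must be stopped or it will fight the setting, and you still drive the copy test from the client at each step):

Code: Select all

# freq_sweep.py - step a FreeBSD box through its available CPU frequencies.
import subprocess, time

def sysctl(name, value=None):
    cmd = ["sysctl", "-n", name] if value is None else ["sysctl", f"{name}={value}"]
    return subprocess.check_output(cmd, text=True).strip()

# dev.cpu.0.freq_levels looks like "1500/25000 1400/23000 ... 100/1000" (MHz/power)
levels = [int(entry.split("/")[0]) for entry in sysctl("dev.cpu.0.freq_levels").split()]

for mhz in sorted(levels, reverse=True):
    sysctl("dev.cpu.0.freq", mhz)      # pin the CPU to this frequency
    time.sleep(2)                      # let things settle
    input(f"CPU pinned at {mhz} MHz - run the copy test, then press Enter")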

Source: Server 2008 R2, i5-2400, Intel Gig NIC, large RAID array, 4 GB ISO file.
Dest: FreeBSD with Samba, GA-C1007UN, onboard realtek, Intel 320 160 GB SSD.

Speeds reported from the Server 2008 R2 copy dialog

Code: Select all

1007U Clock  Copy Speed
900 MHz+     120 MB/sec
800 MHz      110 MB/sec
700 MHz      95 MB/sec
600 MHz      75 MB/sec
500 MHz      65 MB/sec
400 MHz      55 MB/sec
It can go down to 100 MHz, but that gives a pretty good picture. The limiting factor seems to be the CPU load caused by the Realtek NIC, so a better NIC would help if your system was really slow.

The server is running FreeBSD, which in my experience does not run Samba as well as Linux. With Linux or Windows I suspect the speed vs MHz ratio would be a bit better, but I'm not tearing down my router to test.

washu
Posts: 571
Joined: Thu Nov 19, 2009 10:20 am
Location: Ottawa

Re: Real-World file server CPU requirements

Post by washu » Thu Apr 17, 2014 7:27 pm

I was trying out Ubuntu 14.04 on a D510MO board so I thought I'd try the same speed test. Unfortunately the CPU speed setting does not appear to work fully; I could only set the max or min speed, nothing in between. Might try it on FreeBSD again; I know it can set the CPU clock properly.

Code: Select all

D510 Clock   Copy Speed
1670 MHz     95 MB/sec
208 MHz      25 MB/sec
I think the max speed was being limited by the drive (WD scorpio black) that I had in it. I have gotten ~105 MB/sec out of the Atom with a faster drive.

Jay_S
*Lifetime Patron*
Posts: 715
Joined: Fri Feb 10, 2006 2:50 pm
Location: Milwaukee, WI

Re: Real-World file server CPU requirements

Post by Jay_S » Fri Apr 18, 2014 6:38 am

Terrific - thanks washu.

For what it's worth, PassMark comparison of andyb's A64 4000+ to the Atom D510 and Celeron 1007u:
Clipboard01.png
For USD $90, the Gigabyte is a really neat little board. The dual NICs are cool, but it's frustrating that Gigabyte dedicated 2 PCIe lanes to a PCIe-PCI bridge chip and an IDE controller.

andyb
Patron of SPCR
Posts: 3307
Joined: Wed Dec 15, 2004 12:00 pm
Location: Essex, England

Re: Real-World file server CPU requirements

Post by andyb » Fri Apr 18, 2014 7:38 am

That's a very interesting test RE: the actual CPU speed required to top out the 120MB/s LAN throughput rate, and it exactly answers my original question. Obviously results will vary by OS, HDD and LAN controller, but 900MHz is still a very low requirement in this day and age.

@Jay_S, My CPU is a 939-pin 2.4GHz, 89W TDP model, not an AM2 CPU, and I cannot find it on the PassMark website.

http://www.cpu-world.com/CPUs/K8/AMD-At ... OX%29.html

So it's even more archaic than you thought.

On the plus side, I have decided that I may as well use the CPU, RAM and motherboard from my work PC (that I no longer use for work) that I was planning to keep as a 2nd machine for friends or visitors to use for general purposes or for gaming. As none of these things are likely to happen, I will be making these components the core of my server, reducing the expense by a fair chunk.

This system comprises a Gigabyte GA-P35-DS4 motherboard that has 8 SATA ports, 6 of them run off the Intel "ICH9R" controller hub; the motherboard also has a Realtek 8111B PCI-E Gigabit NIC that should do the job, a Core 2 Duo Celeron and 4GB RAM, so a fair upgrade on its own merit (although still old). I have checked that the 2nd PCI-E x16 slot (x4 electrically) will work with a card other than a graphics card, so I can and will use my RAID card as a controller card for my individual backup drives.

I will do this part of the upgrade sometime next week hopefully.


Andy

washu
Posts: 571
Joined: Thu Nov 19, 2009 10:20 am
Location: Ottawa

Re: Real-World file server CPU requirements

Post by washu » Fri Apr 18, 2014 8:01 am

andyb wrote:That's a very interesting test RE: the actual CPU speed required to top out the 120MB/s LAN throughput rate, and it exactly answers my original question. Obviously results will vary by OS, HDD and LAN controller, but 900MHz is still a very low requirement in this day and age.
Just remember it's 900MHz running FreeBSD and a crappy NIC. I love FreeBSD, but I acknowledge that it is not the best at everything. Running Linux or Windows I would suspect it would only need 600-700 MHz to get the same 120 MB/sec. Also, if you could somehow get a PCIe Intel or Broadcom server NIC in there it could probably go lower still.
@Jay_S, My CPU is a 939-pin 2.4GHz, 89W TDP model, not an AM2 CPU, and I cannot find it on the PassMark website.

http://www.cpu-world.com/CPUs/K8/AMD-At ... OX%29.html

So it's even more archaic than you thought.
It would not be that different speed-wise than the AM2 4000+ that Jay_S posted. Same arch, just a slightly lower clock speed and possibly slower RAM, but yours has more cache. Close enough for this comparison. The D510 has two cores, Hyper-Threading and newer SSE versions, all helping it reach its score. Your Athlon is much faster at single-core tasks, which is what Samba is doing in this case. It should have no problems at least meeting the Atom's speed.
This system comprises a Gigabyte GA-P35-DS4 motherboard that has 8 SATA ports, 6 of them run off the Intel "ICH9R" controller hub; the motherboard also has a Realtek 8111B PCI-E Gigabit NIC that should do the job, a Core 2 Duo Celeron and 4GB RAM, so a fair upgrade on its own merit (although still old). I have checked that the 2nd PCI-E x16 slot (x4 electrically) will work with a card other than a graphics card, so I can and will use my RAID card as a controller card for my individual backup drives.
Sounds like a good plan.

washu
Posts: 571
Joined: Thu Nov 19, 2009 10:20 am
Location: Ottawa

Re: Real-World file server CPU requirements

Post by washu » Fri Apr 18, 2014 8:29 am

Since I have the Atom on the workbench, here is why I do not recommend using a PCI NIC, but also showing how a good NIC can help when your CPU is really slow.

D510MO with Intel Pro 1000/MT in the PCI slot

Code: Select all

D510 Clock   Copy Speed
1670 MHz     75 MB/sec
208 MHz      35 MB/sec
It reduced the max speed by 20 MB/sec, but increased the speed at 208 MHz by 10 MB/sec vs the onboard PCIe Realtek.

andyb
Patron of SPCR
Posts: 3307
Joined: Wed Dec 15, 2004 12:00 pm
Location: Essex, England

Re: Real-World file server CPU requirements

Post by andyb » Fri Apr 18, 2014 8:55 am

washu wrote:Since I have the Atom on the workbench, here is why I do not recommend using a PCI NIC, but also showing how a good NIC can help when your CPU is really slow.

D510MO with Intel Pro 1000/MT in the PCI slot

Code: Select all

D510 Clock   Copy Speed
1670 MHz     75 MB/sec
208 MHz      35 MB/sec
It reduced the max speed by 20 MB/sec, but increased the speed at 208 MHz by 10 MB/sec vs the onboard PCIe Realtek.
That's good to know, it's just a shame they (good quality NICs) are so expensive. Previously I got myself a cheap (£7.50) PCI-E Gigabit NIC that uses a Realtek 8111x chip; it's capable of 108MB/s in my testing and blew away the Intel PCI NIC (not sure of the model, but that was a £25 card when it was new) that I had tried, which is just worth noting regarding the drawbacks of PCI vs PCI-E, although this of course does not take into account the much higher CPU usage of a soft NIC compared to a full-hardware NIC. This is of course why most people will never spend the extra cash on a server-grade NIC when CPU performance is so abundant in modern computers.


Andy

washu
Posts: 571
Joined: Thu Nov 19, 2009 10:20 am
Location: Ottawa

Re: Real-World file server CPU requirements

Post by washu » Fri Apr 18, 2014 9:08 am

andyb wrote: That's good to know, it's just a shame they (good quality NICs) are so expensive.
Well true "server grade" NICs are expensive, but you can get decent ones for not too much. An Intel Desktop CT adapter is around $35-40 (CAD) here and performs great. It is what I am using in my main fileserver.

fastturtle
Posts: 198
Joined: Thu May 19, 2005 12:48 pm
Location: Shi-Khan: Vulcan or MosEisley Tattonnie

Re: Real-World file server CPU requirements

Post by fastturtle » Thu Jul 31, 2014 7:37 am

Washu:

You listed the good/bad Intel chips as to RAID features, and as I have a Q87, I think it's one of the good ones.

What I'm curious about is whether the Intel fake RAID is compatible with the Linux RAID/LVM software, as I'm in the process of building a clean Gentoo Linux with UEFI support.
