I'm a bit on the fence here. I largely agree with the Techdirt rebuttal, and I use an adblocker myself. But I've also written reviews for SPCR, and I know first-hand the labour that goes into them. For a little while, I treated it as a full-time job.
I think one of the big points that needs reiterating is that bandwidth and hosting are only a small fraction of SPCR's costs. The biggest cost by far is labour. Mike's estimates of the amount of time spent on each article are conservative. Every time a product needs retesting, or we need to confirm results, or we discover a new quirk of a product, the time spent testing and rewriting the review can double or triple. The best output I ever managed was two articles per week, working full time.
The non-fluff (i.e. non-review) articles that Aris has so sanctimoniously demanded take even longer, as do products that we don't have a set formula for reviewing. I spent about three weeks reviewing
the Asus Xonar HDAV, partly because I'd never reviewed a sound card before, partly because I had to do numerous retests and reanalyses, and partly because it was very unclear exactly what the card was capable of (and why it cost $300). Developing our fan test methodology took months (years if you count Mike's efforts before I started focussing on it). These unusual articles are among the most important on the site, but they are often some of the least read (i.e. least directly profitable), and they are definitely the most time consuming to produce.
On the topic of multiple samples, the idea of scientific rigour is appealing, but the practicality is daunting. Even the "spot checking" that was suggested would likely come close to doubling test times, since there is considerable setup overhead per sample that can't be eliminated by reducing test points. On top of that, my experience with multiple samples has been that it is more likely to reveal the margin of error of our test procedures than true sample variance. Nearly all results vary a bit from test to test, regardless of whether the same sample is used.
The only example I can think of that can definitively be chalked up to sample variance is our original Ninja sample, which produced shockingly good results over a number of years, and confused the hell out of us every time we retested it as a "standard" reference.
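Just to illustrate what I mean about measurement noise vs. sample variance, here's a rough sketch in Python with made-up numbers (purely illustrative, not real SPCR data): if the spread you get from re-testing one unit is about as wide as the spread across different units, you can't credibly call the difference "sample variance".

import statistics

# Hypothetical SPL readings in dBA -- illustrative only, not real results.
same_sample_runs = [22.1, 22.6, 21.9, 22.4]    # one fan, re-tested four times
different_samples = [22.3, 22.0, 22.7, 22.2]   # four retail samples, one run each

# Spread from re-testing the same unit = the noise floor of the test procedure.
retest_spread = statistics.stdev(same_sample_runs)
# Spread across units includes real sample variance *plus* that same noise.
sample_spread = statistics.stdev(different_samples)

print(f"test-to-test spread:     {retest_spread:.2f} dBA")
print(f"sample-to-sample spread: {sample_spread:.2f} dBA")
# When these two numbers are comparable, the data can't separate real sample
# variance from measurement noise -- which is what we usually found.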
On a side note, the multiple fan samples in
the article mentioned by dhanson865 were tested for subjective variance only (which is why no testing details are listed); experience had taught me that any measurable variance would almost certainly be below the accuracy threshold of our measurement tools.
With regard to charging for reviews (not an uncommon practice), I can't argue with the fact that it would compromise impartiality. There are already enough issues with selection bias, simply because Mike has better relations with some manufacturers than others (partly because a bad review on our part strongly discourages future samples).
The preponderance of Antec cases on the site has already been noted. This is not just because Mike has been involved in designing some of their most successful cases, though that is part of it. It's because Antec produces genuinely good cases, especially for our audience. Mike's role helped make this so, but that doesn't make their cases any less relevant for our readers. It's not a coincidence that five of the most popular articles on the site are Antec reviews; most of the rest are recommended articles, and no other manufacturer has more than one review in the top read list.
I really like the idea of putting a bounty on certain review products (i.e. enough donations guarantee a review). Executed properly, I think this could work very well. However, it does conflict with SPCR's role in helping readers discover new silent products. SPCR drove the popularity of a number of standard products by discovering them first (Nexus fans, Scythe heatsinks, Scythe fans). If we relied solely on audience demand for reviews, none of these would have come to light.
There's also the issue of there being far more demand for reviews than time to do them. Selection of products for review is currently governed by Mike's editorial instincts (selection bias in a positive sense). Products are prioritized by how interesting they are likely to be to our readers, and how likely they are to actually get a positive review (based on a visual inspection and the weight of experience). There is probably a year's worth of backlogged products in the pipeline, and many products simply never see reviews because they are out of date or irrelevant by the time we get to them. The point being, we already break promises of reviews to manufacturers all the time because we simply don't have time to do every review. Reader-donated money creates obligations towards readers that are likely to be broken unless we can find a way to guarantee that the reviews get done. Guaranteeing a review for every product that reaches a certain level of donations is not the way to do it.
My suggestion would be to limit donation-funded reviews to one per month. The review could be conducted like a contest or an auction. A pledge-based system, in which donations are only collected if the product in question gets the most donations in a given month, could work. A more lucrative (but less fair) option is to accept donations for any and all products, but only review the one that brings in the most cash. Donation pots could carry over month to month, so a product that comes second one month might come first the next. This also makes the more lucrative option fairer, since it guarantees that most of the money raised will eventually be used (donations to less popular products will probably be a relatively small fraction of the total).
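For what it's worth, here's a toy sketch in Python (hypothetical product names and amounts, nothing actually built) of how the carry-over pot idea could work:

# Pledges accumulate per product; the biggest pot "wins" a review that month,
# and every other pot carries over to the next month.
def run_month(pots, new_pledges):
    for product, amount in new_pledges.items():
        pots[product] = pots.get(product, 0) + amount
    winner = max(pots, key=pots.get)     # biggest pot gets reviewed
    print(f"Review this month: {winner} (${pots[winner]})")
    del pots[winner]                     # the winner's pot is spent on the review
    return pots                          # the rest carries over

pots = {}
pots = run_month(pots, {"Fan A": 120, "Heatsink B": 80, "Case C": 45})  # Fan A wins
pots = run_month(pots, {"Heatsink B": 50, "Case C": 90})                # Case C: 45+90=135 beats Heatsink B: 80+50=130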
A big problem will be preventing "corporate" donors from buying reviews. Allowing manufacturers to bid on reviews pretty much guarantees selection bias, since their buying power means the corporation with the deepest pockets gets the review. It almost certainly takes consumers out of the equation, since the magnitude of corporate "donations" is likely to drown out any reader contributions. I suppose if the goal of SPCR were to maximize profits this would be a good thing (short term, at least), but it does a terrible disservice to the readers.
Ok, enough rambling. Let's hear your feedback.