The case against SSDs

Silencing hard drives, optical drives and other storage devices

Moderators: NeilBlanchard, Ralf Hutter, sthayashi, Lawrence Lee

Friend of SPCR
Posts: 243
Joined: Fri Dec 28, 2007 11:56 am
Location: NH, Netherlands

The case against SSDs

Post by xen » Mon Jul 25, 2016 7:13 pm

I just want to give a quick rundown of the various reasons one might not want to use SSDs as separate "OS" disks.
  • Whatever you say, the price per GB is still much higher, forcing any serious system to have a minimum of two disks: one SSD for the OS, and one HDD for the data
  • This means the cost of computing has gone up, not down. It also means investments in HDD technology are going down, which will in turn make (fast) HDDs more expensive; this drives up costs further, at least for the classes of disk now being replaced by SSDs
  • Defaulting to SSDs for system disks will make software designers lazy. Performance is so enormous compared to the past that developers will stop investing time in tuning and improving the performance of, e.g., on-disk reads, and will no longer care about the contiguousness of data. We have seen this before with OS makers (Microsoft) not optimizing code because "in one year's time all of the hardware components will have caught up with our abysmal performance anyway". This makes it mandatory to use an SSD, which in turn makes it mandatory to have two disks
  • This simply gives rise to bad software design. The game Diablo III was rumoured, and experienced, to have extremely incomprehensible lag issues in which neither CPU nor GPU was bottlenecking; perhaps the HDD was? However, people with SSDs also reported this lag. If you become lazy in designing software, and hence no longer care about creating good architecture, issues will sneak in that would otherwise have been caught by the tail and thrown out. Ignoring one important part of software design means you will also perform less well in other areas that still matter, because your level of caring simply goes down.
  • SSDs are still at least 3-4 times as expensive as HDD technology, and most of the benefits, apart from application load and game performance, are unnecessary to most users. Whether you can boot your system in 10 seconds or 50 should be irrelevant; a long time ago people devised "standby", which should render booting your system pointless except for laptop users. In other words, those short boot times solve a problem that doesn't exist. Most of the time on a laptop you would use either suspend or hibernate, and booting from scratch is simply not part of normal or recommended computer usage, even for laptops.
Now the rundown of some deeper implications:
  • Any advanced partitioning scheme is going to be hindered by having to use an SSD for system and an HDD for data. The setup is no longer homogeneous, and redundancy becomes almost impossible. How are you going to create RAID 1 redundancy for your SSD? Well, you need another SSD. Now you also need another HDD. How are you going to expand this with increased capacity? Well, that is entirely pointless for the SSD, so you don't. For the HDDs you still can, but now you have FIVE drives in your system just to take care of that. You just cannot mix the SSD and HDD components of your system without dedicating that SSD to a caching function only.

    What I mean of course is that you now have 2 SSDs in mirror raid, and e.g. 3 HDDs in RAID 5.
  • Using onboard "hybrid" SSHDs, in which SSD and HDD are combined in one disk, induces the same annoyance and waste that we saw in hi-fi stereo towers: inevitably one component broke, rendering the rest of the system pointless even though most of it still worked. (In one SSHD test I saw the SSHD have longer boot times than any other disk in the test.) Having a built-in system means you cannot fine-tune anything yourself or make any choices about it; you must hope it works well, or else. Further, you cannot separate the two devices: you cannot put the HDD part into some other system where it needs no cache, and you cannot take the SSD part and use it to cache something else. It becomes like the AMD APUs, not really good for anything; it doesn't know what to be. The SSD cache can or could also have security implications, and it is hard to tell in advance what the impact will be. You won't be able to address it directly, because it is addressed as part of the total volume and you cannot tell which parts are cached and which aren't. Deleting or discarding data would clear the cache, but there will probably be no way to reset it manually. You wouldn't know. Perhaps a TRIM operation would solve it, perhaps not.
  • Having a real, independent (separated) component-based caching feature or setup would solve the homogeneity issue, because you can take off the cache and the base system doesn't change. You could cache an entire RAID volume (or partition, or array) with a single SSD (or a partition thereof) and be allowed to take it off whenever you want. Now you can do RAID whatever way you want and never be hindered by design issues preventing you from choosing the setup you want. You are not inhibited in your design, because all disks have the same performance and you can combine them at will. However, this does imply and require that certain basics for it are present in your OS, firmware or hardware. I suspect that availability or dependability is still minimal.
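On Linux, such a detachable component-based cache can be sketched with bcache, for instance. This is only a sketch under assumptions: the device names are placeholders, and the exact sysfs incantations should be checked against the kernel's bcache documentation before use.

```shell
# Sketch: a detachable SSD cache in front of an HDD using bcache.
# /dev/sdb (HDD, backing device) and /dev/sdc (SSD, cache) are placeholders.
make-bcache -B /dev/sdb        # format the HDD as a backing device
make-bcache -C /dev/sdc        # format the SSD as a cache set
# Attach the cache set (identified by its UUID) to the backing device:
echo <cache-set-uuid> > /sys/block/bcache0/bcache/attach
# Detach again at any time; the backing device keeps working uncached:
echo 1 > /sys/block/bcache0/bcache/detach
```

The point of the sketch is exactly the property argued for above: the HDD remains a self-contained device, and the cache can be removed without redesigning the array.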
In short I feel that caching design should become mandatory but a good OS cache does not require SSDs of the size of 256GB or up. You can, I believe, have an excellent cache with something as inexpensive as the Kingston SSDNow mS200 clocking in at about €40 for 60GB and that is already a lot of caching space.

In current motherboards, M.2 would probably be most convenient for this; but the cheapest offering, Transcend, may not be as reliable. (I have tested the Transcend MSA370, and its write speeds are just abysmal; I do not understand how they can put anything on the market that cannot write faster than 14 MB/s.) Another product, the MTS400 (M.2), is said to crash systems due to an unsupported SATA III instruction (link here), and another one (the ADATA Premier SP600NS34) appears to suffer from it as well. It's the sort of unreliability and headache that didn't exist when we still used HDDs.

Microsoft Windows allows "ReadyBoost" by designating a file (created by the system) on a flash device (which has to be mounted as a drive letter) to be used as a disk cache. While very easy to set up, it defeats the purpose of having something transparent to the system. Nevertheless, it could be useful. Dedicated SSD drives meant for caching come (or came) with Dataplex software (review here or here). Dataplex was acquired by Samsung in May 2015, which discontinued the product, although a final no-license-restrictions version was made available (read here); it only works (hardcoded) with a few selected SSD drives. Intel has its Smart Response caching technology, which I assume is simply part of its motherboards as a RAID offering.

FancyCache is considered an alternative; it was succeeded by PrimoCache at a cost of about $30 (which is reasonable, even though the cache drive itself may not cost more than that), although they do not market it as an SSD cache but rather as an improved memory cache; it can use SSDs as ReadyBoost does, though. All in all, we do not see a standardized or very thorough solution on the Windows platform here. AMD does not support any caching feature in its motherboards, apparently.

StarTech offers a 'dedicated' RAID controller with hybrid support here, but it is a rather expensive solution at > €80, and you never know how it will work across operating systems. Real hardware RAID cards, meanwhile, still start at about €150.

I have personally used the slow device mentioned earlier under Linux with LVM cache, although that is outside the scope of what I am writing here. I suspect it is due to the Linux kernel and its many bottlenecks (as stated here and here, for example) that I experienced frequent freezes of my system, sometimes lasting longer than 2 minutes at a time, due to some IO queue filling up and not being emptied soon enough. (The Linux kernel has a single IO queue for all devices in terms of buffers, as does the Windows kernel, but on Linux it works out really badly.) This was on a device with 14 MB/s write speeds, as noted; something that should ordinarily NEVER result in this sort of behaviour. It does imply that on Linux, not everything is roses and moonshine either.
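For what it's worth, one mitigation I would try for such stalls (an assumption on my part, not a verified fix for this case) is capping the kernel's dirty writeback buffers, so that a slow device can never accumulate minutes' worth of unwritten data:

```shell
# Inspect the current writeback thresholds (percentages of RAM by default):
sysctl vm.dirty_ratio vm.dirty_background_ratio
# Cap them as absolute byte counts instead. With a 14 MB/s device,
# 64 MB of dirty data is still only ~5 seconds of flushing, not minutes:
sysctl -w vm.dirty_bytes=67108864              # hard limit: 64 MB
sysctl -w vm.dirty_background_bytes=16777216   # start background flush at 16 MB
```

These are real, documented Linux sysctls; whether they would have cured the 2-minute freezes described above is speculation on my part.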


Re: The case against SSDs

Post by xen » Mon Jul 25, 2016 8:51 pm

In short unless someone can better inform me about the current status quo, I will conclude that:
  • Using an SSD as a system-only disk is a flawed model
  • Using it as a cache is easy but limited with ReadyBoost (on Windows); Intel supports it (using motherboard RAID), but AMD is apparently lacking any such thing
  • The de facto standard used by SSD manufacturers was purchased by Samsung, and is hence no longer the de facto standard used by other SSD manufacturers
There may be other solutions that do equally well (such as PrimoCache) but it remains a make-do solution that requires some sort of driver or software component in the OS. Cross platform, you will need to invest in a similar thing on the other platform.

RAID itself is rather iffy cross-platform, with many firmware RAID solutions (such as AMD's) barely supported on Linux. Meanwhile, Linux chooses to do everything in software, from RAID (mdadm) to caching (LVM cache, bcache) to partitioning (LVM), so as simply not to be dependent on slow-moving firmware solutions such as the BIOS and agreed-upon standards.
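As a sketch of that all-in-software stack (device names are hypothetical, and this is an illustration rather than a recommended layout):

```shell
# Plain Linux software RAID 5 over three disks, with LVM layered on top.
# /dev/sdb1, /dev/sdc1, /dev/sdd1 are placeholder partitions.
mdadm --create /dev/md0 --level=5 --raid-devices=3 \
      /dev/sdb1 /dev/sdc1 /dev/sdd1
pvcreate /dev/md0                   # make the array an LVM physical volume
vgcreate vg0 /dev/md0               # volume group on top of the array
lvcreate -n data -l 100%FREE vg0    # one logical volume spanning it
```

No firmware or vendor BIOS is involved at any layer, which is exactly the independence from agreed-upon standards described above.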

The Linux solution, however, often incurs a usability penalty, as pre-boot firmware is missing and the configuration software is often abysmal. Proprietary solutions such as AMD's may be very unreliable, and then you end up with the truism "hardware RAID is the only thing that works". Many mATX motherboards consequently also do not have any PCIe x4 slots available for present-day RAID cards such as the cheaper RocketRAID (by HighPoint), in case you want more than 2 ports.

Many 4-port RAID cards require x4 slots, whereas 2-port cards may get by with x1. The most compatible solutions are often PCI-X cards that work in regular 66 MHz PCI slots. It seems that any solution that is not on-card (but can combine SATA ports from various devices) would be better equipped to deal with all kinds of circumstances, and as such you would need either an on-board solution (motherboard) or simply a software solution that can do it for you in the operating system that you use.

Comments welcome.
The HighPoint "cheap class" is here: 600 HBA Series.

Linux allows a cache setup by creating a cache pool with a certain metadata pool (they are merged together) and then merging that with a regular volume, all done in LVM using the "lvconvert" command. After that, the "origin volume" gets cached through the cache pool, with write-through as the default caching mode and "smq" (stochastic multi-queue) as the default caching policy: an advanced mode that requires no configuration (the only alternative is "mq", which has a few parameters you can set). The default is my preference, as I have no clue what the other one does ;-). You can then detach the cache while running your system, with "lvconvert --splitcache". This seems to be the only real solution on Linux (apart from using bcache), but it won't work on RAID arrays that are not managed by LVM: the actual caching system is called dm-cache, and I am sure it could handle plain dm-raid just fine (it is all the "device mapper"), but LVM itself cannot drive that combination; it can, however, if you use LVM for your RAID as well. I think. However, in the good tradition of Linux, documentation is often so confusing that you'd rather wish you had never started using a computer.
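A minimal sketch of the sequence just described (volume group, logical volume and device names are hypothetical; sizes are illustrative):

```shell
# vg0 contains the HDD-backed origin LV "data" and the SSD PV /dev/sdc1.
lvcreate -n cache0     -L 55G vg0 /dev/sdc1   # cache data LV, placed on the SSD
lvcreate -n cache0meta -L 1G  vg0 /dev/sdc1   # its metadata LV
# Merge the data and metadata LVs into a cache pool:
lvconvert --type cache-pool --poolmetadata vg0/cache0meta vg0/cache0
# Attach the pool to the origin volume (write-through and smq by default):
lvconvert --type cache --cachepool vg0/cache0 vg0/data
# Detach again at any time, flushing the cache back to the origin:
lvconvert --splitcache vg0/data
```

After --splitcache the origin volume keeps working exactly as before, which is the "take off the cache and the base system doesn't change" property argued for earlier.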

All in all, it is still just a hugely complex solution that only the most ardent enthusiasts of hilariously wasted time will support or find pleasure in.

So basically, the potential for caching entire arrays exists in Linux using LVM, I'm sure, and it is probably not *that* hard if you know how it works and what you need to know (catch-22 there). It is probably also possible to use a different mechanism if you are using "real" mdadm software RAID, since that also creates virtual devices that could be used for caching.

LVM (the Logical Volume Manager) is a great product; it's just that it seems rather unreliable when things come down to it, mostly because the designers don't care all that much about people who don't know as much as they do. "Oh, that's easy, you do this and this and that." So the software itself is usually pretty reliable; your know-how in dealing with the fallout of something going wrong is not.

Apart from that, it feels a bit convoluted at times, because it is such a "high-level" solution. It's not as simple as installing a device driver and being done.

Posts: 2
Joined: Tue Jul 26, 2016 4:26 am

Re: The case against SSDs

Post by r0dISK » Tue Jul 26, 2016 4:34 am

@xen: in addition, a smart RAID 1 can accelerate the read speed of data (as in RAID 0)...


Re: The case against SSDs

Post by xen » Tue Jul 26, 2016 7:06 am

Yes that is true.

Although I don't know if that is true for all RAID controllers; I hope it is.

I still wonder what the actual factual benefits of RAID 10 are in speed.

I know that when I tested, my random reads and writes went up greatly.

A RAID 10 is supposed to spread reads over the 2 stripe sets. Each stripe set in turn should then be able to benefit from the improved access time and read throughput. It should also alternate between the 2 mirror disks in each set for individual seeks, such that the disk with the smallest access time or the shortest queue is consistently the one that performs the read. If this were fully utilized, a RAID 10 should be able to read at 4x the speed of the individual drives. In my tests this was not so: it was rather double the individual speed (I think), leading me to think that AMD RAID is not doing the thing you mention here.
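The arithmetic behind that expectation, as a back-of-envelope snippet (the 150 MB/s figure is an assumed single-drive speed for illustration, not a measurement):

```shell
#!/bin/sh
# Idealized read scaling for a 4-disk RAID 10 (2 mirror pairs, striped).
SINGLE=150   # MB/s for one drive; an assumed figure, not a benchmark

# Striping only: one read stream per mirror pair -> 2x a single drive.
echo "stripe only:           $(( 2 * SINGLE )) MB/s"
# Striping plus balancing reads over both members of each mirror -> 4x.
echo "stripe + mirror reads: $(( 4 * SINGLE )) MB/s"
```

The roughly 2x result observed would match the first line: striping across the pairs but always reading from the same member of each mirror.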


Re: The case against SSDs

Post by r0dISK » Tue Jul 26, 2016 2:19 pm

In 90% of RAID solutions (HW/SW), RAID 10 is faster than RAID 1 (read, write, IOPS);
as for the remaining 10%... :)

Luke M
Posts: 169
Joined: Tue Jun 08, 2010 4:09 pm
Location: here

Re: The case against SSDs

Post by Luke M » Sat Jul 30, 2016 8:18 pm

Are you nostalgic for tape drives? They still exist, but only for niche applications. Hard drives are going the same way.


Re: The case against SSDs

Post by xen » Sun Jul 31, 2016 6:06 am

Nope I never used them.

I am nostalgic for CDs though, a little bit.

But there is no point in accusing anyone of nostalgia here. Tapes are about as costly as a hard disk (for example, this one at €37 for 2.5 TB; actually, that's not quite true: a 2 TB hard disk is about €77, or twice as much), and tape is still the only real solution for off-site, or at least off-line, backup, though I guess most companies employ remote storage through some data link.

It's just too cumbersome to deal with tapes.

You write "nostalgia" as if HDDs are a thing of the past, and that is your delusion.

No one sane will store any kind of data collection on SSDs these days, and that will remain so for a very long time to come.

If anything, we will see "atom"- or "molecule"-based storage in a grid, not anything like what NAND is. So you're living in a future that doesn't exist.

And then from that future that doesn't exist you will look back and call the people of today nostalgic....

I call that "delusional" ;-).

It is delusional to think that SSDs are cost-effective for most applications. It is also delusional to think that the ordinary computer user benefits from SSDs as much as people say. Their reliability is simply worse from a "convenient failure" standpoint, and the speed problem they were trying to solve didn't really exist to the extent that it has now been "solved". People hardly ever complained about hard disk speed, as evidenced by the fact that many here on SPCR even ran 2.5" notebook HDDs as their main system drive. How could SSDs ever be a necessity if people chose, on purpose, to use far slower disks in their systems than what was available?

It is unbelievable, and completely absurd, to think that when a lot of people were perfectly content running their day-to-day systems on slow 5400 RPM 2.5" drives, there was a "huge problem" to be solved in the hard disk space, while faster drives, such as ordinary 3.5" 7200 RPM ones, or even 10k ones for those who cared, already existed; most people didn't even care to get a 10k drive!

The sweet spot was 7200 RPM, but only barely. A lot of people opted for slower 5400 RPM drives instead, and did so on purpose (for instance, for less wear or less noise).

When I installed a data disk in some server back in the day, my friends couldn't understand why I had purchased a 5400 RPM disk instead of a 7200 RPM one. To me it was obvious, and most people wouldn't disagree.

In the end, I only used 7200 RPM drives to store games on. Yes, of course that helped; I think it did.

But there was a lot of patience and there is a sense of joy in waiting too if it is not too much. It's not bad at all if something takes a little longer, as long as it is bounded, you know what to expect, and you can sit back and relax in the meantime. Not all jobs require instant resolution and living at a slower pace in general is good for your health.

So people never complained about storage really.

It was always about RAM (to prevent hard disk swapping, which is still the case today, I guess) and CPU speed. Next came graphics cards. Then came storage.

Storage was usually fourth on people's lists. Of course, today it may be first, but that is because of the evolution we've seen and its detrimental effects, I guess. Today the SSD is seen as the first upgrade, of course, and it also has the biggest impact.

Yet it comes at a cost, and that cost is not accounted for. The cost may well be called "turning into a zombie"; it can also be called "instant gratification", in a way.

So yeah. It is remarkable how those who criticise the sensible usually have the least to say.

Posts: 1753
Joined: Thu Jul 03, 2008 4:27 am
Location: Switzerland

Re: The case against SSDs

Post by HFat » Wed Aug 03, 2016 10:40 pm

I don't know why you're wasting your time writing these endless posts that few (if any) are going to read rather than skim. I agree with some of your points, in particular about the irrationality underlying blanket recommendations like "you should put your OS on an SSD". But your rants have basically the same problem: you seem to assume that whatever situation and priorities you have in mind are what should be taken into consideration, at the expense of everything else. Whatever.
On to a specific point about which I could conceivably make a constructive comment:
xen wrote:How are you going to create RAID 1 redundancy for your SSD? Well, you need another SSD.
No, in many cases you can use an HDD (and indeed sometimes a partition on the same HDD that you use for bulk storage) in write-mostly mode (or equivalent) as the second member of a RAID 1 (or equivalent). Obviously, in some cases it would be a terrible idea; it depends on your particular performance requirements.
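With Linux mdadm, for example, that looks roughly like this (device names are placeholders, and this is a sketch rather than a tested recipe): the HDD member is flagged write-mostly, so reads are served from the SSD while writes go to both members.

```shell
# RAID 1 across an SSD and an HDD; the HDD is marked write-mostly, so the
# md layer avoids reading from it. Device names are hypothetical.
mdadm --create /dev/md0 --level=1 --raid-devices=2 \
      /dev/nvme0n1p2 --write-mostly /dev/sda2
# The flag can also be toggled later through sysfs on a running array:
echo writemostly > /sys/block/md0/md/dev-sda2/state
```

You get SSD read performance with HDD-backed redundancy, at the price of writes being bounded by the HDD.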

Posts: 617
Joined: Fri Mar 14, 2008 5:18 am
Location: London, UK

Re: The case against SSDs

Post by Cistron » Thu Aug 04, 2016 1:02 am

I agree with you on the software side of things, though developers will always have to cut corners. It's the trade-off between incredibly fast vapourware (potentially not economically viable) and software that actually ships.

However, you talk about the average user a lot. The average user wouldn't dream of using RAID for data security, and it doesn't protect against their own mistakes anyway. A good share will have their most important files in cloud storage (which includes versioning) such as Dropbox or Google Drive. Music and videos are also largely streamed (or cloud-backed-up by the supplier). I can't remember the last time I opened iTunes to listen to songs (Spotify is more convenient).

I also don't think conventional hard-drive development has slowed down particularly: helium drives, shingled recording, ...
