Good affordable SSDs?

Silencing hard drives, optical drives and other storage devices

Moderators: NeilBlanchard, Ralf Hutter, sthayashi, Lawrence Lee

Aris
Posts: 2299
Joined: Mon Dec 15, 2003 10:29 am
Location: Bellevue, Nebraska
Contact:

Post by Aris » Mon Jan 26, 2009 3:05 pm

nutball wrote:My reading of the situation is that there has to be cache *somewhere*. It can be on the drives, and/or it can be on a proper RAID controller as you have. In principle the OS could use main memory for a write-back cache and reorder the writes. Only when there's no decent-sized cache in the chain does the stuttering manifest itself.
Exactly, much better put than I was able to articulate. The thing is, each piece in a computer is "supposed" to work in any PC environment it is put into, regardless of outside factors. All it should need is the proper data cable connector and the proper power connector, which is why the cache needs to be on the drive itself. It's not like SDRAM cache is all that expensive; they've been putting it on HDDs for years now for the same purpose: to sequence data and provide higher burst performance for small files.

I think the JMicron controller itself actually has a lot less to do with this problem than the lack of cache does.

As far as the "affordable" part of the topic title goes: Newegg is now selling the 80GB X25-M for $399, and the upcoming OCZ Vertex 60GB with 32MB of cache is on preorder at ZipZoomfly for $250.

Mankey
Posts: 173
Joined: Tue Apr 20, 2004 4:39 pm

Post by Mankey » Tue Jan 27, 2009 6:37 pm

FartingBob wrote:
bgiddins wrote:
Mankey wrote:Do the cheap CF to IDE adapters suffer from stuttering issues? I know the performance would suck, but I don't want to deal with the stuttering issue.
Um, my impression is that this is like buying a bicycle because you eventually have to pull over in the car to buy petrol, and you don't like stopping for petrol...
Agreed, buying a very expensive SSD and then limiting it to slower-than-USB speed is pointless. Even from a silence POV it's not worth it; just get a 5400RPM laptop drive.
I wasn't referring to that at all. I was referring to getting a CompactFlash card and using an adapter so it can be used as a hard drive: a much cheaper, albeit slower, option than a real SSD. I'm just wondering if I would run into stuttering issues or get smooth (slow) performance.

wywywywy
Posts: 69
Joined: Tue Sep 16, 2008 11:47 pm
Location: UK

Post by wywywywy » Wed Jan 28, 2009 1:19 am

CompactFlash can be fast if you get a card that supports UDMA (i.e. the 300x-plus cards) and an adapter that supports it.
But then it wouldn't be so cheap.

highlandsun
Posts: 139
Joined: Thu Nov 10, 2005 2:04 am
Location: Los Angeles, CA
Contact:

Post by highlandsun » Thu Jan 29, 2009 4:11 pm

Hey dhanson, where did you get that JMicron info?

My 120GB CoreV2 never performed at its spec'd sequential speeds, and stuttered all the time. It has a USB port that was supposed to be usable for firmware updates but then OCZ announced that they would not be providing firmware updates for Core V2 after all.

I now have a G.Skill Titan, 256GB, and I'm actually getting the 200MB/sec sequential read speed on it all the time. I also see sequential writes peaking at 165MB/sec, though usually averaging a bit lower than that. It's met all of its advertised specs, as far as I'm concerned, while the CoreV2 never met its specs. But if JMicron has newer firmware for this CoreV2's controller, I'd love to try it out.

The Titan really hit the mark for me, at under $2/GB. Before it hit the market, it was cheaper to assemble an array of 35MB/sec CF cards than to buy an SSD. Now the SSDs are finally more cost-effective than multi-slot CF-IDE adapters.

dhanson865
Posts: 2198
Joined: Thu Feb 10, 2005 11:20 am
Location: TN, USA

Post by dhanson865 » Thu Jan 29, 2009 6:15 pm

highlandsun wrote:Hey dhanson, where did you get that JMicron info?
http://www.dailytech.com/Exclusive+Inte ... 14004c.htm

I don't know that they have offered to let anyone download new firmware, but they obviously have different versions. If you own a drive, you should hound their tech support about getting a firmware update.

If I had my choice after reading that interview, I'd want firmware that reduced the usable space of the drive noticeably, so that wear leveling and controller optimizations would work better.

Obviously they went for 30/32, 60/64, and 120/128 with the V2, but I'm thinking 26/32, 52/64, and 104/128 wouldn't have been a bad thing. I'd rather have performance/reliability than capacity.

m^2
Posts: 146
Joined: Mon Jan 29, 2007 2:12 am
Location: Poland
Contact:

Post by m^2 » Fri Jan 30, 2009 5:28 am

highlandsun wrote:Hey dhanson, where did you get that JMicron info?

My 120GB CoreV2 never performed at its spec'd sequential speeds, and stuttered all the time. It has a USB port that was supposed to be usable for firmware updates but then OCZ announced that they would not be providing firmware updates for Core V2 after all.

I now have a G.Skill Titan, 256GB, and I'm actually getting the 200MB/sec sequential read speed on it all the time. I also see sequential writes peaking at 165MB/sec, though usually averaging a bit lower than that. It's met all of its advertised specs, as far as I'm concerned, while the CoreV2 never met its specs. But if JMicron has newer firmware for this CoreV2's controller, I'd love to try it out.

The Titan really hit the mark for me, at under $2/GB. Before it hit the market, it was cheaper to assemble an array of 35MB/sec CF cards than to buy an SSD. Now the SSDs are finally more cost-effective than multi-slot CF-IDE adapters.
Does the Titan stutter?

Aris
Posts: 2299
Joined: Mon Dec 15, 2003 10:29 am
Location: Bellevue, Nebraska
Contact:

Post by Aris » Fri Jan 30, 2009 6:28 am

m^2 wrote:
highlandsun wrote:I now have a G.Skill Titan, 256GB, and I'm actually getting the 200MB/sec sequential read speed on it all the time.
Does the Titan stutter?
@highlandsun:
Who cares about sequential reads/writes? Tell us how it does with random reads/writes. Without cache to sequence data, it's inevitably going to be doing a lot of random reads/writes.

highlandsun
Posts: 139
Joined: Thu Nov 10, 2005 2:04 am
Location: Los Angeles, CA
Contact:

Post by highlandsun » Fri Jan 30, 2009 4:16 pm

Aris wrote:
Does the Titan stutter?
@highlandsun:
Who cares about sequential reads/writes? Tell us how it does with random reads/writes. Without cache to sequence data, it's inevitably going to be doing a lot of random reads/writes.
I care that the product meets the specs that were advertised. The Titan does; the CoreV2 doesn't.

As for stuttering - I've only been using it for 2 days. I've seen it stutter once, when I was applying a lot of system updates. Aside from that, it hasn't happened. So yes, the problem is still there, but it's a lot less frequent than with the CoreV2. I mainly compile and test code; that workload involves a lot of random reads and mostly cached writes, and the drive runs perfectly smoothly with it, much faster than any of my previous drives (120GB CoreV2, 7200RPM Hitachi 100GB).

Unfortunately, you very quickly get used to the speed. At one point I had to boot up my old laptop with a 7200RPM drive to find some backup files, and it was unbearably slow in comparison. (And no, it isn't really a bad or poky-slow laptop.)

highlandsun
Posts: 139
Joined: Thu Nov 10, 2005 2:04 am
Location: Los Angeles, CA
Contact:

Post by highlandsun » Fri Jan 30, 2009 5:23 pm

dhanson865 wrote: If I had my choice after reading that interview, I'd want firmware that reduced the usable space of the drive noticeably, so that wear leveling and controller optimizations would work better.

Obviously they went for 30/32, 60/64, and 120/128 with the V2, but I'm thinking 26/32, 52/64, and 104/128 wouldn't have been a bad thing. I'd rather have performance/reliability than capacity.
Until your usage actually hits close to those levels, it wouldn't make any difference. By the way, there is an ATA command to set the number of sectors that the drive will report. Any user can issue this command to lower the number (thus leaving more for spares). On some drives you can raise it as well (thus eating into the spares)...
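On Linux, hdparm's -N option exposes that command; here's a minimal sketch of how I'd poke at it from a script. The device name and sector count are placeholders, shrinking a drive you care about is at your own risk, and some hdparm versions want an extra confirmation flag for the set operation, so check the man page first:

import subprocess

DEV = "/dev/sdX"  # placeholder device name

def visible_sectors(dev):
    # "hdparm -N <dev>" reports the current and native max sector counts
    # (the feature mentioned above).
    result = subprocess.run(["hdparm", "-N", dev],
                            capture_output=True, text=True, check=True)
    return result.stdout

def shrink_visible_sectors(dev, sectors):
    # "-N p<count>" is supposed to make the smaller visible size permanent;
    # verify the exact syntax against your hdparm man page before using it.
    subprocess.run(["hdparm", f"-Np{sectors}", dev], check=True)

print(visible_sectors(DEV))
# shrink_visible_sectors(DEV, 104000000)  # hypothetical count; the rest becomes extra spare area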

Of course, if you did something stupid (like a Full Format instead of a Quick Format, which actually writes to every sector of the partition), you would be relying entirely on the hard spares, even if you don't fill the partition up.

dhanson865
Posts: 2198
Joined: Thu Feb 10, 2005 11:20 am
Location: TN, USA

Post by dhanson865 » Sat Jan 31, 2009 7:52 am

highlandsun wrote:Of course, if you did something stupid (like a Full Format instead of a Quick Format
I hadn't thought about that. I usually do a full format on traditional hard drives. Guess this gets added to the list.

1. Never do a full format on an SSD

2. Never put a swap file on an SSD. (Turn off virtual memory, or move the pagefile/swapfile to another drive, i.e. a traditional hard drive.)

3. Never defragment an SSD

4. Never assume your SSD won't fail. Always make backups to another media type.

Can you think of any other rules about SSD use to add?

josephclemente
Posts: 580
Joined: Sun Aug 11, 2002 3:26 pm
Location: USA (Phoenix, AZ)

Post by josephclemente » Sat Jan 31, 2009 1:17 pm

highlandsun wrote:Of course, if you did something stupid (like a Full Format instead of a Quick Format, which actually writes to every sector of the partition), you would be relying entirely on the hard spares, even if you don't fill the partition up.
How damaging is this? I couldn't find any documentation about it. While I never did a full format on my 32GB SSD, it was very easy to use up a large portion of the drive during my Vista install, before turning off the hibernation file and moving the page file to another drive. Eventually more applications will be added...

frostedflakes
Posts: 1608
Joined: Tue Jan 04, 2005 4:02 pm
Location: United States

Post by frostedflakes » Sat Jan 31, 2009 5:18 pm

In Windows at least, I don't think there's much difference between full and quick format. Both delete the master file table, and then the full format also checks for bad sectors (this is why it takes so much longer). AFAIK, neither overwrites the drive with zeros, which is probably what would be really damaging to an SSD.

josh-j
Posts: 11
Joined: Sat May 17, 2008 9:13 am
Location: UK

Post by josh-j » Mon Feb 02, 2009 11:40 am

frostedflakes wrote:neither overwrites the drive with zeros, which is probably what would be really damaging to an SSD
Why would overwriting the drive damage anything? Surely it would just use up one write of the 100,000 (or whatever) that each part of the drive can last. Not particularly great, but surely not that awful?

nutball
*Lifetime Patron*
Posts: 1304
Joined: Thu Apr 10, 2003 7:16 am
Location: en.gb.uk

Post by nutball » Mon Feb 02, 2009 12:05 pm

josh-j wrote:Why would overwriting the drive damage anything?
Because NULL != zero. A sector written full of zeroes is not an empty sector, as far as the drive is concerned, even though it is as far as the OS is concerned. The drive has no knowledge of why the OS wrote what it wrote to a given piece of storage. The drive has to assume that that sector contains information that the OS wishes to retain, hence it can't use that sector to replace another in a read-modify-write operation.

Filling a drive with deliberately written, unwanted zeroes uses up *all* of the latitude the drive has for wear-levelling or whatever else, and does so without storing any information that really needs to be stored.

highlandsun
Posts: 139
Joined: Thu Nov 10, 2005 2:04 am
Location: Los Angeles, CA
Contact:

Post by highlandsun » Mon Feb 02, 2009 2:13 pm

dhanson865 wrote:
highlandsun wrote:Of course, if you did something stupid (like a Full Format instead of a Quick Format
I hadn't thought about that. I usually do a full format on traditional hard drives. Guess this gets added to the list.

1. Never do a full format on an SSD

2. Never put a swap file on an SSD. (Turn off virtual memory, or move the pagefile/swapfile to another drive, i.e. a traditional hard drive.)

3. Never defragment an SSD

4. Never assume your SSD won't fail. Always make backups to another media type.

Can you think of any other rules about SSD use to add?
For my money, I'd say never use any search indexing tool, but I guess it's debatable. In my experience, the drive's read rate is fast enough that I can tolerate exhaustive searches on it. But indexers will chew up a lot of write cycles by updating their index DB over and over...

About your rule #2 - on Linux the OS hibernates into the swap space. So I create/format a swap partition, but leave it inactive all the time. When I want to hibernate, I run a script that activates the swap space first. That way I can hibernate when I want to, but otherwise don't use swap.
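The script itself is nothing fancy; here's a rough sketch of the idea (the swap partition path is a placeholder, it has to run as root, and it assumes the kernel's resume device is already configured):

import subprocess

SWAP_DEV = "/dev/sda5"  # placeholder swap partition

# Activate swap only for the duration of the hibernate, then turn it off again.
subprocess.run(["swapon", SWAP_DEV], check=True)
try:
    # Writing "disk" asks the kernel to suspend to disk; execution resumes
    # here after wake-up.
    with open("/sys/power/state", "w") as f:
        f.write("disk")
finally:
    subprocess.run(["swapoff", SWAP_DEV], check=True)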

Aris
Posts: 2299
Joined: Mon Dec 15, 2003 10:29 am
Location: Bellevue, Nebraska
Contact:

Post by Aris » Tue Feb 03, 2009 8:37 am

highlandsun wrote:
dhanson865 wrote:Can you think of any other rules about SSD use to add?
For my money, I'd say never use any search indexing tool, but I guess it's debatable. ... About your rule #2 - on Linux the OS hibernates into the swap space; when I want to hibernate, I run a script that activates the swap space first.
I wonder if Microsoft is going to ship a default set of system settings in Windows 7 for when you're using an SSD, to 1) take advantage of the increased bandwidth and 2) remove any operations that may permanently damage your drive.

The way it is now, it seems like if you try to use an SSD without being tech savvy, you can easily destroy your drive prematurely.

m^2
Posts: 146
Joined: Mon Jan 29, 2007 2:12 am
Location: Poland
Contact:

Post by m^2 » Tue Feb 03, 2009 10:59 am

nutball wrote:
josh-j wrote:Why would overwriting the drive damage anything?
Because NULL != zero. A sector written full of zeroes is not an empty sector, as far as the drive is concerned, even though it is as far as the OS is concerned. The drive has no knowledge of why the OS wrote what it wrote to a given piece of storage. The drive has to assume that that sector contains information that the OS wishes to retain, hence it can't use that sector to replace another in a read-modify-write operation.

Filling a drive with deliberately written, unwanted zeroes uses up *all* of the latitude the drive has for wear-levelling or whatever else, and does so without storing any information that really needs to be stored.
No, not all. They leave themselves a pool of blocks for this purpose. In enterprise solutions this can be a big one.

highlandsun
Posts: 139
Joined: Thu Nov 10, 2005 2:04 am
Location: Los Angeles, CA
Contact:

Post by highlandsun » Tue Feb 03, 2009 11:44 am

m^2 wrote:
nutball wrote: Filling a drive with deliberately written, unwanted zeroes uses up *all* of the latitude the drive has for wear-levelling or whatever else, and does so without storing any information that really needs to be stored.
No, not all. They leave themselves a pool of blocks for this purpose. In enterprise solutions this can be a big one.
Yes, we were already discussing the spare blocks.

viewtopic.php?p=450221#450221

Mohan
Posts: 74
Joined: Wed Nov 21, 2007 2:09 pm
Location: Germany

Post by Mohan » Tue Feb 03, 2009 7:33 pm

dhanson865 wrote: 2. Never put a swap file on an SSD. (Turn off virtual memory, or move the pagefile/swapfile to another drive, i.e. a traditional hard drive.)
Hmm... so what would you do in a notebook, where you can't live without a swap file? Not use an SSD at all?

m^2
Posts: 146
Joined: Mon Jan 29, 2007 2:12 am
Location: Poland
Contact:

Post by m^2 » Wed Feb 04, 2009 3:30 am

Mohan wrote:
dhanson865 wrote: 2. Never put a swap file on an SSD. (Turn off virtual memory, or move the pagefile/swapfile to another drive, i.e. a traditional hard drive.)
Hmm... so what would you do in a notebook, where you can't live without a swap file? Not use an SSD at all?
If you can afford an SSD, then buying 4GB of RAM shouldn't be a problem?

nutball
*Lifetime Patron*
Posts: 1304
Joined: Thu Apr 10, 2003 7:16 am
Location: en.gb.uk

Post by nutball » Wed Feb 04, 2009 5:41 am

m^2 wrote:No, not all. They leave themselves a pool of blocks for this purpose. In enterprise solutions this can be a big one.
Sorry, I'll rephrase my statement.

Say you have a 64GB drive, of which 60GB is presented as usable space, with 4GB reserved by the drive for its internal re-orderings.

Say that 5GB of data is written to the drive, then overwritten, then overwritten, then overwritten, in such a way that the quantity of data stored on the drive doesn't rise above 5GB (modulo any lazy block reuse the filesystem might do).

If the drive has any sense, it will use the 59GB which isn't storing useful data as a pool for its internal re-orderings, rather than just the minimum 4GB "reserved" for the purpose.

Filling the drive with 60GB of user-written zeroes means that only the bare minimum will be available for internal purposes, rather than the reserved + unused space that it could otherwise use.
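Or, as a trivial Python sketch with the same illustrative numbers:

RAW_GB, USABLE_GB, RESERVED_GB = 64, 60, 4
live_data_gb = 5

# Normal use: everything not holding live data can be used for re-ordering.
pool_normal = RAW_GB - live_data_gb            # 59GB
# After zero-filling all 60GB of user-visible space: only the reserve is left.
pool_after_zero_fill = RAW_GB - USABLE_GB      # 4GB

print(pool_normal, pool_after_zero_fill)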

redcokehome
Posts: 1
Joined: Thu Feb 05, 2009 6:55 am
Location: HONG KONG

I have already created the fastest SSD: 240MB/s read, 180MB/s write

Post by redcokehome » Thu Feb 05, 2009 6:57 am

I have already created the fastest SSD: 240MB/s read with 180MB/s write. If you guys are interested, please contact me.

cmthomson
Posts: 1266
Joined: Sun Oct 09, 2005 8:35 am
Location: Pleasanton, CA

Post by cmthomson » Thu Feb 05, 2009 3:11 pm

There are some misunderstandings in the above posts, and also some good suggestions.

To get a better understanding of SSD architecture and limitations, the AnandTech review of the X25-M is very informative.

Bottom line: what limits the lifetime of a flash-based SSD is the number of times each page (typically 128KB) is erased. After many erasures, the tunnel dielectric wears out (it starts to leak electrons, causing loss of charge in the storage cell). For MLC flash this is around 10,000 erasures, while for SLC it is around 100,000. (MLC stores two bits per cell using four voltages, while SLC stores one bit using two voltages; the reason SLC lasts longer is that there is more tolerance for variation in the voltages. The reason MLC drives are cheaper is that they have twice the density per chip.)

The number of times a particular page is erased is determined by lots of factors, but the main one is how well the controller does wear-leveling by minimizing page reuse. This involves not only keeping track of how many times a page has been erased, but also which page is freshest to use for the next write. It is complicated by what's called write amplification, which is caused by small (often 4K) random writes causing lots of page-sized (128K) erases.
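As a rough back-of-envelope sketch of how those factors interact (the capacity, daily write volume, and amplification factor below are illustrative assumptions, not measurements):

# How long until every cell has used up its rated erase cycles, assuming
# perfect wear leveling spreads the erases evenly.
CAPACITY_GB = 80             # drive capacity (assumed)
ERASE_CYCLES = 10_000        # MLC rating, per the figure above
HOST_WRITES_GB_PER_DAY = 20  # host writes per day (assumed)
WRITE_AMPLIFICATION = 3.0    # NAND writes per host write (assumed)

total_nand_writes_gb = CAPACITY_GB * ERASE_CYCLES
nand_writes_gb_per_day = HOST_WRITES_GB_PER_DAY * WRITE_AMPLIFICATION
years = total_nand_writes_gb / nand_writes_gb_per_day / 365
print(f"~{years:.0f} years of writes before the erase budget is exhausted")

Halving the write amplification roughly doubles that figure, which is why the controller's handling of small random writes matters so much.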

There are two things that make the Intel controller so much better than the JMicron controller used by, e.g., OCZ: better buffering of short writes (eliminating stutter), and lower write amplification through better management of small random writes (consolidating them into single pages to do fewer erases).


Now to address some of the above comments:

Formatting an SSD does not significantly shorten its life unless the controller is really stupid. That's because it causes about one erase per page as the OS overwrites all the "disk sectors". That is, a format consumes about 0.1% of a disk's lifetime. Don't confuse the OS and the SSD; the OS keeps track of which sectors have valid data, and the SSD keeps track of which pages have been erased more often. As the SSD is used, the mapping between OS sector numbers and SSD page numbers eventually becomes random.

The spare sectors on traditional drives allow for remapping of sectors that get too many ECC errors. On SSDs, the spares are used to replace pages that either are failing (due to ECC errors) or have crossed a threshold for erase cycles (excessive wear that might cause imminent failure). A good SSD controller will use both to extend drive longevity.

Defragmenting an SSD is a mixed bag. It reduces the number of small random writes by consolidating fragmented files onto fewer pages (reducing future erases), but it also does lots of writes, which means lots of current erases. I'd suggest defragmenting about once a week or so.

Although it's tempting to put the swap partition on an SSD to get better performance, it's generally not a good idea. First, it shortens the life of the SSD due to extra writing, and second, the performance improvement is minimal if you have sufficient RAM (2-4GB). For instance, my system has 2GB of RAM and never needs the page file, even though Windows writes a lot of stuff out there.

A new SSD is extremely unlikely to fail. However, over time as all of us become complacent about this, we're likely to be lax about taking backups. Don't. A regular (preferably automated) backup regime will sooner or later save your butt.

m^2
Posts: 146
Joined: Mon Jan 29, 2007 2:12 am
Location: Poland
Contact:

Post by m^2 » Fri Feb 06, 2009 2:19 am

cmthomson wrote:each page (typically 128KB) is erased.
You misread the review. They say it's about 128 pages of 4K each making up a single erase block. And the 0.5MB block in the X25-M is very small; 2-4MB is more like the average.
cmthomson wrote:Defragmenting an SSD is a mixed bag. It reduces the number of small random writes by consolidating fragmented files onto fewer pages (reducing future erases), but it also does lots of writes, which means lots of current erases. I'd suggest defragmenting about once a week or so.
Not really. File fragmentation doesn't cause _any_ additional writes, because you only write to free space, don't you? So defragmenting free space could be beneficial, and additionally defragmenting files could be used to prevent future fragmentation; exactly the opposite of what hard drives need and what defragmenters do, except for a special version of Diskeeper.

But there's one more thing to think about, and I don't know how it works.
Wear leveling uses remapping, and what it presents to external devices has little to no relation to the physical data layout. What looks like a contiguous block to the OS is most likely scattered all over the SSD.
Now there's a question: do they remap pages or erase blocks? If blocks, then there's some connection between physical and logical layout, and considering that blocks can be big (like 8MB), this might make defragmentation help somehow. But to really work well, a defragmenter should be aware of the erase block size and shouldn't do optimizations across the whole drive (those are pointless), but instead optimize each block independently.

But I think that SSDs don't remap blocks but rather pages; it would allow much more efficient wear leveling. If you make a large, contiguous write, the SSD will most likely write it more or less contiguously, to make fewer total writes. So, heuristically, some blocks that appear contiguous really are contiguous, which may let SSD-aware defragmenters have some positive effect. And there's one more question: how long does that rule hold? After 3 years and 2 OS reinstallations, there will be few logical blocks that aren't scattered physically, unless the firmware tries to optimize for it.

ADDED: I checked one thing that could prevent remapping at the page level: the bigger map needed.
For a 64GB disk (raw capacity, including the inaccessible reserve) that's ~48MB, or 0.073%.
It grows with capacity; for 1TB you need 0.085%. Still no problem, I guess.
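A quick Python sketch of that arithmetic (assuming 4K pages and one tightly packed map entry of ceil(log2(page_count)) bits per page); it reproduces the ~48MB / 0.073% and 0.085% figures:

import math

def map_overhead(raw_bytes, page_size=4096):
    pages = raw_bytes // page_size
    bits_per_entry = math.ceil(math.log2(pages))   # bits needed to address any page
    table_bytes = pages * bits_per_entry / 8
    return table_bytes, table_bytes / raw_bytes

for label, raw in (("64 GB", 64 * 2**30), ("1 TB", 2**40)):
    table_bytes, ratio = map_overhead(raw)
    print(f"{label}: map ~{table_bytes / 2**20:.0f} MB, {ratio:.3%} of raw capacity")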

bgiddins
Posts: 175
Joined: Sun Sep 14, 2008 1:04 am
Location: Australia

Re: I have already created the fastest SSD: 240MB/s read, 180MB/s write

Post by bgiddins » Fri Feb 06, 2009 5:01 am

redcokehome wrote:I have already created the fastest SSD: 240MB/s read with 180MB/s write. If you guys are interested, please contact me.
How about you just post details on the forum?

dhanson865
Posts: 2198
Joined: Thu Feb 10, 2005 11:20 am
Location: TN, USA

Post by dhanson865 » Sat Feb 07, 2009 5:51 am

cmthomson wrote:The spare sectors on traditional drives allow for remapping of sectors that get too many ECC errors. On SSDs, the spares are used to replace pages that either are failing (due to ECC errors) or have crossed a threshold for erase cycles (excessive wear that might cause imminent failure). A good SSD controller will use both to extend drive longevity.

Defragmenting an SSD is a mixed bag. It reduces the number of small random writes by consolidating fragmented files onto fewer pages (reducing future erases), but it also does lots of writes, which means lots of current erases. I'd suggest defragmenting about once a week or so.
1. Are you sure wear leveling doesn't kick in from day one? You imply the drive won't use wear leveling to randomize write locations until later in the drive's life. I'm under the impression that the wear leveling randomization is continuous, or at least happens earlier in the life cycle of the drive.

2. If wear leveling is involved, then you should never defrag your SSD. For the reasons why, I'll just quote and link:

http://forums.storagereview.net/index.p ... opic=25228
imsabbel wrote:
bfg9000 wrote: Flash is traditionally slow with writing because it is block-erased immediately before write operation. Having more contiguous open space available increases the odds that all the required space will already carry 1s or can be block-erased in one go, and that data bits will not need to be read then rewritten into a block.
That's true, but consider:
Wear leveling causes all of this not to be exposed to the OS at all.
If you write block 00001 1000 times, it will always point to a different physical block.
If you overwrite block 0815, the physical block the data was initially in might not be touched at all, as the wear leveling will point the write to another block.
"Contiguous open space", as seen by the OS and the filesystem, has no meaning at the physical layer.

All this logic (distributing writes towards the most suitable memory regions, regarding block health status, bursts, etc.) will be entirely in the domain of the controller inside the SSD.

- SSDs may benefit from defragmentation, at least for writing
- There is no mechanism by which current OSes can defragment them (since there is no standard mechanism for even reporting file location to the OS)
Scott C. wrote:
bfg9000 wrote:If the SSD is based on flash memory then defrag can help writing speed by keeping large contiguous blocks of open space available.

Flash is traditionally slow with writing because it is block-erased immediately before write operation. Having more contiguous open space available increases the odds that all the required space will already carry 1s or can be block-erased in one go, and that data bits will not need to be read then rewritten into a block.
This is completely incorrect.

More contiguous space as seen by the OS will have ZERO effect on contiguous space within the SSD unless it has almost no wear-leveling. Plus, defragging will force it to write more; don't do that.

If the OS sees a large contiguous block of free address space, that may be mapped to a ton of little bits here and there. There is no reason to defrag an SSD, EVER, unless it doesn't have wear-leveling or bad block re-mapping, in which case you aren't running a consumer OS; you're dealing with firmware on an SSD or similar device.

Yeah, that's right: the firmware does the 'defrag' it needs to find or create empty blocks for writing, on its own, in the background. A good one (like Intel's) is doing a lot of this sort of work behind the scenes.

Don't defrag SSDs. Internally, everything is fragmented and moving around in there, and defragging won't have ANY effect on contemporary SSDs other than shortening their life span.
And finally, I'll stop with this quote from OCZ that is plastered all over their site:
OCZ wrote:IMPORTANT NOTE: Solid State Drives DO NOT require defragmentation. It may decrease the lifespan of the drive.

cmthomson
Posts: 1266
Joined: Sun Oct 09, 2005 8:35 am
Location: Pleasanton, CA

Post by cmthomson » Sat Feb 07, 2009 4:20 pm

dhanson865 wrote:1. Are you sure wear leveling doesn't kick in from day one? You imply the drive won't use wear leveling to randomize write locations until later in the drive's life. I'm under the impression that the wear leveling randomization is continuous, or at least happens earlier in the life cycle of the drive.

2. If wear leveling is involved, then you should never defrag your SSD.
1. Of course wear leveling starts from day one. I'm not sure why you thought I said otherwise, unless it's an inference that defragmentation might provide a future benefit.

2. The defrag issue is very subtle. For people wanting really simple rules, the rule should be: don't defrag. For those who can tolerate more nuance, it's not that straightforward. Defragmentation of an SSD trades off extra erases now for the potential of fewer erases later. That's because defragmenting has some chance of gathering scattered files into single erase blocks, so that when they are replaced later they cause fewer erases. This potential benefit accrues only for large files that are initially written as fragments (e.g., real-time audio recording files). FWIW, I do capture lots of long audio files, and they meet this criterion. That's why I recommended infrequent defrags. In my case weekly makes sense, and it happens to fit in with the scheduling ability of my software.

Defragmentation software could help with this subtlety by only rewriting files that exceed the erase block size and leaving smaller files alone. Diskeeper claims to have "SSD aware technology", but I have no idea whether this is what they mean.
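To make that selection rule concrete, here's a small Python sketch; the 2MB erase block size and the folder path are illustrative assumptions, and it only reports candidates rather than moving anything:

import os

ERASE_BLOCK = 2 * 1024 * 1024  # assumed erase block size

def defrag_candidates(root):
    # Only files bigger than one erase block are worth consolidating;
    # smaller files get left alone.
    for dirpath, _, names in os.walk(root):
        for name in names:
            path = os.path.join(dirpath, name)
            try:
                size = os.path.getsize(path)
            except OSError:
                continue  # vanished or unreadable; skip
            if size > ERASE_BLOCK:
                yield path, size

for path, size in defrag_candidates("/captures"):  # hypothetical folder
    print(f"{size // ERASE_BLOCK:5d} erase blocks  {path}")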

Aris
Posts: 2299
Joined: Mon Dec 15, 2003 10:29 am
Location: Bellevue, Nebraska
Contact:

Post by Aris » Mon Feb 09, 2009 2:56 pm

cmthomson wrote: Defragmenting an SSD is a mixed bag. It reduces the number of small random writes by consolidating fragmented files onto fewer pages (reducing future erases), but it also does lots of writes, which means lots of current erases. I'd suggest defragmenting about once a week or so.
Every SSD manufacturer I've seen so far has recommended AGAINST ever defragmenting them, and usually gives a warning that it may prematurely kill the drive.

I'd really like to see where you got your information that brought you to this conclusion.

Plekto
Posts: 398
Joined: Tue Feb 19, 2008 2:08 pm
Location: Los Angeles

Post by Plekto » Mon Feb 09, 2009 4:45 pm

I'd like to add that there is another option that's also a solid-state drive, i.e. the real deal:

The ANS-9010B
http://techreport.com/articles.x/16255

Obviously you need a battery backup on such a system as well (or the optional battery pack), just in case. But it is shockingly fast. None of the issues of a typical SSD, either, and no need to ever defragment.

Of note is the CF slot in front. If power is lost, it can be set to automatically back everything up to a CF card. This takes about 20 minutes or so (assuming you're running the full 32GB; much less for, say, a perfectly reasonable 8GB boot drive), and the battery pack lasts about 4-6 hours. (Plenty of leeway here; more if you have the external power brick plugged into a UPS, and then we're talking days without power.)

Intel's option is very expensive. The one-SATA-port version of this can be had for a paltry $250 by comparison. 16GB of memory is about $150 these days as well, bringing it to about $400. Not entirely unreasonable, IMO, considering the benefits.

dhanson865
Posts: 2198
Joined: Thu Feb 10, 2005 11:20 am
Location: TN, USA

Post by dhanson865 » Tue Feb 10, 2009 6:49 am

cmthomson wrote: 1. Of course wear leveling starts from day one. I'm not sure why you thought I said otherwise
The quote that sounds otherwise to me is:
The spare sectors on traditional drives allow for remapping of sectors that get too many ECC errors. On SSDs, the spares are used to replace pages that either are failing (due to ECC errors) or have crossed a threshold for erase cycles (excessive wear that might cause imminent failure).
That makes it sound to me like you are saying wear leveling won't kick in until there is an ECC error or a page has crossed a threshold for erase cycles.

I hope you can see how other people could read it that way, even if you didn't mean specifically that.
