Article: "SSD Lackluster for Laptops, PCs"
Moderators: NeilBlanchard, Ralf Hutter, sthayashi, Lawrence Lee
-
- SPCR Reviewer
- Posts: 8636
- Joined: Sat Nov 23, 2002 6:33 am
- Location: Sunny SoCal
It seems to be a decent article (I haven't finished page 1 yet); I'll catch up with it later.
I spotted this in the article - "I think you need to get to 128GB for around $200, and that's going to happen around 2010." - and I would guesstimate that a 128GB SSD will cost $200 in 9-12 months from now.
Andy
-
- Patron of SPCR
- Posts: 744
- Joined: Tue Mar 04, 2008 4:05 am
- Location: London
- Contact:
Other than the super high-end SSDs, speeds are still not that great (particularly writes), and given the price it's easy to see why most people are waiting. Give it 2 or 3 years and even the cheaper ones will be much faster than current mechanical drives, and at a price where it will be worth it for a lot more people (although it'll be a looong time before they match the $/GB of 3.5" drives).
-
- Posts: 139
- Joined: Thu Nov 10, 2005 2:04 am
- Location: Los Angeles, CA
- Contact:
ehh... As a software developer on Linux, I'm happy I made the jump already. Read speeds are terrific, and write speeds are irrelevant because writes are absorbed by the filesystem cache. On a crummy OS like Windows with a braindead and non-tunable memory management policy, you're kinda stuck. But on an OS like Linux that lets you tune size and timing parameters of the cache, no problem...
Faster boots, faster compile times, It's all good.
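The write-absorption effect described above can be sketched with a quick timing comparison (a hypothetical Python micro-test, not anything from the article): buffered writes land in the page cache and return almost immediately, while calling fsync() after every block forces each one down to the device:

```python
import os
import tempfile
import time

def time_writes(fsync_each: bool, n: int = 200, block: bytes = b"x" * 4096) -> float:
    """Write n 4 KiB blocks to a temp file, optionally fsync()ing after each one.
    Returns total elapsed seconds."""
    fd, path = tempfile.mkstemp()
    start = time.perf_counter()
    try:
        for _ in range(n):
            os.write(fd, block)
            if fsync_each:
                os.fsync(fd)  # bypass the page cache's absorption: force the block out
    finally:
        os.close(fd)
        os.unlink(path)
    return time.perf_counter() - start

buffered = time_writes(fsync_each=False)
synced = time_writes(fsync_each=True)
print(f"buffered: {buffered:.4f}s  fsync-per-block: {synced:.4f}s")
```

The gap between the two numbers is exactly what a large, lazily flushed cache buys you on a drive with slow writes.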
I went and looked at some of the reviews on Newegg from people who are actually buying these things -- specifically the OCZ Core 64GB (from the article on HotHardware linked by Mike).
In addition to slower-than-expected write speeds, there seem to be quite a number of issues with a lack of multi-tasking on these drives. Based on what I saw, when multiple programs attempt to write, the drive will hang for a second or two while it sorts things out.
Granted, not everyone had this problem, but those with issues had consistent symptoms.
This leaves me scratching my head even more, as I want to make the jump away from SCSI, but not lose disk I/O performance.
I believe there have been specific issues with the first-generation OCZ drives on writes and timeouts.
I got two small SSDs - cheap MLC drives of the new high-speed variety. One is a Super Talent, the other is a no-name OEM thingy (the drive firmware just identifies itself as "SATA SSD").
They do 120MB/sec reads, with random reads that beat a high-end RAID5 of six 15K RPM SAS drives.
Sequential writes are decent (70MB/sec); random writes suck, but on Linux you can tune a fair bit, although some of the advice I see on the net does not match the benchmarks I have done.
All in all, the 60GB Super Talent does a very nice job as a root drive in my HTPC. It allows me to tell Linux to spin down all the other drives when it's not doing anything.
Performance feels better than it did with a hard drive, and there is absolutely no delay from spinning up the drive if it suspends.
The random writes are not a big issue there at all.
I can, however, with cleverly written benchmarks, make the cheap SSD choke at just 4 small random writes per second. If you have an I/O pattern that triggers this, you will be truly miserable.
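The worst-case pattern described above - a handful of small, synchronous writes scattered at random offsets - can be sketched like this (a hypothetical Python snippet, not the poster's actual benchmark; serious measurements would use a tool like fio with direct I/O):

```python
import os
import random
import tempfile

BLOCK = 4096                    # 4 KiB writes: small relative to the flash erase block
FILE_SIZE = 64 * 1024 * 1024    # 64 MiB test file

def random_write_pass(path: str, writes: int = 64) -> int:
    """Issue small synchronous writes at random block-aligned offsets.
    Returns the number of writes issued."""
    fd = os.open(path, os.O_WRONLY)
    try:
        for _ in range(writes):
            offset = random.randrange(0, FILE_SIZE // BLOCK) * BLOCK
            os.lseek(fd, offset, os.SEEK_SET)
            os.write(fd, os.urandom(BLOCK))
            os.fsync(fd)  # each write must hit the device; on early MLC SSDs this can trigger a slow read-modify-erase-write cycle
    finally:
        os.close(fd)
    return writes

# Create a pre-sized file so writes land inside allocated space.
fd, path = tempfile.mkstemp()
os.ftruncate(fd, FILE_SIZE)
os.close(fd)
issued = random_write_pass(path)
os.unlink(path)
print(issued)
```

On a drive with the pathology described, timing this loop shows throughput collapsing to a few writes per second, even though the same drive streams sequential data at 70MB/sec.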
-
- Posts: 316
- Joined: Thu Aug 10, 2006 11:07 am
-
- Posts: 139
- Joined: Thu Nov 10, 2005 2:04 am
- Location: Los Angeles, CA
- Contact:
I bought a 120GB Core V2 SSD for my HP dv5z laptop. It definitely has problems on Vista, as already described in many other posts all over. On Linux I set my cache timeout to 10 minutes.
# grep vm /etc/sysctl.conf
vm.dirty_writeback_centisecs = 60000
vm.dirty_expire_centisecs = 60000
vm.dirty_ratio = 80
vm.laptop_mode = 5
vm.swappiness = 0
It's important that you have sufficient RAM (I have 4GB in the laptop) and you never swap to the SSD (so, vm.swappiness=0). These drives definitely bog down if you try to do any other operations while any write is in progress. So the trick is to hold write data in cache for as long as possible, to reduce their frequency to near zero, so that only reads are seen by the SSD.
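To verify the settings actually took effect (e.g. after running `sysctl -p`), you can read the live values back from /proc/sys/vm; a minimal sketch, using the tunable names from the config above:

```python
from pathlib import Path

# Tunables from the sysctl.conf above; read the kernel's current values.
KNOBS = [
    "dirty_writeback_centisecs",
    "dirty_expire_centisecs",
    "dirty_ratio",
    "swappiness",
]

def read_vm_knobs() -> dict:
    """Return the current value of each vm.* tunable from /proc/sys/vm."""
    values = {}
    for knob in KNOBS:
        path = Path("/proc/sys/vm") / knob
        if path.exists():  # some knobs (e.g. laptop_mode) are absent on certain kernels
            values[knob] = int(path.read_text().strip())
    return values

current = read_vm_knobs()
for knob, value in sorted(current.items()):
    print(f"vm.{knob} = {value}")
```

If the printed values don't match what's in /etc/sysctl.conf, the file hasn't been applied yet for the running kernel.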