RAID and SSDs
Thank you all for trying to help, but there's no need to reply anymore.
Using RAID on a single HDD is a bad idea for obvious reasons, but is it the same for SSDs?
The reason I'm asking is because I don't know how SSD's work, and what their bottlenecks actually are.
Perhaps they would have to be optimized for it, maybe with more than one controller chip, but I wouldn't be surprised if special RAID SSDs showed up in the near future. That is, if they don't exist already.
Edit: Found this, but that's not really a standard SSD . . .
Last edited by Mats on Tue Sep 09, 2008 9:54 am, edited 1 time in total.
Can you explain why you want to do this in the first place?
Fundamentally, there are two reasons to run RAID: failover and speed. If you stripe a logical volume across two disks you'll get roughly twice the performance; if you mirror a volume on two drives you can afford losing one of them.
None of these reasons applies if you RAID different partitions of a single drive, regardless of the type of drive -- the pipe is only so big, and if the drive goes poof then that's it.
That's why it doesn't seem to make sense.
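The two cases above can be sketched in a few lines of Python. This is a toy model of the data layout only, not of real controllers or timing; the chunk size and disk count are arbitrary illustrative choices:

```python
# Toy model of RAID data layouts (illustrative only).

def stripe(data: bytes, n_disks: int = 2, chunk: int = 4):
    """RAID 0 layout: split data round-robin into per-disk stripes."""
    disks = [bytearray() for _ in range(n_disks)]
    for i in range(0, len(data), chunk):
        disks[(i // chunk) % n_disks] += data[i:i + chunk]
    return disks

def mirror(data: bytes, n_disks: int = 2):
    """RAID 1 layout: every disk holds a full copy."""
    return [bytearray(data) for _ in range(n_disks)]

payload = bytes(range(16))
striped = stripe(payload)
mirrored = mirror(payload)

# Striped: each disk holds half the data, so two independent
# disks can serve the read in parallel.
assert sum(len(d) for d in striped) == len(payload)
# Mirrored: either disk alone holds everything, so one can fail.
assert all(bytes(d) == payload for d in mirrored)
```

In the striped layout each disk holds half the data, so two independent disks can work in parallel; in the mirrored layout either copy alone can reconstruct everything. Neither property helps when both "disks" are partitions of the same physical drive, which is the point made above.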
An HDD is limited above all by its mechanical construction: the read/write heads.
It can only read or write at one place at a time (all heads move together), even if that means the same spot on each platter side.
I'm not totally sure what limits an SSD, but my guess is the actual reading/writing to the memory chips.
The big difference between the two is that the SSD is not limited the way the HDD is: in theory it can read and write data in multiple places at the same time.
If this is true, then it is possible that the true limiting factor is how fast it can read/write to each memory chip. So in terms of limitations, one memory chip equals one HDD.
The solution is of course to use RAID. Partition the SSD so that each volume is placed exactly on one or more memory chips, depending on desired size.
Since this is done on one SSD it's obviously for getting better read performance. Write speed would be the same, but read speed would be higher since the system can access the same data from each partition.
Okay, so you're doing this for speed. Thanks.
It's an interesting line of thought you've got going there!
I can see a number of problems: how to place the partitions so they actually line up with physical chips, whether internal wear levelling wouldn't totally invalidate that, and whether an SSD really has a wider pipe on the interface than internally (which is what you're implying if you expect any benefit from concurrent reads/writes).
But instead of that, I'll wish you the best of luck, and ask that you try to do some good speed tests and report back.
A thousand thanks!!
If you know that the data is stored in some kind of order, like from chip 1 to n, and not in parallel like on HDDs, then it's quite easy to place the partitions for testing: make margins.
Place small, unused partitions between the ones you use.
But hey, now that I think of it, this seems like a contradiction . . .
The SSD must access the chips in parallel; that's the fastest way to do it, just like it is for HDDs. And if that's true, then it won't work.
Of course it spreads the data over all chips . . .
Re: RAID and SSDs
If you partition an SSD in two, you are not rewarded with twice as many memory controllers, so it won't operate twice as fast.
Re: RAID and SSDs
yefi wrote: If you partition an SSD in two, you are not rewarded with twice as many memory controllers, so it won't operate twice as fast.
True, the whole idea was based on the assumption that the memory controller was much faster than the memory chips.
But, as I've already said, the chips are probably already accessed in parallel anyway so it doesn't matter.
Mats wrote: I'm not totally sure what limits an SSD, but my guess is the actual reading/writing to the memory chips.
The big difference between the two is that the SSD is not limited the way the HDD is: in theory it can read and write data in multiple places at the same time.
If this is true, then it is possible that the true limiting factor is how fast it can read/write to each memory chip. So in terms of limitations, one memory chip equals one HDD.
I'm not sure about the design of SSDs, but I do know a bit about NAND flash (as an embedded software developer). I'd assume that the SSD has a controller chip that talks SATA on one side and to all the NAND flash chips on the other.
Therefore it should be configured to use each block of each device as efficiently as possible and interleave them. So it's the command sent to the SSD that controls how much data to read/write, and therefore how many of the NAND flash devices are used in parallel, ranging from 1 to all. An example: if each block is 128 KB, then reading a 32 MB file means reading 256 blocks. If the SSD has 32 NAND flash devices, that's 8 blocks each -- and assuming it can buffer all the data, that would make it pretty quick.
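That interleaving arithmetic can be sketched as a toy model. The round-robin mapping, 128 KB block size, and 32-chip count here are illustrative assumptions, not any real drive's layout; a 32 MB read is used so the numbers divide evenly:

```python
# Toy sketch of a controller interleaving logical blocks across
# NAND chips round-robin (assumed mapping; real firmware also
# handles wear levelling and bad-block remapping).

BLOCK_SIZE = 128 * 1024   # 128 KB per NAND block (assumed)
N_DEVICES = 32            # NAND flash chips on the drive (assumed)

def device_for_block(logical_block: int) -> int:
    """Consecutive logical blocks land on different chips."""
    return logical_block % N_DEVICES

file_size = 32 * 1024 * 1024          # a 32 MB read
n_blocks = file_size // BLOCK_SIZE    # 256 blocks to fetch

# Count how many blocks each chip must serve for this read.
per_device = [0] * N_DEVICES
for blk in range(n_blocks):
    per_device[device_for_block(blk)] += 1

# The load spreads evenly: 8 blocks per chip, servable in parallel.
assert n_blocks == 256
assert all(count == 8 for count in per_device)
```

The point of the round-robin choice is that any sufficiently large read automatically touches every chip, so the parallelism happens inside the drive whether or not the host does anything clever.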
Of course, it's up to the SSD developer how the device works, which could make a real difference to performance. I doubt that RAID over two partitions of one SSD would make it any faster. If the SSD isn't doing anything clever internally then RAID could help, provided you know the exact setup and configure the RAID to match.
CoolGav: It looks like you didn't read the whole thread.
Mats wrote:But hey, now that I think of it, this seems like a contradiction . .
The SSD must access the chips in parallel, that's the fastest way to do it, just like it is for HDD's. And if that's true, then it won't work.
Of course it spreads the data over all chips. . .
OK, but do you know about bad blocks, garbage collection, block erasing, wear levelling, etc.? The controller on the SSD has to take all of these into account, and they determine what it's doing and where the data lives. So without detailed information on how a specific SSD actually works, it's not possible to know whether a given scenario will give the best performance. NAND flash is not really like a hard drive: SSDs look like hard drives to the outside world, but there's as much technical detail involved as in controlling the mechanics of a hard drive.
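To illustrate why the host can't know where its data lives, here is a deliberately crude flash-translation-layer sketch. It is purely hypothetical (real firmware also tracks bad blocks, erase cycles, and wear counters); the point is that every write to the same logical block lands on a different physical block:

```python
# Crude flash-translation-layer sketch (hypothetical, illustrative).
# The host addresses logical blocks; the controller remaps each
# write to a fresh physical block for wear levelling.

class ToyFTL:
    def __init__(self, n_physical: int):
        self.free = list(range(n_physical))  # erased physical blocks
        self.map = {}                        # logical -> physical

    def write(self, logical: int) -> int:
        """Writes always go to a fresh block; the old one is recycled."""
        new = self.free.pop(0)
        old = self.map.get(logical)
        if old is not None:
            self.free.append(old)  # garbage collection, grossly simplified
        self.map[logical] = new
        return new

ftl = ToyFTL(n_physical=8)
first = ftl.write(logical=0)
second = ftl.write(logical=0)   # same logical block, new physical home
assert first != second          # physical location moved under the host
```

Since this remapping happens inside the drive, any partition layout the host chooses says nothing about which NAND chips the data actually occupies -- which is the problem with trying to align RAID partitions to chips.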
CoolGav wrote: OK, but do you know about bad blocks, garbage collection, block erasing, wear levelling, etc.? The controller on the SSD has to take all of these into account, and they determine what it's doing and where the data lives. So without detailed information on how a specific SSD actually works, it's not possible to know whether a given scenario will give the best performance.
I really don't see what your point is. I posted five days ago that I didn't think it would work.
CoolGav wrote: NAND flash is not really like a hard drive: SSDs look like hard drives to the outside world, but there's as much technical detail involved as in controlling the mechanics of a hard drive.
I do know that . . .
Mats wrote: I'm not totally sure what limits an SSD, but my guess is the actual reading/writing to the memory chips.
The big difference between the two is that the SSD is not limited the way the HDD is: in theory it can read and write data in multiple places at the same time.
Interleaving will slow down performance if you try it on one SSD. At best there will be no performance gain. RAID will issue concurrent requests thinking it can get two pieces of data at the same time, but the SSD will just make it wait until the first request is finished.
Personally, I think anyone who runs RAID0 on non-expendable data is mad.