SSD + RAID sequential read performance falloff.

This IOPS distribution is very interesting.

I’m playing with a RAID array of 5x Intel X-25E drives.

It turns out they need a lot of tuning. I’ll blog about this later.

What's more interesting is this distribution of IOPS across threads.

This is using sysbench's seqrd file I/O test.
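For context, the invocation was along these lines (a sketch using the old sysbench 0.4-style flags; the file size, thread count, and runtime here are illustrative, not my exact parameters):

    # prepare the test files once
    sysbench --test=fileio --file-total-size=4G prepare

    # sequential read test; vary --num-threads to see the falloff
    sysbench --test=fileio --file-total-size=4G \
        --file-test-mode=seqrd --num-threads=8 --max-time=60 run

    # clean up the test files afterwards
    sysbench --test=fileio --file-total-size=4G cleanup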

My hypothesis is that the drive's wear leveling is interacting with ext3's block group layout.

I also tried ext3's stride option, and that did yield a performance boost. I'm going to try XFS, but I'm on a CentOS box for testing.
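For reference, stride is set at mkfs time; a sketch of the kind of invocation I mean (the device name and RAID chunk size here are assumptions, not my actual setup):

    # stride = RAID chunk size / filesystem block size;
    # e.g. a 64KB chunk with 4KB blocks gives stride=16
    mkfs.ext3 -b 4096 -E stride=16 /dev/md0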

Anyone have another theory as to what could be causing this?

I can’t wait until RAID is dead.

Update: This problem actually happens on a single Intel X-25E, so it seems like a hardware issue rather than anything RAID-specific.

Update 2: It turns out that this is NOT a bug with the Intel SSD. I'm pretty sure it's a bug (or a misconfiguration on my part) in sysbench. Performing the same tests with 'dd' shows that IO scales linearly up to at least 10 parallel sequential reads. So something else is broken here.
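For the curious, the dd check was roughly of this shape (a sketch; the file names and sizes are placeholders, not my exact commands):

    # drop the page cache first so reads actually hit the device
    sync; echo 3 > /proc/sys/vm/drop_caches

    # 10 parallel sequential readers, each on its own file
    for i in $(seq 1 10); do
        dd if=test_file.$i of=/dev/null bs=1M count=1024 &
    done
    wait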


