Independent Mtron SSD + MyISAM Benchmarks
Big DBA Head has run some independent MySQL benchmarks with the Mtron SSD drives that I’ve been playing with.
Great to see that we’re coming to the same conclusions. It’s nice to have your research validated.
Run time dropped from 1979 seconds on a single Raptor drive to 379 seconds on the standalone Mtron drive, an improvement of over 5x. Based on the generic disk benchmarks I would have expected slightly lower runtimes.
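Back of the envelope, the numbers above work out as:

```python
# Runtimes from the benchmark (seconds).
raptor_seconds = 1979  # single Raptor drive
mtron_seconds = 379    # standalone Mtron SSD

speedup = raptor_seconds / mtron_seconds
print(f"{speedup:.2f}x")  # 5.22x
```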
I think he might be missing one advantage. If your workload is more than 50% writes you would probably see lower throughput, but you can have a MUCH larger database, and when you NEED to do random reads you can get to your data easily and quickly.
An HDD prevents this since writes saturate the disk's IOPS, leaving nothing for reads.
This lets you take the money you're paying for memory and buy 10x the capacity for a slight performance hit.
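To make the 10x claim concrete, here's a sketch of the trade-off. The per-GB prices are illustrative assumptions, not quotes:

```python
# Hypothetical prices -- assumed for illustration only.
ram_cost_per_gb = 100.0  # server DRAM, $/GB (assumption)
ssd_cost_per_gb = 10.0   # SLC SSD, $/GB (assumption)

ram_gb = 32  # the memory you were going to buy
budget = ram_gb * ram_cost_per_gb

ssd_gb = budget / ssd_cost_per_gb
print(ssd_gb)  # 320.0 -- 10x the capacity for the same spend
```

At a 10:1 price ratio the same budget buys 10x the working-set capacity, at SSD latencies instead of DRAM latencies.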
I suspect these problems will be resolved in the next six months.
There are a few possible ways this issue could be solved:
* Someone will write a Linux block driver that implements a log-structured filesystem. You could then export the SSD as an LSFS and re-mount it as XFS. Random write performance would soar, and you'd keep XFS features like constant-time snapshots.
* Log structured databases like PBXT will be tuned for SSD which will increase the performance.
* Someone could patch InnoDB to handle 4k pages. I tried to compile InnoDB with 8k pages and it just dumped core. I think InnoDB's performance will really shine on SSD once this has been fixed. One other potential problem: during my tests InnoDB became CPU bound when working with the buffer pool, and I'm not sure what the cause is.
* SSD vendors will implement a native LSFS on the drives themselves. This would also help with wear leveling and negate the problems with the flash translation layers in the drives. I suspect STEC is already doing this.
* No Flash Translation Layer at all. Instead of block devices the drives could be exported as MTD devices, which could boost performance with MTD-aware filesystems.
* Raw random write IOPS upgrades on the drives themselves. Instead of only 180 random write IOPS we could see drives doing 1k random write IOPS in Q2.
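The log-structured idea in the first bullet can be sketched in a few lines. This is a toy Python model, not a real block driver: the point is just that every logical-block overwrite becomes a sequential append to a log, with an in-memory map from logical block to log offset. A real implementation would also need garbage collection and crash recovery, both omitted here:

```python
# Toy log-structured store: random writes -> sequential appends.
class LogStructuredStore:
    BLOCK_SIZE = 4096

    def __init__(self):
        self.log = bytearray()  # stands in for the raw SSD
        self.index = {}         # logical block number -> offset in log

    def write_block(self, block_no, data):
        assert len(data) == self.BLOCK_SIZE
        # Always append: the device only ever sees sequential writes,
        # which is exactly what these SSDs are fast at.
        self.index[block_no] = len(self.log)
        self.log += data

    def read_block(self, block_no):
        off = self.index[block_no]
        return bytes(self.log[off:off + self.BLOCK_SIZE])

store = LogStructuredStore()
store.write_block(7, b"a" * 4096)
store.write_block(7, b"b" * 4096)  # overwrite appends; old copy is garbage
print(store.read_block(7)[:1])     # b'b'
```

The index always points at the newest copy, so reads stay random-access while writes stay sequential; the stale first copy of block 7 is what garbage collection would eventually reclaim.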