Thoughts on InnoDB Page Compression
I spent the last couple of days playing with InnoDB page compression on the latest Percona build.
I’m pretty happy so far with Percona and the latest InnoDB changes.
Compression wasn’t living up to my expectations though.
I think the biggest problem is that compression can only use one core during replication and ALTER TABLE statements.
We have an 80GB database that was running on boxes with 96GB of RAM.
I wanted to try to run this on much smaller instances (32GB-48GB boxes) by compressing the database.
Unfortunately, after 24 hours of running ALTER TABLE statements, each of which would use only one core per table, the SQL replication thread hit 100% CPU and started falling behind fast.
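For reference, this is roughly how I'm enabling compression (the table name and KEY_BLOCK_SIZE here are just examples; compressed tables require the Barracuda file format and per-table tablespaces):

```sql
-- Compressed tables need the Barracuda file format and
-- per-table tablespaces (InnoDB plugin / Percona builds).
SET GLOBAL innodb_file_format = Barracuda;
SET GLOBAL innodb_file_per_table = 1;

-- Rebuild the table with compressed pages; an 8KB key block
-- halves the default 16KB page size. This rebuild is the
-- single-core ALTER TABLE described above.
ALTER TABLE my_big_table
  ENGINE = InnoDB
  ROW_FORMAT = COMPRESSED
  KEY_BLOCK_SIZE = 8;
```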
I think what might be happening is that the InnoDB page buffer fills up because pages can't be written to disk fast enough, which forces the insert thread to compress pages in the foreground.
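One way to sanity-check this theory is the compression statistics the InnoDB plugin exposes in INFORMATION_SCHEMA (assuming those tables are available in the build):

```sql
-- Per-page-size counts of compression/decompression operations.
-- compress_ops rising in step with the replication lag would
-- suggest compression is happening synchronously on the insert path.
SELECT page_size, compress_ops, compress_ops_ok, compress_time,
       uncompress_ops, uncompress_time
FROM INFORMATION_SCHEMA.INNODB_CMP;
```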
Having InnoDB use only one core/thread to compress pages seems like a very bad idea, especially on 8-16 core boxes (I'm testing on an 8-core box now, but we have 16-core boxes in production).
The InnoDB page compression documentation offers no hints about when InnoDB pages are compressed or in which thread, and there don't seem to be any configuration variables we can change in this regard.
Perhaps a ‘compressed buffer pool only’ option could be interesting.
This way InnoDB would not have to maintain an LRU for compressed/decompressed pages. It could read pages off disk, decompress them, and leave them decompressed in a small buffer. A worker thread (executing on another core) could then compress the pages and move them back into the buffer pool, where they can be stored and written back to disk.
This process could still become disk-bottlenecked, but at least it would use multiple cores.