InnoDB is dead. Long live InnoDB!

I find myself converting more and more customers’ databases to the InnoDB plugin. In one case it was a last resort: disk space was running out, and the plugin’s compression freed up 75% of the space; in another, a slow disk made for I/O bottlenecks, and the plugin’s improvements and compression alleviated the problem; in yet another, I used the above to fight replication lag on a stubborn slave.
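
For reference, here’s roughly what such a conversion looks like – a minimal sketch using MySQL Connector/Python against a 5.1 server with the plugin installed; the connection details, table name and KEY_BLOCK_SIZE are illustrative assumptions, not values from any of the cases above.

    # Sketch: rebuild a table with the InnoDB plugin's compressed row format.
    # Barracuda file format and file-per-table are prerequisites; the SET
    # GLOBAL statements require the SUPER privilege.
    import mysql.connector  # MySQL Connector/Python

    conn = mysql.connector.connect(host="localhost", user="root", database="mydb")
    cur = conn.cursor()

    cur.execute("SET GLOBAL innodb_file_format = 'Barracuda'")
    cur.execute("SET GLOBAL innodb_file_per_table = 1")

    # Rebuild the table compressed; 8KB pages are a common starting point.
    cur.execute("ALTER TABLE big_table ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=8")

    conn.close()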

In all those cases, I needed to justify the move to “new technology”. The questions “Is it GA? Is it stable?” come up a lot. Well, just a few days ago the MySQL 5.1 distribution started shipping with InnoDB plugin 1.0.4, which lends some weight to the stability argument when facing a doubtful customer.

But I realized that wasn’t the point.

Before the InnoDB plugin was first announced, little was going on with InnoDB. There were concerns about the slow or nonexistent progress on this important storage engine, essentially the heart of MySQL. Then the plugin was announced, and everyone was happy.

The point is, since then I have only seen (or been exposed to, at least) progress on the plugin. The way I understand it, the plugin is the main (and only?) focus of development. And this is the significant thing to consider: if you’re sticking with the “old InnoDB”, fine – but it won’t get you much farther; you’re unlikely to see great performance improvements (will 5.4 change that? Will it bring ongoing improvement to the built-in InnoDB?). It may eventually become stale.

Converting to the InnoDB plugin means you’re working with the technology that is in focus. It’s being tested, benchmarked, forked, improved, talked about, explained. I find this to be a major motivation.

So, long live InnoDB Plugin! (At least till next year, that is, when we may all find ourselves migrating to PBXT)

16 thoughts on “InnoDB is dead. Long live InnoDB!”

  1. Synchronous replication isn’t the only way to go… sure, it’s bearable on small clusters, but when you have tens of slaves, it gets painful.

    Here at GenieDB, we’ve done some tricks with asynchronous replication combined with a ‘consistency buffer’: a server, chosen per record by a hash of the primary key, which is updated synchronously and holds the record in memory long enough for the asynchronous replication to happen. That gets around the problem with a small latency increase (the consistency buffer is highly optimised for low latency; we use memcache!) which doesn’t grow as the number of replicas does – roughly as sketched below.
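
    In rough terms, the read/write path looks something like this – a simplified illustration, not our actual implementation; the host names, table and TTL are made-up placeholders.

        # Sketch: writes go to the master (replicated asynchronously as usual)
        # and a copy of the row is also pushed synchronously into a memcached-
        # backed "consistency buffer"; reads consult the buffer first, so a
        # fresh row is visible even before replication reaches the slave.
        import json
        import memcache          # python-memcached: hashes keys across servers
        import mysql.connector   # MySQL Connector/Python

        BUFFER_TTL = 5  # seconds -- long enough for async replication to catch up

        buffer_pool = memcache.Client(["buf1:11211", "buf2:11211"])
        master = mysql.connector.connect(host="master", user="app", database="shop")
        replica = mysql.connector.connect(host="slave1", user="app", database="shop")

        def write_row(pk, data):
            """Write to the master, and synchronously to the consistency buffer."""
            cur = master.cursor()
            cur.execute("REPLACE INTO items (id, payload) VALUES (%s, %s)",
                        (pk, json.dumps(data)))
            master.commit()
            buffer_pool.set("items:%d" % pk, json.dumps(data), time=BUFFER_TTL)

        def read_row(pk):
            """Serve from the buffer while the row is 'hot', else from a replica."""
            cached = buffer_pool.get("items:%d" % pk)
            if cached is not None:
                return json.loads(cached)
            cur = replica.cursor()
            cur.execute("SELECT payload FROM items WHERE id = %s", (pk,))
            row = cur.fetchone()
            return json.loads(row[0]) if row else None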
