InnoDB is dead. Long live InnoDB!

I find myself converting more and more customers’ databases to the InnoDB plugin. In one case, it was a last resort: disk space was running out, and the plugin’s compression freed 75% of it; in another, a slow disk made for I/O bottlenecks, and the plugin’s improvements & compression alleviated the problem; in yet another, I used the above to fight replication lag on a stubborn slave.
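To give a concrete sense of what such a conversion looks like, here is a minimal sketch, assuming the plugin is already loaded; the table name and KEY_BLOCK_SIZE are made up for illustration (compression requires the Barracuda file format and per-table tablespaces, both plugin-only features):

    -- Enable the file format & per-table tablespaces required for compression
    SET GLOBAL innodb_file_format = 'Barracuda';
    SET GLOBAL innodb_file_per_table = 1;

    -- Rebuild the table with compressed pages: 16KB pages are squeezed into
    -- 8KB blocks here; smaller KEY_BLOCK_SIZE values compress more aggressively
    ALTER TABLE my_big_table
      ENGINE=InnoDB
      ROW_FORMAT=COMPRESSED
      KEY_BLOCK_SIZE=8;

How much space this actually saves depends entirely on the data; the 75% figure above was that customer’s result, not a general promise.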

In all those cases, I needed to justify the move to “new technology”. The questions “Is it GA? Is it stable?” come up a lot. Well, just a few days ago the MySQL 5.1 distribution started shipping with InnoDB plugin 1.0.4, which lends some weight to the stability argument when facing a doubtful customer.
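For reference, enabling the plugin on a stock 5.1 server is a small configuration change. A sketch, assuming a Linux build (the shared library name and location may vary by platform; only the storage engine itself is loaded here, though the plugin also ships INFORMATION_SCHEMA tables that can be loaded the same way):

    # my.cnf: replace the builtin InnoDB with the plugin
    [mysqld]
    ignore-builtin-innodb
    plugin-load=innodb=ha_innodb_plugin.so

After a restart, SHOW VARIABLES LIKE 'innodb_version' should report the plugin’s version (the builtin InnoDB exposes no such variable).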

But I realized that wasn’t the point.

Before the InnoDB plugin was first announced, little was going on with InnoDB. There were concerns about the slow/nonexistent progress on this important storage engine, essentially the heart of MySQL. Then the plugin was announced, and everyone was happy.

The point is that since then, I have only seen (or at least been exposed to) progress on the plugin. The way I understand it, the plugin is the main (and only?) focus of development. And this is the significant thing to consider: if you’re sticking with “old InnoDB”, fine – but it won’t take you much farther; you’re unlikely to see great performance improvements (will 5.4 change that? Will the builtin InnoDB see ongoing improvement?). It may eventually become stale.

Converting to the InnoDB plugin means you’re working with the technology in focus. It’s being tested, benchmarked, forked, improved, talked about, explained. I find this to be a major motivation.

So, long live InnoDB Plugin! (At least till next year, that is, when we may all find ourselves migrating to PBXT)

16 thoughts on “InnoDB is dead. Long live InnoDB!”

  1. Despite these improvements there will always be replication lag to fight with standard MySQL replication. A better solution is to use a reliable replication product such as dbShards, which offers synchronous replication and guarantees that transactions are not lost if the master database server fails.

  2. Andy,

    Could you please explain why synchronous replication would help when the slave finds it hard to keep up? Won’t synchronous replication hold back write speed on the master instead?

  3. Synchronous replication means that the commit does not complete until the transaction is replicated to the slave server (not necessarily replicated to the database on the slave server, just to memory or a log file). This does reduce write speed on the master – typically we see a 10% reduction in throughput with dbShards – but it means that if the master fails then no transactions are lost, and the client applications can simply fail over to the slave.

  4. Shlomi,

    Nice post. Sync replication guarantees that there is no lag, as all (or most, if using quorum) servers must commit the transaction at the same time. The lag is eliminated by rate limiting at commit time.

    I have not read much about dbShards but most of the libraries that support sync replication above the db server (in middleware or client libraries) impose limits on concurrency. Does dbShards allow: all transactions to run concurrently, transactions on different tables to run concurrently, group commit?

  5. dbShards does allow concurrent transactions but guarantees that transactions are replicated in the order they are committed against the master database. We don’t specifically do anything to support group commit, so I’m not sure that we support it.
