Notes on row based replication

MySQL’s Row Based Replication (RBR) succeeds (though does not replace) Statement Based Replication (SBR), as of version 5.1.

Anyone who is familiar with replication data drift — the unexplained, growing data difference between master & slave — might wish to look into row based replication. On several servers I manage, the change to RBR has eliminated (to the best of my findings) replication data drift.

This is easily explained, as RBR writes the particular row IDs into the binary log. You no longer need to hope that the statement

DELETE FROM my_table ORDER BY my_column LIMIT 100

acts deterministically on all servers (is my_column UNIQUE?). With row based replication the particular rows deleted on the master are then deleted on the slave.
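As a minimal sketch (table and column names are hypothetical), here is how the ambiguity arises under SBR, and one way to make the statement deterministic without switching to RBR:

```sql
-- Hypothetical table: my_column is not UNIQUE, so rows can tie on it.
CREATE TABLE my_table (
  id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
  my_column DATETIME NOT NULL  -- duplicates make the ORDER BY ambiguous
);

-- Non-deterministic under SBR when my_column has duplicate values:
-- master and slave may pick different rows at the LIMIT boundary.
DELETE FROM my_table ORDER BY my_column LIMIT 100;

-- Deterministic variant: break ties with the unique primary key.
DELETE FROM my_table ORDER BY my_column, id LIMIT 100;
```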

Here are three notes on RBR:

  • Binary log size:

With statement based replication the binary log merely logs the statement text, which is typically short. But RBR logs the row IDs and the changes made to the data.

After we finally got used to cheap, huge disks, and forgot about the need for cleaning up the binary logs, RBR reintroduces bloated logs. On some servers I’m again confounded by the fact that three days’ worth of logs will hog my entire disk space.
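The classic housekeeping knobs still apply, and are worth revisiting once RBR inflates the logs. A minimal sketch (the 3-day window is just an example; newer versions rename the variable to binlog_expire_logs_seconds):

```sql
-- Inspect current binary logs and their sizes:
SHOW BINARY LOGS;

-- Auto-purge binary logs older than 3 days:
SET GLOBAL expire_logs_days = 3;

-- Or do a one-off manual purge:
PURGE BINARY LOGS BEFORE NOW() - INTERVAL 3 DAY;
```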

  • Forgiveness

RBR is not as forgiving as SBR. With SBR, you could DELETE some rows from the master; if they were already missing on the slave, no problem. With RBR, such an incident makes for replication failure (RBR: “I’m supposed to DELETE rows 3, 4, 5 but can’t find them! I can’t do my job like this! Heeelp!”)

This is not a bad thing: you get an early alert that something went wrong.
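That said, if you do want the old forgiving behavior back, MySQL offers a switch for it. A sketch; note that enabling it hides exactly the kind of drift the strict mode would have surfaced:

```sql
-- Make the slave swallow missing-row and duplicate-key errors under RBR.
-- Use with care: this silences the early alert described above.
SET GLOBAL slave_exec_mode = 'IDEMPOTENT';

-- The default, strict behavior (replication breaks on such errors):
SET GLOBAL slave_exec_mode = 'STRICT';
```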

  • Less slave effort

On the up side, the slave does not need to do much thinking. Given a DELETE command, the slave does not need to look up those rows WHERE some_condition IS TRUE. Instead, the master does all the hard work, the slave just gets the IDs of rows to delete.

I find that this boosts replication speed in some scenarios, in particular the scenario of data cleanup: those nightly/weekly purges of old, unused data. If the master looks hard and all it finds are 5 rows to delete, all the slave needs to do is delete those indicated 5 rows. With single-threaded replication this makes a real difference.
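A minimal sketch of such a purge (table and column names are hypothetical): the master does the WHERE-clause scanning once, and under RBR the slave only replays the exact victim rows of each small chunk.

```sql
-- Repeat this statement until it affects 0 rows; each chunk is a small
-- transaction whose exact victim rows are written to the binary log.
DELETE FROM my_log_table
WHERE created_at < NOW() - INTERVAL 30 DAY
ORDER BY created_at, id
LIMIT 1000;

-- Number of rows deleted by the last statement; 0 means the purge is done.
SELECT ROW_COUNT();
```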

13 thoughts on “Notes on row based replication”

  1. Two more notes on RBR:

    Unless you don’t care at all about performance, ensure that all tables have a unique key. Having to do a full table scan to delete or update a row isn’t fast.

    In 5.6 previews you can turn off logging of the before image of a row and potentially greatly reduce the size of the binary log, if and only if you know that all tables have a primary key and are identical in all respects except indexes. More at http://dev.mysql.com/doc/refman/5.6/en/replication-options-binary-log.html#sysvar_binlog_row_image .

    Views are my own, for Oracle’s official view consult a PR person.

    James Day, MySQL Senior Principal Support Engineer, Oracle.

  2. @James,
    In the case of InnoDB, isn’t InnoDB using an internal row_id for these cases? Is it at all guaranteed that a replica would have the same row_id for the same rows?

  3. Hi Shlomi! On your third point it’s not clear that there is always less slave effort. For instance, running a large DELETE with a WHERE can be more efficient than repeating the DELETE for each row. This effect is very pronounced on large tables.

  4. @Daniël,
    Thanks

    @Robert,
    Thanks – this is an interesting issue since I almost don’t do large DELETEs anymore – when they’re large, I split them into smaller chunks, like 1,000 at a time, and do many steps. This makes for smaller transactions, albeit separate ones.
    It is this very scenario where I see an improvement with RBR.

    This is not to say there are no cases where large DELETEs are in place. I’m just not too familiar with these.

    In any case, I did my best to sound vague and ambiguous in the last paragraph, so that no one could blame me for being incorrect 😉
