Notes on row based replication

July 4, 2012

MySQL's Row Based Replication (RBR) succeeds (though does not replace) Statement Based Replication (SBR), as of version 5.1.

Anyone who is familiar with replication data drift -- the unexplained, growing data difference between master & slave -- might wish to look into row based replication. On multiple servers I'm handling, the change to RBR has eliminated (to the best of my findings) replication data drift.

This is easily explained, as RBR writes particular row IDs into the binary log. You no longer need to hope the statement

DELETE FROM my_table ORDER BY my_column LIMIT 100

acts deterministically on all servers (is my_column UNIQUE?). With row based replication the particular rows deleted on the master are then deleted on the slave.
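
You can see this for yourself by decoding row events from the binary log. A minimal sketch (the log file name is just an example):

mysqlbinlog --base64-output=DECODE-ROWS --verbose mysql-bin.000123

The --verbose flag prints commented pseudo-SQL, one WHERE clause per affected row, showing exactly which rows a statement touched.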

Here are three notes on RBR:

  • Binary log size:

With statement based replication the binary log merely records the statement -- typically a short piece of text. RBR, however, logs the row IDs and the changes made to the data.

Just when we finally got used to cheap, huge disks and forgot about the need to clean up binary logs, RBR brings back bloated logs. On some servers I'm again confounded by the fact that 3 days' worth of logs will hog my entire disk space.
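
If disk space is a concern, an aggressive purge policy helps; a sketch (the 3-day retention is just an example):

SET GLOBAL expire_logs_days = 3;

or, as a one-time cleanup:

PURGE BINARY LOGS BEFORE NOW() - INTERVAL 3 DAY;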

  • Forgiveness

RBR is not as forgiving as SBR. With SBR, you could DELETE some rows from the master; if they're already missing on the slave, no problem. With RBR, such an incident makes for replication failure (RBR: "I'm supposed to DELETE rows 3, 4, 5 but can't find them! I can't do my job like this! Heeelp!")

This is not a bad thing: you get an early alert that something went wrong.
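
The alert typically shows up as a halted SQL thread. A quick way to spot it:

SHOW SLAVE STATUS\G

Look for Slave_SQL_Running: No and the Last_SQL_Error text; with RBR a missing row usually surfaces as error 1032 ("Can't find record").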

  • Less slave effort

On the up side, the slave does not need to do much thinking. Given a DELETE command, the slave does not need to look up those rows WHERE some_condition IS TRUE. Instead, the master does all the hard work, the slave just gets the IDs of rows to delete.

I find that this boosts replication speed in some scenarios, in particular the scenario of data cleanup: those nightly/weekly purges of old, unused data. If you look hard and all you find are 5 rows to delete, all the slave needs to do is delete those 5 indicated rows. With single-threaded replication this makes a real difference.
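
As an illustration, a chunked nightly purge might look like this (a sketch; table and column names are made up):

DELETE FROM my_table WHERE created_at < NOW() - INTERVAL 30 DAY LIMIT 1000;

Repeat until zero rows are affected. Under RBR each iteration replicates as the exact set of rows deleted on the master, and the slave applies it by key lookup, with no WHERE evaluation of its own.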

posted in MySQL by shlomi

13 Comments to "Notes on row based replication"

  1. James Day wrote:

    Two more notes on RBR:

    Unless you don't care at all about performance, ensure that all tables have a unique key. Having to do a full table scan to delete or update a row isn't fast.

    In 5.6 previews you can turn off logging of the before image of a row and potentially greatly reduce the size of the binary log, if and only if you know that all tables have a primary key and are identical in all respects except indexes. More at http://dev.mysql.com/doc/refman/5.6/en/replication-options-binary-log.html#sysvar_binlog_row_image .
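
    As a sketch of the 5.6 setting (value per that manual page):

    SET GLOBAL binlog_row_image = 'minimal';

    With 'minimal', the before image carries only enough columns to identify the row (the primary key) and the after image only the changed columns, rather than full rows.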

    Views are my own, for Oracle's official view consult a PR person.

    James Day, MySQL Senior Principal Support Engineer, Oracle.

  2. shlomi wrote:

    @James,
    In the case of InnoDB, isn't InnoDB using an internal row_id for these cases? Is it at all guaranteed that a replica would have the same row_id for the same rows?

  3. Daniël van Eeden wrote:

    Don't forget binlog_format=MIXED as that uses RBR for unsafe statements and SBR for all other statements.
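
    For example:

    SET GLOBAL binlog_format = 'MIXED';

    In MIXED mode the server logs statements as usual and switches to row events only for statements it deems unsafe (nondeterministic).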

  4. Robert Hodges wrote:

    Hi Shlomi! On your third point, it's not clear that there is always less slave effort. For instance, running one large DELETE with a WHERE clause can be more efficient than repeating the DELETE for each individual row. This effect is very pronounced on large tables.

  5. shlomi wrote:

    @Daniël,
    Thanks

    @Robert,
    Thanks - this is an interesting point, since I hardly do large DELETEs anymore: when they're large, I split them into smaller chunks, say 1,000 rows at a time, over many steps. This makes for smaller transactions, albeit separate ones.
    It is this very scenario where I see an improvement with RBR.

    This is not to say there are no cases where large DELETEs are in place. I'm just not too familiar with these.

    In any case, I did my best to sound vague and ambiguous in the last paragraph so that no one could blame me for being incorrect ;)

  6. James Day wrote:

    Shlomi, I don't know of anything that would guarantee that the InnoDB row ID on a slave is the same as that of the same logical row on a master. If the slave is created by cloning binary files it will start that way but different ordering of inserts and deletes on master and slave is something that I expect to cause them to rapidly become out of sync. If the slave is created from a mysqldump backup they could start out very different.

    There have been suggestions to consider using the InnoDB row ID but those practical difficulties have so far blocked it. For some installations it might be a useful clue that could be rejected if the rest of the row turns out not to match.

    Deleting in batches is a good approach. Depending on the application, multiple concurrent batches can be used - say, with very different WHERE clauses, so that the ranges cannot overlap and one delete doesn't block another.

    Smaller batches also make it easier for the purge thread to keep up and help keep all of the undo log records in the buffer pool -- much faster than accumulating a large batch whose records become disk-bound once flushed from the pool.

    At the moment there's no concurrency benefit from multiple batches on the slave; maybe in the future.

  7. Cédric wrote:

    With RBR, you also need to have strictly the same definition for your columns (data type, for example).

  8. Jervin R wrote:

    Of course one also has to note that if you do 'DELETE FROM t' on a table with 1m rows, it will be 1m events on the slave too - I've seen slaves unable to keep up with that, so the SESSION-scoped statement format comes in handy, or try MIXED :-)
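
    That is, something like (assuming you hold the SUPER privilege and know the statement is safe):

    SET SESSION binlog_format = 'STATEMENT';
    DELETE FROM t;
    SET SESSION binlog_format = 'ROW';

    so the bulk delete is logged as one statement rather than a million row events.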

  9. XL wrote:

    If one has to delete all rows in a table, then one should use TRUNCATE TABLE. RBR treats this statement as DDL and thus logs it verbatim.

    For session tables one often has this situation: delete all but some 1000 of a million rows. This should be implemented as:

    create shadow table like the original
    copy rows to keep
    exchange tables by RENAME TABLE

    Again this case is handled efficiently on the slave.
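
    In SQL, a sketch of those steps (table and column names are made up):

    CREATE TABLE sessions_new LIKE sessions;
    INSERT INTO sessions_new SELECT * FROM sessions WHERE last_seen > NOW() - INTERVAL 1 DAY;
    RENAME TABLE sessions TO sessions_old, sessions_new TO sessions;
    DROP TABLE sessions_old;

    The multi-table RENAME swaps the tables in a single atomic step, so readers never observe an empty table.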

  10. Rick James wrote:

    @James,
    Replication is blocked during a large SBR DELETE. This is an important reason for chunking big deletes.

    @James, etc,
    There is no "row_id" for InnoDB tables. It depends on the PRIMARY KEY. If you do not provide such, then one is provided for you. (And I guess it could 'drift'.)

    Deleting in batches:
    http://mysql.rjweb.org/doc.php/deletebig
    Usually an ID range is sufficient to be sure the Slave(s) are deleting the same batch. (There is no way to suppress the warning, even when it is clearly wrong.)

    The best way to do a big delete is via DROP PARTITION, if your table can be structured to take advantage of such. It is instantaneous, atomic, etc.
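
    For example, assuming the table is range-partitioned by month:

    ALTER TABLE my_table DROP PARTITION p201206;

    which discards the partition's rows at once, without generating a row event per deleted row.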

    @XL,
    The shadow table is very efficient, but it can have a flaw -- what if row(s) are INSERTed/DELETEd/UPDATEd during the 'copy'? One solution, albeit clumsy, is a TRIGGER.

    @Shlomi,
    Sorry, I guess we hijacked your RBR discussion to talk about DELETEs.

  11. shlomi wrote:

    @Rick,
    afaik there is an internal row_id for InnoDB. I may be wrong on this.

  12. Rhys wrote:

    RE: The limitation / feature mentioned in Forgiveness. There is the slave_exec_mode variable to get around this.

    http://dev.mysql.com/doc/refman/5.1/en/replication-rbr-usage.html
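
    For reference, that would be something like:

    SET GLOBAL slave_exec_mode = 'IDEMPOTENT';

    In IDEMPOTENT mode the slave suppresses duplicate-key and no-such-row errors instead of halting replication.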

    Probably not a good idea though.

  13. shlomi wrote:

    @Rhys,
    Very nice; good to know this variable exists.
