MySQL’s Row Based Replication (RBR) succeeds (though does not replace) Statement Based Replication (SBR), as of version 5.1.
Anyone who is familiar with replication data drift (the unexplained, growing data difference between master & slave) might wish to look into row based replication. On multiple servers I’m handling, the change to RBR has eliminated (to the best of my findings) replication data drift.
This is easily explained, as RBR writes particular row IDs into the binary log. You no longer need to hope the statement
DELETE FROM my_table ORDER BY my_column LIMIT 100
acts deterministically on all servers (is my_column UNIQUE?). With row based replication the particular rows deleted on the master are then deleted on the slave.
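To make such a statement deterministic under SBR, the ORDER BY must cover a unique key. A minimal sketch (assuming my_table has an integer PRIMARY KEY named id, a hypothetical column here):

  -- Ordering by the primary key guarantees master and slave
  -- delete exactly the same 100 rows:
  DELETE FROM my_table
  ORDER BY id
  LIMIT 100;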
Here are three notes on RBR:
- Binary log size:
With statement based replication the binary log merely logs the statement itself, typically a short text. RBR, in contrast, logs the affected row IDs and the changes made to the data.
After we finally got used to cheap, huge disks and forgot about the need to clean up the binary logs, RBR reintroduces bloated logs. On some servers I’m again confounded by the fact that 3 days’ worth of logs hog my entire disk space.
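A couple of mitigations are worth noting; a sketch (binlog_row_image requires MySQL 5.6 or later):

  -- Log only the columns needed to identify and apply the change,
  -- rather than full before/after row images (MySQL 5.6+):
  SET GLOBAL binlog_row_image = 'MINIMAL';
  -- Purge binary logs automatically after 3 days:
  SET GLOBAL expire_logs_days = 3;
  -- Or purge by hand:
  PURGE BINARY LOGS BEFORE NOW() - INTERVAL 3 DAY;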
- Forgiveness
RBR is not as forgiving as SBR. With SBR, you could DELETE some rows from the master; if they’re already missing on the slave, no problem there. With RBR, such an incident makes for replication failure (RBR: “I’m supposed to DELETE rows 3, 4, 5 but can’t find them! I can’t do my job like this! Heeelp!”)
This is not a bad thing: you get an early alert that something went wrong.
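If you do want SBR-like forgiveness, there is a knob for it. A sketch (note this masks exactly the early alert just mentioned):

  -- Make the slave ignore missing-row and duplicate-key errors
  -- when applying row-based events (use with care):
  SET GLOBAL slave_exec_mode = 'IDEMPOTENT';
  -- The strict default:
  SET GLOBAL slave_exec_mode = 'STRICT';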
- Less slave effort
On the up side, the slave does not need to do much thinking. Given a DELETE command, the slave does not need to look up the rows WHERE some_condition IS TRUE. Instead, the master does all the hard work; the slave just gets the IDs of the rows to delete.
I find that this boosts replication speed in some scenarios, in particular the scenario of data cleanup: the nightly/weekly purging of old, unused data. If the master looks hard and all it finds are 5 rows to delete, all the slave needs to do is delete those indicated 5 rows. With single-threaded replication this makes a real difference.
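Such a purge might look as follows; a sketch (the sessions table and its id and last_seen columns are hypothetical):

  -- Delete expired rows in modest chunks; under RBR each chunk
  -- replicates as the exact set of rows deleted on the master:
  DELETE FROM sessions
  WHERE last_seen < NOW() - INTERVAL 30 DAY
  ORDER BY id
  LIMIT 1000;
  -- Repeat until ROW_COUNT() reports 0 rows affected.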
Shlomi, I don’t know of anything that would guarantee that the InnoDB row ID on a slave is the same as that of the same logical row on a master. If the slave is created by cloning binary files, it will start out that way; but different ordering of inserts and deletes on master and slave is something I expect would rapidly drive them out of sync. If the slave is created from a mysqldump backup, they could start out very different.
There have been suggestions to consider using the InnoDB row ID, but those practical difficulties have so far blocked it. For some installations it might be a useful clue, one that could be rejected if the rest of the row turns out not to match.
Deleting in batches is a good approach. Depending on the application, multiple concurrent batches can be used: say, with very different WHERE clauses, so that no part of the ranges can overlap and one delete cannot block another.
Smaller batches also have the advantage of making it easier for the purge thread to keep up, and of helping to keep all of the undo log records in the buffer pool; that is much faster than accumulating a large batch, which can become disk-bound once its undo records have been flushed from the pool.
At the moment a slave gets no concurrency benefit from multiple batches; maybe it will in the future.
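Non-overlapping batches of the kind described might look like this (a sketch; the logs table, its columns, and the ranges are hypothetical):

  -- Session 1 purges a low ID range:
  DELETE FROM logs
  WHERE id BETWEEN 1 AND 100000 AND created < '2011-01-01'
  LIMIT 1000;
  -- Session 2 purges a disjoint high ID range; since the ranges
  -- cannot overlap, one delete does not block the other:
  DELETE FROM logs
  WHERE id BETWEEN 100001 AND 200000 AND created < '2011-01-01'
  LIMIT 1000;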
With RBR, you also need to have strictly the same definition for your columns (data type, for example).
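One way to catch such mismatches is to compare column definitions on master and slave; a sketch (run it on both servers and diff the output; db_name and tbl_name are placeholders):

  -- Compare column types between master and slave:
  SELECT COLUMN_NAME, COLUMN_TYPE, IS_NULLABLE
  FROM information_schema.COLUMNS
  WHERE TABLE_SCHEMA = 'db_name' AND TABLE_NAME = 'tbl_name'
  ORDER BY ORDINAL_POSITION;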
Of course, one also has to note that if you do ‘DELETE FROM t’ on a table with 1m rows, it will be 1m events on the slave too. I’ve seen slaves unable to keep up with that, so switching the statement format at the SESSION level comes in handy; or try MIXED 🙂
If one has to delete all rows in a table, then one should use TRUNCATE TABLE. RBR treats this statement as DDL and thus logs it verbatim.
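Both suggestions, as a sketch (changing binlog_format per session requires the SUPER privilege):

  -- Option 1: log this one big delete as a statement:
  SET SESSION binlog_format = 'STATEMENT';
  DELETE FROM t;
  SET SESSION binlog_format = 'ROW';
  -- Option 2: when all rows go, TRUNCATE is logged as DDL,
  -- verbatim, even under RBR:
  TRUNCATE TABLE t;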
For session tables one often has this situation: delete all but some 1000 of a million rows. This should be implemented as:
- create a shadow table LIKE the original
- copy the rows to keep
- exchange the tables via RENAME TABLE
Again this case is handled efficiently on the slave.
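Spelled out, the shadow table swap might look like this (a sketch; the sessions table and the retention condition are hypothetical):

  CREATE TABLE sessions_shadow LIKE sessions;
  -- Copy only the ~1000 rows worth keeping:
  INSERT INTO sessions_shadow
  SELECT * FROM sessions
  WHERE last_seen >= NOW() - INTERVAL 1 DAY;
  -- Swap the tables atomically:
  RENAME TABLE sessions TO sessions_old, sessions_shadow TO sessions;
  DROP TABLE sessions_old;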
@James,
Replication is blocked during a large SBR DELETE. This is an important reason for chunking big deletes.
@James, etc,
There is no “row_id” for InnoDB tables; RBR depends on the PRIMARY KEY. If you do not provide one, then one is provided for you. (And I guess it could ‘drift’.)
Deleting in batches:
http://mysql.rjweb.org/doc.php/deletebig
Usually an ID range is sufficient to be sure the Slave(s) are deleting the same batch. (There is no way to suppress the unsafe-statement warning, even when it is clearly wrong.)
The best way to do a big delete is via DROP PARTITION, if your table can be structured to take advantage of such. It is instantaneous, atomic, etc.
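For example, a log table partitioned by day can shed a whole day at once (a sketch; table and partition layout are hypothetical):

  CREATE TABLE logs (
    id BIGINT NOT NULL AUTO_INCREMENT,
    created DATE NOT NULL,
    msg VARCHAR(255),
    PRIMARY KEY (id, created)
  ) PARTITION BY RANGE (TO_DAYS(created)) (
    PARTITION p20110101 VALUES LESS THAN (TO_DAYS('2011-01-02')),
    PARTITION p20110102 VALUES LESS THAN (TO_DAYS('2011-01-03')),
    PARTITION pmax VALUES LESS THAN MAXVALUE
  );
  -- Instantaneous, atomic bulk delete:
  ALTER TABLE logs DROP PARTITION p20110101;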
@XL,
The shadow table is very efficient, but it can have a flaw — what if row(s) are INSERTed/DELETEd/UPDATEd during the ‘copy’? One solution, albeit clumsy, is a TRIGGER.
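Such triggers might look roughly like this (a sketch only; the sessions_shadow table and the id, user_id, last_seen columns are hypothetical, and an UPDATE trigger would be needed as well):

  -- Mirror rows inserted during the copy into the shadow table:
  CREATE TRIGGER sessions_mirror_ins AFTER INSERT ON sessions
  FOR EACH ROW
    REPLACE INTO sessions_shadow (id, user_id, last_seen)
    VALUES (NEW.id, NEW.user_id, NEW.last_seen);
  -- Mirror deletes as well:
  CREATE TRIGGER sessions_mirror_del AFTER DELETE ON sessions
  FOR EACH ROW
    DELETE FROM sessions_shadow WHERE id = OLD.id;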
@Shlomi,
Sorry, I guess we hijacked your RBR discussion to talk about DELETEs.