Parameters to use on mysqldump

October 13, 2008

mysqldump is commonly used for making a MySQL database backup or for setting up replication.

As with all MySQL binaries, there are quite a few parameters to mysqldump. Some are just niceties, but some flags are a must. Of course, choosing which parameters to use greatly depends on your requirements, database setup, network capacity, etc.

Here is my usual setup for mysqldump. The parameters below apply for an InnoDB based schema (no MyISAM, Memory tables). Parameters can be specified on the command line, or under the [mysqldump] section (not [mysqld] — mysqldump reads the [client] and [mysqldump] groups) in the MySQL configuration file.

mysqldump -u dump_user -p -h db_host --routines --master-data --single-transaction --skip-add-locks --skip-lock-tables --default-character-set=utf8 --compress my_db
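The same options can also live in the configuration file. A minimal sketch (the section name follows mysqldump's option handling; the values are illustrative):

```ini
# Read by mysqldump — it picks up the [client] and [mysqldump]
# groups, not [mysqld]. Values here are illustrative.
[mysqldump]
user = dump_user
host = db_host
routines
master-data
single-transaction
skip-add-locks
skip-lock-tables
default-character-set = utf8
compress
```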

Let's review these parameters and see their effect:

  • -u or --user: This is the user who initiates the dump. Depending on other parameters, the user may need to have quite a few privileges, such as SELECT, RELOAD, FILE, REPLICATION CLIENT, etc. Since I do not usually allow for remote root access into MySQL, I create a temporary user solely for the purpose of the dump (many times it's a one-time action), for the specific machine from which the dump is run, and provide this user with all necessary permissions.
  • -h or --host: I try not to dump from the same machine on which MySQL is running. If I do, I prefer to dump onto a different disk from the one on which the data and log files reside. The dump itself may create a heavy load on the machine (setting locks, performing lots of non-cached IO operations). Since the target of the dump is usually to create a backup on another machine, or to set up replication on another machine, the dump is best not run from the MySQL machine itself.
  • --routines: It is really an annoyance to have to remember this flag. In contrast to --triggers, which is TRUE by default, the --routines parameter is FALSE by default, which means if you forget it, you don't get the stored functions and procedures in your schema.
  • --master-data: I always enable binary logs on the MySQL nodes I work on. Binary logs may lead to more IO operations (writing binary logs obviously makes for more disk writes, but it also disables some InnoDB optimizations) and may consume more disk space (I once worked with a company which had such a burst of traffic that the binary logs completely filled their disk in less than one day). If binary logs are enabled, the --master-data parameter allows for easy replication setup: the dump includes the CHANGE MASTER TO MASTER_LOG_FILE='...', MASTER_LOG_POS=... statement, so there's no need to do stuff like SHOW MASTER STATUS on the dumped node. Optionally, you can set --master-data=2 to have the statement commented out.
  • --single-transaction --skip-add-locks --skip-lock-tables: When working with transactional-only storage engines (InnoDB is the most popular choice, but new engines are coming: Falcon, PBXT, Transactional-Maria, SolidDB and more), these parameters allow for a non-interruptive backup, one which does not place read locks on all tables. It is possible to keep reading from and writing to the database while mysqldump is running with --single-transaction. Running in this mode does have its penalty: more IO operations (due to MVCC's duplication of data while many transactions access the same rows for read/write). The server is likely to perform more slowly during the dump.
  • --default-character-set=utf8: I've seen more MySQL installations in which world-wide textual data was stored in the Latin1 charset than I can remember. Many developers, who test using plain English data, are not even aware of the issues arising from later converting the data to utf8. But even those who are, are usually unaware of the necessity to configure the character set on a per-connection basis, or for their specific clients (JDBC or PHP connectors, etc.). mysqldump is no different: if you have non-Latin text in your tables, always remember to set this option.
  • --compress: When dumping to another machine, especially a remote one, use this option to compress the data between the MySQL server and the mysqldump client. This makes for more CPU work, but CPU is usually cheap nowadays, and the compression may well save you hours of network transfer time.
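The temporary dump user mentioned under -u/--user can be created with something like the following sketch. The user name, client host and database name are placeholders; the exact privilege list depends on which flags you use:

```sql
-- Hypothetical names; run as an administrative user on the server.
CREATE USER 'dump_user'@'client_host' IDENTIFIED BY 'secret';
-- SELECT and LOCK TABLES cover a plain dump; RELOAD and
-- REPLICATION CLIENT are needed for flags like --master-data.
GRANT SELECT, LOCK TABLES ON my_db.* TO 'dump_user'@'client_host';
GRANT RELOAD, REPLICATION CLIENT ON *.* TO 'dump_user'@'client_host';
-- Drop the user once the one-time dump is done:
-- DROP USER 'dump_user'@'client_host';
```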
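To illustrate the --master-data behavior described above: the dump begins with a CHANGE MASTER TO statement, so setting up a replica amounts to importing the dump, filling in the connection details, and starting the slave. The log file name, position, host and credentials below are placeholders:

```sql
-- Written automatically near the top of the dump by --master-data
-- (commented out if --master-data=2); values are placeholders:
CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000123', MASTER_LOG_POS=4567;

-- On the replica, after importing the dump, supply the connection
-- details (which the dump does not include) and start replicating:
CHANGE MASTER TO MASTER_HOST='db_host', MASTER_USER='repl_user', MASTER_PASSWORD='secret';
START SLAVE;
```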
  • George

    Explanations are okay but please define more.

  • @George - these are the primary flags I use.

  • Thanks! Finally a full explanation of all important parameters for a non locking MySQL dump using InnoDB tables!

  • Trent

    Hi, thanks for this, I've been searching for something like this.

    I am having issues dumping a large table, which in turn freezes the site while the table is locked. I use --single-transaction but not --skip-lock-tables, which I will try tomorrow.

    Question: using --skip-lock-tables, will this mean the data between tables can become out of sync? And can updates still occur, without the site freezing while waiting for a response from MySQL, while the table is being dumped?

  • --single-transaction implies --skip-lock-tables.
    Nothing comes out of sync (we are talking about InnoDB tables, right?), updates can still occur while the dump is being made, and no inconsistencies are introduced.
    If your dataset is too big for mysqldump (usually somewhere around a few dozen GB), then you should consider Xtrabackup or an LVM snapshot as an alternative.

  • Trent

    Thank you Shiomo,

    My DB is 18 GB, and the main table that causes the freezing issue on the site is around 8 GB.

    I ignore the MyISAM tables in the dump to ensure all dumped tables are InnoDB.

    I use in a perl script:

    mysqldump --ignore-table=dbname.table1 --ignore-table=dbname.table2 --ignore-table=dbname.table3 --ignore-table=dbname.table4 --single-transaction --quick dbname > /backup/day_`date +%d`_innodb.sql

    It starts to freeze about half way through the large table dumping. The table takes inserts, updates, and can take some deletes while it's being dumped.

    I will look into the alternatives as suggested; however, if you know of any other reason this may be happening, I would be very interested.

  • @Trent,
    This is due to InnoDB's MVCC implementation, where a first write to some row creates a new version of that row, with a second write waiting on the first. If a long-running SELECT is taking place, the first version can never be merged back in, and the second write blocks.

  • Trent

    @shiomi OK so I have serious issues then 🙂

    I think I am going to explore a slave solution as the Database is going to continue to grow, and I need to ensure the backups safely happen.

    Thank you for your advice


  • Hi,

    thank you for the good info, especially compared to the MySQL documentation. I had forgotten how to set the character set, and most people still don't do it. Yours is the first Google hit that is not idiotic.
