Reasons to use innodb_file_per_table

When working with InnoDB, you have two options for managing tablespace storage:

  1. Throw everything in one big file (optionally split).
  2. Have one file per table.

I will discuss the advantages and disadvantages of the two options, and will strive to convince you that innodb_file_per_table is preferable.

A single tablespace

Having everything in one big file means all tables and indexes, from all schemas, are ‘mixed’ together in that file.

This allows for the following nice property: free space can be shared between different tables and different schemas. Thus, if I purge many rows from my log table, the now unused space can be occupied by new rows of any other table.

This same nice property also translates to a not so nice one: data can be greatly fragmented across the tablespace.

An annoying property of InnoDB’s tablespaces is that they never shrink. So after purging those rows from the log table, the tablespace file (usually ibdata1) still keeps the same storage. It does not release storage to the file system.

I’ve seen more than once how certain tables are left unwatched, growing until disk space reaches 90% and SMS notifications start beeping all around.

There’s little to do in this case. Well, one can always purge the rows. Sure, the space would be reused by InnoDB. But having a file which consumes some 80-90% of disk space is a performance catastrophe. It means the disk’s read/write head needs to travel large distances. Overall disk performance runs very low.

The best way to solve this is to set up a new slave (after purging the rows) and load a dump of the data into that slave.

InnoDB Hot Backup

The funny thing is, the ibbackup utility copies the tablespace file as it is. If it was 120GB, of which only 30GB are used, you still get 120GB backed up and restored.

mysqldump, mk-parallel-dump

mysqldump would be your best choice if you only had the original machine to work with. Assuming you’re only using InnoDB, a dump with --single-transaction will do the job. Or you can use mk-parallel-dump to speed things up (depending on your dump method and accessibility needs, mind the locking).
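As a sketch, such a dump could look like the following. The host name, user, and schema name are placeholders for your own; --single-transaction starts a consistent snapshot so InnoDB tables are dumped without locking them for the duration of the dump.

```shell
# Hypothetical host, user, and schema; adjust to your environment.
mysqldump --host=db1.example.com --user=backup -p \
  --single-transaction --routines --triggers \
  my_schema > my_schema.sql
```

Note that --single-transaction only guarantees consistency for transactional tables; any MyISAM tables in the dump are not covered by the snapshot.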

innodb_file_per_table

With this parameter set, a .ibd file is created per table. What we get is this:

  • Tablespace is not shared among different tables, and certainly not among different schemas.
  • Each file is considered a tablespace of its own.
  • Again, tablespace never reduces in size.
  • It is possible to regain space per tablespace.

Wait. The last two seem conflicting, don’t they? Let’s explain.

In our log table example, we purge many rows (up to 90GB of data is removed). The .ibd file does not shrink. But we can do:

ALTER TABLE log ENGINE=InnoDB;

What will happen is that a new, temporary file is created, into which the table is rebuilt. Only existing data is added to the new table. Once complete, the original table is removed, and the new table is renamed to the original table’s name.

Sure, this takes a long time, during which the table is completely locked: no writes and no reads allowed. But still – it allows us to regain disk space.

With the new InnoDB plugin, disk space is also regained when executing a TRUNCATE TABLE log statement.

Fragmentation is not as bad as in a single tablespace: the data is limited within the boundaries of a smaller file.

Monitoring

One other nice thing about innodb_file_per_table is that it is possible to monitor table sizes at the file system level. You don’t need access to MySQL, nor to use SHOW TABLE STATUS or query the INFORMATION_SCHEMA. You can just look up the 10 largest files under your MySQL data directory (and its subdirectories) and monitor their sizes. You can see which table grows fastest.
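A minimal sketch of such monitoring, assuming a default Linux data directory (adjust the path to the datadir in your my.cnf):

```shell
# largest_ibd DIR N — print the N largest .ibd files under DIR, biggest first.
# du -k reports size in kilobytes; sort -rn orders numerically, descending.
largest_ibd() {
  find "$1" -name '*.ibd' -exec du -k {} + | sort -rn | head -"${2:-10}"
}

# Example usage (run as a user that can read the data directory):
# largest_ibd /var/lib/mysql 10
```

Run periodically (e.g. from cron) and diffed against the previous output, this shows at a glance which tables are growing fastest.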

Backup

Lastly, it is not yet possible to back up single InnoDB tables by copying the .ibd files. But hopefully work will be done in this direction.

37 thoughts on “Reasons to use innodb_file_per_table”

  1. Your post forgets to mention some very important reasons why innodb_file_per_table is bad.

    The most critical of these is the necessary fsync which, if running once per second, now has to occur on ‘n’ open files rather than on a single file.

    A disk-I/O-bound system is the most common resource bottleneck of a popular system; minimizing unnecessary accesses to your slowest physical resource should be a priority.

  2. One of the biggest headaches with InnoDB is indeed the monolithic ibdata1 file. To shrink that file down to nothing but InnoDB metadata, do the following:

    1. Run SELECT DISTINCT table_schema FROM information_schema.tables WHERE engine='InnoDB';
    db-1
    db-2

    db-n
    2. Run mysqldump of only those databases.
    mysqldump -h… -u… -p… --routines --triggers --databases db1 db2 … dbn > InnoDBData.sql

    Note: If there are MyISAM tables in the dump, no problem. They will get put back when reloading. The dump file will also contain the CREATE DATABASE commands for db1, db2, …, dbn.

    3. Drop those databases
    DROP DATABASE db1;
    DROP DATABASE db2;

    DROP DATABASE dbn;

    4. Run ‘service mysql stop’

    5. Delete InnoDB files
    rm -f /var/lib/mysql/ibdata1
    rm -f /var/lib/mysql/ib_logfile0
    rm -f /var/lib/mysql/ib_logfile1

    6. Add innodb_file_per_table to /etc/my.cnf

    7. Make sure the innodb_data_file_path setting is ibdata1:10M:autoextend

    8. Run ‘service mysql start’
    This rebuilds ibdata1 and the ib log files
    ibdata1 is now 10MB

    9. Run ‘source InnoDBData.sql’ in mysql
    This will reload the InnoDB data

    Now ibdata1 is defragged (in a convoluted way)

    Going forward, run ‘OPTIMIZE TABLE’ on all InnoDB tables periodically to defragment the .ibd files. ibdata1 will only contain InnoDB metadata.
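    The my.cnf fragment for steps 6 and 7 above would look something like this (the file path and values are the defaults assumed in the procedure; adjust to your setup):

    ```ini
    [mysqld]
    # Create one .ibd tablespace file per InnoDB table
    innodb_file_per_table
    # Start ibdata1 at 10MB and let it grow as needed (metadata only)
    innodb_data_file_path = ibdata1:10M:autoextend
    ```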

  3. @Rolando,

    Sure, if you can take your server down for that, then life is good.
    Also, if you can allow for periodic OPTIMIZE TABLE, life is good, again.
    Using master-master replication may help out on this, and shorten your downtime.
