Some of this is old stuff, but more people are now converting to the InnoDB plugin, so as to enjoy table compression and other performance boosts. The same holds for people converting to Percona’s XtraDB. The InnoDB plugin’s new Barracuda file format requires innodb_file_per_table: no more shared tablespace file.
So your ibdata1 file is some 150GB, and it won’t shrink. Really, it won’t shrink. You set innodb_file_per_table=1, run ALTER TABLE t ENGINE=InnoDB (optionally with ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=8), and you get all your tables in per-table .ibd files.
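As a sketch (table name `t` is a placeholder; assumes the InnoDB plugin is loaded and innodb_file_per_table is already enabled), the per-table conversion is just:

```sql
-- Rebuilds the table into its own .ibd file, outside ibdata1:
ALTER TABLE t ENGINE=InnoDB;

-- Or, with Barracuda compression (requires innodb_file_format=Barracuda):
ALTER TABLE t ENGINE=InnoDB ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=8;
```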
But the original ibdata1 file is still there. It has to be there, don’t delete it! It contains more than your old data.
InnoDB tablespace files never shrink; it’s a long-standing annoyance. The only way around it, if you need the space back, is to completely drop them and start afresh. That’s one of the nice things about file-per-table: an ALTER TABLE actually creates a new tablespace file and drops the original one.
The procedure
The procedure is somewhat painful:
- Dump everything logically (either use mysqldump, mk-parallel-dump, or do it your own way)
- Erase your data (literally, delete everything under your datadir)
- Generate a new empty database
- Load your dumped data.
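A minimal sketch of the above, using mysqldump. All paths, the datadir location and the init commands are assumptions for your setup — double-check them before deleting anything:

```shell
# 1. Dump everything logically
mysqldump --all-databases --routines --triggers > /backup/full_dump.sql

# 2. Stop MySQL and erase the datadir (verify the path first!)
mysqladmin shutdown
rm -rf /var/lib/mysql/*

# 3. Generate a new, empty database and restart the server
mysql_install_db --user=mysql
mysqld_safe &

# 4. Load the dumped data
mysql < /backup/full_dump.sql
```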
Using replication
Replication makes this less painful. Set up a slave and let it catch up with the master.
- Stop your slave.
- Make sure to back up the replication position (e.g. record the SHOW SLAVE STATUS output in a safe location, or copy the master.info file).
- Perform the dump-erase-generate-load steps on the slave.
- Reattach the slave to the master using the saved position.
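Saving the position and reattaching might look like this. The log file name and position below are placeholders — use the values from your own SHOW SLAVE STATUS output:

```sql
-- Before the dump-erase-generate-load cycle:
STOP SLAVE;
SHOW SLAVE STATUS\G  -- record Relay_Master_Log_File and Exec_Master_Log_Pos

-- After reloading, reattach using the recorded values:
CHANGE MASTER TO
  MASTER_LOG_FILE = 'mysql-bin.000123',  -- placeholder
  MASTER_LOG_POS  = 456789;              -- placeholder
START SLAVE;
```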
For this to succeed you must retain enough binary logs on the master to cover the entire dump-and-load period, which could be lengthy.
Upgrading to Barracuda
If you wish to upgrade your InnoDB tables to Barracuda format, my advice is this:
- Follow the steps above to generate a file-per-table working slave
- Stop the slave
- Configure skip_slave_start
- Restart MySQL
- One by one, ALTER the tables into Barracuda format (ROW_FORMAT=DYNAMIC or ROW_FORMAT=COMPRESSED)
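The per-table ALTER might look like this (table name is hypothetical; innodb_file_format must be Barracuda for these row formats to take effect):

```sql
SET GLOBAL innodb_file_format = 'Barracuda';

-- Uncompressed Barracuda:
ALTER TABLE my_table ENGINE=InnoDB ROW_FORMAT=DYNAMIC;

-- Compressed:
ALTER TABLE my_table ENGINE=InnoDB ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=8;
```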
Note that if you’re about to use table compression, the ALTER statements become considerably slower the stronger the compression (i.e. the smaller the KEY_BLOCK_SIZE).
If your dataset is very large, and you can’t retain that many binary logs, you may wish to break step 5 above into:
- ALTER a large table
- Restart MySQL
- Start slave, wait for it to catch up
- Restart MySQL again
and do the same for all large tables.
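Under stated assumptions (the table names, the init script path, and parsing Seconds_Behind_Master to detect catch-up are all placeholders for your environment), the cycle could be scripted roughly as:

```shell
for t in big_table_1 big_table_2 big_table_3; do  # hypothetical table names
  mysql mydb -e "ALTER TABLE $t ENGINE=InnoDB ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=8"
  /etc/init.d/mysql restart
  mysql -e "START SLAVE"
  # wait for the slave to catch up before the next restart
  while [ "$(mysql -e 'SHOW SLAVE STATUS\G' | awk '/Seconds_Behind_Master/ {print $2}')" != "0" ]; do
    sleep 10
  done
  /etc/init.d/mysql restart
done
```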
Why all these restarts?
I’ve been upgrading to Barracuda for a long time now. I have clearly noticed that an ALTER into COMPRESSED format runs considerably slower after the slave has done some “real work”. This particularly relates to the last, “renaming table” stage. Earlier InnoDB plugin versions had a bug which made this stage hang; it was solved. But this last, weird stage still takes time: the new replacement table is complete, it has actually been renamed in place of the old table, the old table has been renamed into something like “#sql-12345.ibd”, and all that remains is to drop it, and… well, it takes time.
My observation is that it works faster on a freshly started server, which is why I take the bother to restart MySQL before each large table conversion.
I’ve never set innodb_file_format_max, so apparently it’s not necessary. From the docs:
“… If the server creates or opens a table with a “higher” file format, it sets the value of innodb_file_format_max to that format. “
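You can check what the server has bumped it to:

```sql
SHOW GLOBAL VARIABLES LIKE 'innodb_file_format_max';
```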