Comments on: Upgrading to Barracuda & getting rid of huge ibdata1 file
https://shlomi-noach.github.io/blog/mysql/upgrading-to-barracuda-getting-rid-of-huge-ibdata1-file
Blog by Shlomi Noach

By: shlomi (Tue, 28 May 2013 05:12:15 +0000)
https://shlomi-noach.github.io/blog/mysql/upgrading-to-barracuda-getting-rid-of-huge-ibdata1-file/comment-page-1#comment-209893

I’ve never set “innodb_file_format_max”, so apparently it is not necessary. From the docs:

“… If the server creates or opens a table with a “higher” file format, it sets the value of innodb_file_format_max to that format. “
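To illustrate, a minimal my.cnf sketch along those lines — only innodb_file_format is set explicitly, and innodb_file_format_max is left for the server to raise on its own:

```ini
# my.cnf — enabling Barracuda (MySQL 5.1 with the InnoDB plugin, or 5.5)
[mysqld]
innodb_file_per_table = 1
innodb_file_format    = Barracuda
# innodb_file_format_max is intentionally not set; the server raises it
# automatically when it creates or opens a table with a "higher" format.
```

You can then verify both values with: SHOW GLOBAL VARIABLES LIKE 'innodb_file_format%';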

By: Avi Vainshtein (Mon, 27 May 2013 15:25:57 +0000)
https://shlomi-noach.github.io/blog/mysql/upgrading-to-barracuda-getting-rid-of-huge-ibdata1-file/comment-page-1#comment-209754

Many thanks for the post, it’s very useful.

Is it necessary to set both innodb_file_format and innodb_file_format_max to Barracuda in my.cnf?

best regards,
A.V.

By: Getting rid of huge ibdata file, no dump required | code.openark.org (Tue, 22 May 2012 05:33:22 +0000)
https://shlomi-noach.github.io/blog/mysql/upgrading-to-barracuda-getting-rid-of-huge-ibdata1-file/comment-page-1#comment-93080

[…] Upgrading to Barracuda & getting rid of huge ibdata1 file […]

By: Ronald Bradford (Wed, 16 Feb 2011 22:57:14 +0000)
https://shlomi-noach.github.io/blog/mysql/upgrading-to-barracuda-getting-rid-of-huge-ibdata1-file/comment-page-1#comment-30938

Thanks for the post. You reiterated my earlier post about the same topic.

http://ronaldbradford.com/blog/leveraging-the-innodb-plugin-2011-02-11/

By: shlomi (Wed, 16 Feb 2011 14:31:45 +0000)
https://shlomi-noach.github.io/blog/mysql/upgrading-to-barracuda-getting-rid-of-huge-ibdata1-file/comment-page-1#comment-30889

Gabi,
I think this task is not hard to achieve. It’s a one-time job; I’ve done plenty of these before. We can discuss more privately if you like; it’s a bit too lengthy to explain everything here.

By: Gabi Davis (Wed, 16 Feb 2011 13:29:52 +0000)
https://shlomi-noach.github.io/blog/mysql/upgrading-to-barracuda-getting-rid-of-huge-ibdata1-file/comment-page-1#comment-30886

Hi shlomi,
Thanks for the timely reply 🙂
Our entire dataset is almost double that; the InnoDB file is 350G on its own. And although two days of traffic is huge, there are definitely ways to overcome the disk issue. But in general, it’s sad to realize that this is the only way to go. We were hoping to dig deeper and find something else we could do that wouldn’t require a dump and reload of the entire dataset.
If we have to go this way, we’ll definitely take the opportunity to make some structural changes we had been planning for a long time but had not run, due to the size of the tables and the nature of ‘alter table’; and probably also upgrade to 5.5.
As for fast index creation: that is definitely great, but there’s also the problem of not being able to add/drop columns (I have to read more about the plugin to see whether it says anything about that).
Thanks again,
Gabi

By: shlomi (Wed, 16 Feb 2011 10:08:23 +0000)
https://shlomi-noach.github.io/blog/mysql/upgrading-to-barracuda-getting-rid-of-huge-ibdata1-file/comment-page-1#comment-30876

@Gabi,

When working with a slave, you can benefit from using mk-parallel-dump + restore. I’ve seen these speed up export/import time by a factor of about 6x.

Assuming your dataset size is indeed 350GB, I would guess this should take a day or two at most (a pure guess, based on experience with commodity hardware). Why don’t you try it offline and see how much time the export/import takes, without actually taking down a slave? I mean: export from a (stopped) slave, then import on a local office machine, and measure the time for export + import. Then you can see whether you have enough room to store the binary logs.

Since I’m assuming your dataset always grows, I should add: the sooner the better.
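To give a rough idea of the per-table parallelism behind mk-parallel-dump, here is a sketch using plain mysqldump and xargs instead of the tool itself; the database name, degree of parallelism, and output path in the usage line are all illustrative:

```shell
# Dump each table of a database into its own file, several tables at a time.
# This is the gist of what mk-parallel-dump automates (it adds chunking,
# compression, and more).
parallel_dump() {
    db=$1; jobs=$2; outdir=$3
    mkdir -p "$outdir"
    # List tables, then fan out up to $jobs concurrent mysqldump processes.
    mysql -N -e "SHOW TABLES" "$db" \
      | xargs -P"$jobs" -I{} sh -c \
          'mysqldump "$1" "$2" > "$3/$2.sql"' _ "$db" {} "$outdir"
}

# e.g. four concurrent per-table dumps:
# parallel_dump mydb 4 /backup/parallel-dump
```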

With regard to altering without a full table rewrite: the InnoDB plugin’s Barracuda format allows for fast index creation.
Alternatively, you could use Facebook’s OSC, or openark kit’s oak-online-alter-table.
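As a sketch of the distinction (table and column names here are made up): with the plugin’s fast index creation, adding a secondary index no longer copies the table, while adding or dropping a column in 5.1/5.5 still does — which is where the online-alter tools come in:

```sql
-- Fast index creation (InnoDB plugin / Barracuda): builds the secondary
-- index without copying the whole table:
ALTER TABLE orders ADD INDEX idx_created_at (created_at);

-- Adding or dropping a column still performs a full table copy in 5.1/5.5,
-- hence tools like Facebook's OSC or oak-online-alter-table:
ALTER TABLE orders ADD COLUMN note VARCHAR(255);
```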

By: Gabi Davis (Wed, 16 Feb 2011 09:37:41 +0000)
https://shlomi-noach.github.io/blog/mysql/upgrading-to-barracuda-getting-rid-of-huge-ibdata1-file/comment-page-1#comment-30867

Hi shlomi,
Good post.
However, when handling large and very busy databases, restoring from a dump is not an option. The time to dump and load, as well as the storage required to keep binary log files until the entire operation is over, is a deal breaker.
Are there no other ways to do this? We’re carrying a 350G file on our back, and the only option we currently have is to keep carrying it, even though we plan to have no tables left in it in the near future.
Also, are there no plans in the MySQL world to finally allow altering tables without a rewrite of the full table? That’s a huge downside of MySQL as I see it.

By: shlomi (Tue, 15 Feb 2011 18:18:44 +0000)
https://shlomi-noach.github.io/blog/mysql/upgrading-to-barracuda-getting-rid-of-huge-ibdata1-file/comment-page-1#comment-30796

@Daniël,
I’m wondering about your NetApp setup. Do you have your different filesystems mapped to different disks on your NetApp?

By: Daniël van Eeden (Tue, 15 Feb 2011 13:16:38 +0000)
https://shlomi-noach.github.io/blog/mysql/upgrading-to-barracuda-getting-rid-of-huge-ibdata1-file/comment-page-1#comment-30779

@Shlomi

Even InnoDB-only setups still have MyISAM tables in the internal mysql.* schema.

Dumping/restoring only the InnoDB tables could be nice if there are a few big MyISAM tables.

One of the files you might want to save is master.info, if the server is a slave. And it might be nice to save the error logs and slow query logs, to compare with the situation after the restore.
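For instance, a small sketch of stashing those files before wiping the datadir; the paths in the commented invocation are illustrative and should be adjusted to your installation:

```shell
# Preserve replication metadata and server logs before a dump/reload
# rebuild, so they can be compared with the post-restore state.
save_pre_rebuild_files() {
    datadir=$1; logdir=$2; dest=$3
    mkdir -p "$dest"
    # Copy whichever of these exist; a master (non-slave) has no master.info.
    for f in "$datadir/master.info" "$logdir/error.log" "$logdir/mysql-slow.log"; do
        if [ -f "$f" ]; then
            cp "$f" "$dest/"
        fi
    done
}

# Typical invocation on a Debian-style install:
# save_pre_rebuild_files /var/lib/mysql /var/log/mysql /backup/pre-rebuild
```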

With NetApp you could use one volume with multiple Qtrees to create atomic snapshots and still use multiple mountpoints.
