How to calculate a good InnoDB log file size - recap

October 20, 2009

Following Baron Schwartz's post How to calculate a good InnoDB log file size, which shows how to estimate the InnoDB log file size, and based on SQL: querying for status difference over time, I've written a query for MySQL 5.1 which samples 60 seconds of status and estimates the InnoDB transaction log bytes expected to be written over one hour.

Recap: this information can be useful if you're looking for a good innodb_log_file_size value: one that will not cause too much I/O (smaller values make for more frequent flushes), nor make for too long a recovery time (larger values mean more transactions to recover upon crash).

It is assumed that the 60 seconds period represents an average system load, not some activity spike period. Edit the sleep time and factors as you will to sample longer or shorter periods.

  SELECT
    innodb_os_log_written_per_minute*60
      AS estimated_innodb_os_log_written_per_hour,
    CONCAT(ROUND(innodb_os_log_written_per_minute*60/1024/1024, 1), 'MB')
      AS estimated_innodb_os_log_written_per_hour_mb
  FROM
    (SELECT SUM(value) AS innodb_os_log_written_per_minute FROM (
      SELECT -variable_value AS value
        FROM INFORMATION_SCHEMA.GLOBAL_STATUS
        WHERE VARIABLE_NAME = 'innodb_os_log_written'
      UNION ALL
      SELECT SLEEP(60) FROM DUAL
      UNION ALL
      SELECT variable_value
        FROM INFORMATION_SCHEMA.GLOBAL_STATUS
        WHERE VARIABLE_NAME = 'innodb_os_log_written'
    ) s1
  ) s2
  ;
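The arithmetic behind the query is just a delta between two samples of the Innodb_os_log_written counter, scaled up to an hour. Here is a minimal Python sketch of the same calculation, with made-up sample values (in practice they'd come from INFORMATION_SCHEMA.GLOBAL_STATUS):

```python
# Two hypothetical readings of the Innodb_os_log_written counter,
# taken 60 seconds apart.
sample_before = 1_000_000_000
sample_after = 1_009_736_192

# Bytes written during the 60-second sample window.
written_per_minute = sample_after - sample_before

# Extrapolate to one hour, and express in MB as well.
estimated_per_hour = written_per_minute * 60
estimated_per_hour_mb = round(estimated_per_hour / 1024 / 1024, 1)

print(estimated_per_hour)            # → 584171520
print(f"{estimated_per_hour_mb}MB")  # → 557.1MB
```

These particular sample values were chosen so the result matches the sample output below.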

Sample output:

+------------------------------------------+---------------------------------------------+
| estimated_innodb_os_log_written_per_hour | estimated_innodb_os_log_written_per_hour_mb |
+------------------------------------------+---------------------------------------------+
|                                584171520 | 557.1MB                                     |
+------------------------------------------+---------------------------------------------+
  • Arjen

    Very cool Shlomi!
    Would be great to do this in 5.0 also, but that might be a shell thing since you can't get status within a query.

  • Hi Arjen,

    Thanks. I have an awk 'pager' for that, but it's really ugly...

  • sonny

    The GLOBAL_STATUS table does not exist in the INFORMATION_SCHEMA database. I'm using 5.1. Am I missing something...

    Please help.

  • My best guess is that you're NOT using 5.1. Perhaps just the 5.1 client.

    mysql> SELECT VERSION();

  • George

    Thanks for the snippet. Very handy.

    What would you do if you got this result though? Seems a 36G log file might be a bad thing to have.

    | estimated_innodb_os_log_written_per_hour | estimated_innodb_os_log_written_per_hour_mb |
    |                              38325872640 | 36550.4MB                                   |

  • @George,
    I suppose I would have to live with it 😀
    Seriously, Percona Server supports > 4G worth of transaction log files.
    Perhaps you can tune your app so as to produce fewer transactions; i.e. group more statements into transactions (this may not reduce overall size, but will help out with flushes), work out smarter queries, take logging out of the database...

  • That's nice for instantaneous (one minute's worth), but you might be in the middle of a spike or a lull. Instead, I use this (in newer versions of MySQL):

    Innodb_os_log_written / (Uptime / 3600) / innodb_log_files_in_group

    That gives a value for the log file size, assuming that rolling over once an hour is a good target.

    I question the need for log files bigger than 4GB. I checked 213 machines here; one machine "wants" 5GB, and it was the only one over 3GB.
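Rick's formula above can be sketched as a small Python helper (the server numbers below are hypothetical, purely for illustration):

```python
def suggested_log_file_size(innodb_os_log_written, uptime_seconds,
                            log_files_in_group):
    """Suggest an innodb_log_file_size, assuming that rolling over
    the whole log group about once an hour is a good target."""
    hours_up = uptime_seconds / 3600
    bytes_per_hour = innodb_os_log_written / hours_up
    return bytes_per_hour / log_files_in_group

# Hypothetical server: up 30 days, 100GB of log written, 2 log files.
size = suggested_log_file_size(
    innodb_os_log_written=100 * 1024**3,
    uptime_seconds=30 * 24 * 3600,
    log_files_in_group=2,
)
print(f"{size / 1024 / 1024:.1f}MB per log file")  # → 71.1MB per log file
```

By averaging over the whole uptime rather than a one-minute sample, this smooths out spikes and lulls, which is exactly the point Rick makes.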

  • McAuley

    Yeah, when I get a result like this:
    9464024862720 9025597.4MB
    it makes me question the accuracy of this kind of calc. I think looking at the "rollover" of the log files in relation to their size works better for me.

  • McAuley

    @Rick James, yes... I tested your calc on several of our servers and it looks to be a lot more reasonable. Both "Uptime" and "Innodb_os_log_written" are now part of the global status. My tests using your formula averaged about 42MB, instead of the 902+ GB that the other formula rendered.

  • Rick James

    I attended Oracle's MySQL Connect last weekend. One tidbit I picked up about 5.6:
    There is less need now to avoid extra-large log files. Apparently recovery has been made more efficient. Also, log files can now be bigger than 4GB.

    More details:

    No need to ever have more than 2 log_files_in_group.

  • Pingback: Working with MySQL on SSD
