{"id":6843,"date":"2014-04-20T07:16:13","date_gmt":"2014-04-20T05:16:13","guid":{"rendered":"http:\/\/code.openark.org\/blog\/?p=6843"},"modified":"2014-04-20T08:14:26","modified_gmt":"2014-04-20T06:14:26","slug":"the-mystery-of-mysql-5-6-excessive-buffer-pool-flushing","status":"publish","type":"post","link":"https:\/\/code.openark.org\/blog\/mysql\/the-mystery-of-mysql-5-6-excessive-buffer-pool-flushing","title":{"rendered":"The mystery of MySQL 5.6 excessive buffer pool flushing"},"content":{"rendered":"<p>I&#8217;m experimenting with upgrading to MySQL <strong>5.6<\/strong> and am experiencing an unexplained increase in disk I\/O utilization. After discussing this with several people, I&#8217;m publishing in the hope that someone can shed some light on it.<\/p>\n<p>We have a few dozen servers in a normal replication topology. On this particular replication topology we&#8217;ve already evaluated that <strong>STATEMENT<\/strong>-based replication is faster than <strong>ROW<\/strong>-based replication, and so we use <strong>SBR<\/strong>. We have two different workloads on our slaves, applied by two different HAProxy groups, in three different data centres. Hardware-wise, servers in these groups use either Virident SSD cards or normal SAS spindle disks.<\/p>\n<p>Our servers are I\/O bound. A common query used by both workloads looks up data that does not necessarily have a hotspot, and is very large in volume. DML is low, and we only have a few hundred statements per second executed on the master (and propagated through replication).<\/p>\n<p>We have upgraded <strong>6<\/strong> servers across all data centres to <strong>5.6<\/strong>, both on SSD and spindle disks, and are experiencing the following phenomena:<!--more--><\/p>\n<ul>\n<li>A substantial increase in disk I\/O utilization. 
See a <strong>10<\/strong>-day breakdown (the upgrade is visible on <strong>04\/14<\/strong>); it continues like this many days later:<br \/>\n<blockquote><p><a href=\"http:\/\/code.openark.org\/blog\/wp-content\/uploads\/2014\/04\/5.5-to-5.6-disk-utilization-10-days.png\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-6845 size-full\" src=\"http:\/\/code.openark.org\/blog\/wp-content\/uploads\/2014\/04\/5.5-to-5.6-disk-utilization-10-days.png\" alt=\"5.5-to-5.6-disk-utilization-10-days\" width=\"700\" height=\"400\" srcset=\"https:\/\/code.openark.org\/blog\/wp-content\/uploads\/2014\/04\/5.5-to-5.6-disk-utilization-10-days.png 700w, https:\/\/code.openark.org\/blog\/wp-content\/uploads\/2014\/04\/5.5-to-5.6-disk-utilization-10-days-300x171.png 300w\" sizes=\"auto, (max-width: 700px) 100vw, 700px\" \/><\/a><\/p><\/blockquote>\n<\/li>\n<\/ul>\n<ul>\n<li>A substantial increase in InnoDB buffer pool pages flushed: Mr. Blue is our newly upgraded server; it joins Mr. Green, upgraded a couple of weeks ago. Mr. Red is still <strong>5.5<\/strong>. 
This is the only MySQL graph that I could directly relate to the increase in I\/O:<br \/>\n<blockquote><p><a href=\"http:\/\/code.openark.org\/blog\/wp-content\/uploads\/2014\/04\/5.5-to-5.6-rise-in-innodb-buffer-pool-pages-flushed.png\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-6848\" src=\"http:\/\/code.openark.org\/blog\/wp-content\/uploads\/2014\/04\/5.5-to-5.6-rise-in-innodb-buffer-pool-pages-flushed.png\" alt=\"5.5-to-5.6-rise-in-innodb-buffer-pool-pages-flushed\" width=\"700\" height=\"350\" srcset=\"https:\/\/code.openark.org\/blog\/wp-content\/uploads\/2014\/04\/5.5-to-5.6-rise-in-innodb-buffer-pool-pages-flushed.png 700w, https:\/\/code.openark.org\/blog\/wp-content\/uploads\/2014\/04\/5.5-to-5.6-rise-in-innodb-buffer-pool-pages-flushed-300x150.png 300w\" sizes=\"auto, (max-width: 700px) 100vw, 700px\" \/><\/a><\/p><\/blockquote>\n<\/li>\n<li>No change in workload (it takes some 60 minutes for caches to warm up, so we measure after that time). The same equal share of serving as dictated by HAProxy. The same number of queries. The same amount of everything.<\/li>\n<li>Faster replication speed, on a single thread &#8211; that&#8217;s the good part! We see <strong>30%<\/strong> and more improvement in replication speed. We tested this by stopping the <strong>SLAVE SQL_THREAD<\/strong> for a pre-defined number of minutes, then measuring the time it took the slave to catch up to within 10 seconds of lag. The results vary depending on the time of day and the serving workload on the slaves, but it is <em>consistently far faster<\/em> with <strong>5.6<\/strong>.<\/li>\n<\/ul>\n<p>The faster replication speed motivates us to continue with the experiment, and is a significant factor in our decision. However, we are concerned about the I\/O utilization and excessive flushing.<\/p>\n<p>The above graphs depict the <strong>5.6<\/strong> status without any configuration changes as compared to <strong>5.5<\/strong>. 
Over several days I experimented with reconfiguring the following variables, with no change to the rate of flushed pages (though some changes were visible in double-write buffer writes):<\/p>\n<ul>\n<li>innodb_log_file_size:=128M\/2G<\/li>\n<li>innodb_adaptive_flushing:=0\/1<\/li>\n<li>innodb_adaptive_flushing_lwm:=0\/70<\/li>\n<li>innodb_max_dirty_pages_pct:=75\/90<\/li>\n<li>innodb_flush_neighbors:=0\/1<\/li>\n<li>innodb_max_dirty_pages_pct_lwm:=75\/90<\/li>\n<li>innodb_old_blocks_time:=0\/1000<\/li>\n<li>innodb_io_capacity:=50\/100\/200<\/li>\n<li>innodb_io_capacity_max:=50\/100\/1000<\/li>\n<li>relay_log_info_repository:=&#8217;table&#8217;\/&#8217;file&#8217;<\/li>\n<li>master_info_repository:=&#8217;table&#8217;\/&#8217;file&#8217;<\/li>\n<li>default_tmp_storage_engine:=&#8217;myisam&#8217;\/&#8217;innodb&#8217;<\/li>\n<li>eq_range_index_dive_limit:=0\/10<\/li>\n<\/ul>\n<p>And more&#8230; I patiently tried these one by one, or in combinations where it made sense. As you can see, I began with the usual suspects and moved on to more esoteric stuff. I concentrated on new variables introduced in <strong>5.6<\/strong>, ones whose defaults have changed, and ones whose defaults we had explicitly changed.<\/p>\n<p>The above is consistent on all upgraded servers. On SSD the disk utilization is lower, but still concerning.<\/p>\n<p>Our use case is very different from the one <a href=\"http:\/\/yoshinorimatsunobu.blogspot.co.il\/2013\/12\/single-thread-performance-regression-in.html\">presented by Yoshinori Matsunobu<\/a>, and apparently not too many have gone through a <strong>5.6<\/strong> upgrade yet. I&#8217;m hoping someone might shed some light.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>I&#8217;m experimenting with upgrading to MySQL 5.6 and am experiencing an unexplained increase in disk I\/O utilization. After discussing this with several people, I&#8217;m publishing in the hope that someone can shed some light on it. 
We have a few dozen servers in a normal replication topology. On this particular replication topology we&#8217;ve already evaluated that [&hellip;]<\/p>\n","protected":false},"author":2,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"jetpack_post_was_ever_published":false,"_jetpack_newsletter_access":"","_jetpack_dont_email_post_to_subs":false,"_jetpack_newsletter_tier_id":0,"_jetpack_memberships_contains_paywalled_content":false,"_jetpack_memberships_contains_paid_content":false,"footnotes":"","jetpack_publicize_message":"","jetpack_publicize_feature_enabled":true,"jetpack_social_post_already_shared":true,"jetpack_social_options":{"image_generator_settings":{"template":"highway","default_image_id":0,"enabled":false},"version":2}},"categories":[5],"tags":[14,52,8],"class_list":["post-6843","post","type-post","status-publish","format-standard","hentry","category-mysql","tag-innodb","tag-performance","tag-replication"],"jetpack_publicize_connections":[],"jetpack_featured_media_url":"","jetpack_sharing_enabled":true,"jetpack_shortlink":"https:\/\/wp.me\/p2bZZp-1Mn","_links":{"self":[{"href":"https:\/\/code.openark.org\/blog\/wp-json\/wp\/v2\/posts\/6843","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/code.openark.org\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/code.openark.org\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/code.openark.org\/blog\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/code.openark.org\/blog\/wp-json\/wp\/v2\/comments?post=6843"}],"version-history":[{"count":8,"href":"https:\/\/code.openark.org\/blog\/wp-json\/wp\/v2\/posts\/6843\/revisions"}],"predecessor-version":[{"id":6853,"href":"https:\/\/code.openark.org\/blog\/wp-json\/wp\/v2\/posts\/6843\/revisions\/6853"}],"wp:attachment":[{"href":"https:\/\/code.openark.org\/blog\/wp-json\/wp\/v2\/media?parent=6843"}],"wp:t
erm":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/code.openark.org\/blog\/wp-json\/wp\/v2\/categories?post=6843"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/code.openark.org\/blog\/wp-json\/wp\/v2\/tags?post=6843"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}
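The replication catch-up test described in the post (stop the SLAVE SQL_THREAD for a pre-defined number of minutes, then time how long the slave takes to return to within 10 seconds of lag) can be turned into a rough apply-speed figure. The sketch below is a minimal back-of-the-envelope model, not something from the post: it assumes the master produces binlog work at a constant rate, and the function name and all numbers are hypothetical.

```python
def apply_speed_multiple(pause_seconds: float, catchup_seconds: float) -> float:
    """Estimate how many times faster than real time the SQL thread applies events.

    Model (an assumption): pausing the SQL thread for P seconds leaves P seconds
    of master work as backlog. While catching up at s times real time, the
    backlog shrinks at (s - 1) seconds per wall-clock second, so the catch-up
    time is C = P / (s - 1), which rearranges to s = 1 + P / C.
    """
    return 1.0 + pause_seconds / catchup_seconds

# Hypothetical numbers: SQL thread stopped for 10 minutes (600 seconds).
pause = 600.0
s_55 = apply_speed_multiple(pause, 300.0)  # a 5.5 slave catching up in 300s -> 3.0x real time
s_56 = apply_speed_multiple(pause, 200.0)  # a 5.6 slave catching up in 200s -> 4.0x real time
improvement = (s_56 - s_55) / s_55         # ~33% faster apply speed
```

Since catch-up time varies with time of day and serving load (as the post notes), comparing the derived multiple rather than raw catch-up seconds makes runs taken at different pause lengths comparable.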