Who is hogging my MySQL connections?

Got “too many connections” this morning. New connection attempts were continuously aborted. Every once in a while one slipped through, but overall the behavior was unacceptable.

max_connections is set to 500, well above normal requirements.

Immediate move: raise max_connections to 600, since some urgent connections must go through. But this is no solution: if 500 connections got hogged, so will the extra 100 I’ve just made available.
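
That bump is a quick runtime change (it won’t survive a restart, so my.cnf needs the same edit to make it permanent):

SET GLOBAL max_connections = 600;
SHOW GLOBAL VARIABLES LIKE 'max_connections';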

So, who’s to blame? SHOW PROCESSLIST is so unfriendly for that. Wait. Didn’t I create that view in common_schema, called processlist_per_userhost? I wonder what it says…

SELECT * FROM common_schema.processlist_per_userhost;
+-------------+------------------+-----------------+------------------+---------------------+
| user        | host             | count_processes | active_processes | average_active_time |
+-------------+------------------+-----------------+------------------+---------------------+
| maatkit     | sqlhost02.myweb  |               1 |                0 |                NULL |
| rango       | webhost04.myweb  |               2 |                0 |                NULL |
| rango       | webhost07.myweb  |               8 |                0 |                NULL |
| rango       | sqlhost02.myweb  |              38 |                0 |                NULL |
| rango       | management.myweb |              35 |                0 |                NULL |
| rango       | webhost03.myweb  |              10 |                0 |                NULL |
| rango       | local01.myweb    |               8 |                0 |                NULL |
| rango       | analytic02.myweb |              11 |                0 |                NULL |
| mytop       | localhost        |               2 |                0 |                NULL |
| buttercup   | sqlhost02.myweb  |             451 |                5 |              0.0000 |
| replc_user  | sqlhost00.myweb  |               1 |                1 |         392713.0000 |
| replc_user  | sqlhost02.myweb  |               1 |                1 |          38028.0000 |
| root        | localhost        |               2 |                0 |                NULL |
| system user |                  |               2 |                2 |         196311.5000 |
+-------------+------------------+-----------------+------------------+---------------------+

Ah! It’s buttercup connecting from sqlhost02.myweb who is making a fuss. I knew that view was created for a reason.
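
(For the curious, and without quoting the exact view definition from memory: conceptually it is just an aggregation over the processlist, roughly along these lines; not the literal definition, just the idea:)

SELECT
  user,
  SUBSTRING_INDEX(host, ':', 1) AS host,
  COUNT(*) AS count_processes,
  SUM(command != 'Sleep') AS active_processes,
  AVG(IF(command != 'Sleep', time, NULL)) AS average_active_time
FROM INFORMATION_SCHEMA.PROCESSLIST
GROUP BY user, SUBSTRING_INDEX(host, ':', 1);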

This is easy enough to solve: some iterative process got hung, so I just killed it.
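
Finding the culprit thread IDs is a simple processlist lookup (with my user/host plugged in):

SELECT id, time, state, info
FROM INFORMATION_SCHEMA.PROCESSLIST
WHERE user = 'buttercup' AND host LIKE 'sqlhost02.myweb%'
ORDER BY time DESC;

followed by a KILL per returned id.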

But, an additional mental note for common_schema: allow killing of processes by user name / host name / combination / regex, instead of gathering the process IDs and killing them one by one.
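
Until then, the closest workaround I know of is generating the KILL statements straight from the processlist and pasting them back into the client; a sketch, again with my user/host hard-coded:

SELECT CONCAT('KILL ', id, ';') AS kill_statement
FROM INFORMATION_SCHEMA.PROCESSLIST
WHERE user = 'buttercup' AND host LIKE 'sqlhost02.myweb%';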

11 thoughts on “Who is hogging my MySQL connections?”

  1. @Sinisa,
    You’re absolutely right about this. With the InnoDB Plugin + I_S plugins enabled, this is solvable. I may integrate this into common_schema.

    @Justin,
    Very cool. pt-mysql-summary is a very useful tool.

  2. @shlomi, regarding your desire for “killing of processes using user name / host name / combination / regex”, you can probably do that with pt-kill using the --match-user and --match-host options.

  3. I like to keep @@global.wait_timeout low enough so that those connections that fail to disconnect get booted.

    Also, if Apache is involved, I like to keep MaxClients at a reasonable level. Note: You need to have ((#apaches) * MaxClients) < ((#slaves) * max_connections)
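
    (A hedged illustration of the wait_timeout suggestion above: it is a dynamic global variable, so lowering it is a one-liner; the 60-second value here is arbitrary, and it only affects connections opened after the change:)

    SET GLOBAL wait_timeout = 60;
    SHOW GLOBAL VARIABLES LIKE 'wait_timeout';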
