Got “too many connections” this morning. New connection attempts were continuously being aborted. Every once in a while one slipped through, but overall behavior was unacceptable.
max_connections is set to 500, well above normal requirements.
Immediate move: raise max_connections to 600; some urgent connections must get through. But this is no solution: if 500 connections got hogged, so will the extra 100 I’ve just made available.
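For the record, the bump itself is dynamic and needs no restart; a quick sketch, assuming a session with the SUPER privilege:

SET GLOBAL max_connections = 600;
SHOW GLOBAL VARIABLES LIKE 'max_connections';

The change does not survive a server restart, so my.cnf needs updating separately.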
So, who’s to blame? SHOW PROCESSLIST is so unfriendly for that. Wait. Didn’t I create that view in common_schema, called processlist_per_userhost? I wonder what it says…
SELECT * FROM common_schema.processlist_per_userhost;
+-------------+------------------+-----------------+------------------+---------------------+
| user        | host             | count_processes | active_processes | average_active_time |
+-------------+------------------+-----------------+------------------+---------------------+
| maatkit     | sqlhost02.myweb  |               1 |                0 |                NULL |
| rango       | webhost04.myweb  |               2 |                0 |                NULL |
| rango       | webhost07.myweb  |               8 |                0 |                NULL |
| rango       | sqlhost02.myweb  |              38 |                0 |                NULL |
| rango       | management.myweb |              35 |                0 |                NULL |
| rango       | webhost03.myweb  |              10 |                0 |                NULL |
| rango       | local01.myweb    |               8 |                0 |                NULL |
| rango       | analytic02.myweb |              11 |                0 |                NULL |
| mytop       | localhost        |               2 |                0 |                NULL |
| buttercup   | sqlhost02.myweb  |             451 |                5 |              0.0000 |
| replc_user  | sqlhost00.myweb  |               1 |                1 |         392713.0000 |
| replc_user  | sqlhost02.myweb  |               1 |                1 |          38028.0000 |
| root        | localhost        |               2 |                0 |                NULL |
| system user |                  |               2 |                2 |         196311.5000 |
+-------------+------------------+-----------------+------------------+---------------------+
Ah! It’s buttercup connecting from sqlhost02.myweb who is making a fuss. I knew that view was created for a reason.
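To see what those connections are actually doing, one can drill down via INFORMATION_SCHEMA.PROCESSLIST (available as of 5.1); a sketch, reusing the buttercup / sqlhost02.myweb names from the table above:

SELECT id, command, time, state, info
FROM INFORMATION_SCHEMA.PROCESSLIST
WHERE user = 'buttercup'
  AND host LIKE 'sqlhost02.myweb%'
ORDER BY time DESC;

(The host column in PROCESSLIST includes the client port, hence the LIKE.)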
This is easy enough to solve: some iterative process got hung, so I just killed it.
But here’s an additional mental note for common_schema: allow killing of processes by user name / host name / a combination / regex, instead of gathering the process IDs and then killing them one by one.
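Until then, the manual workaround goes something like this; a sketch that merely generates the KILL statements (same buttercup / sqlhost02.myweb pair as above), which still have to be copy-pasted back into the client:

SELECT CONCAT('KILL ', id, ';') AS kill_statement
FROM INFORMATION_SCHEMA.PROCESSLIST
WHERE user = 'buttercup'
  AND host LIKE 'sqlhost02.myweb%';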
@Rick,
As do I; but the aforementioned connections were not idle. Due to some bug, they continuously pinged the server.