Initially I just intended the tool to dump the general log to standard output, from any machine able to connect to MySQL. Quickly enough, I realized the power this brings.
With this tool, one can dump to standard output all queries using temporary tables; or using a specific index; or doing a full index scan; or just follow up on connections; or... For example, the following execution will only log queries which perform a filesort:
oak-hook-general-log --user=root --host=localhost --password=123456 --filter-explain-filesort
The problem with using the standard logs
So you have the general log, which you don't often enable, since it tends to grow huge within moments. Then you have the slow log. The slow log is great, and is among the top tools for MySQL diagnosis.
The slow log also supports log-queries-not-using-indexes, which is yet another nice feature: not only do you log any query running for over X seconds, you also log any query which does not use an index.
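For reference, these are standard server options; in my.cnf they would look something like the following (the file path and threshold values are examples only):

```
[mysqld]
slow_query_log                = 1
slow_query_log_file           = /var/log/mysql/mysql-slow.log
long_query_time               = 1
log-queries-not-using-indexes = 1
```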
Wait. This logs all queries on single-row tables (no single-row table will use an index), as well as on very small tables (a common 20-row lookup table will most often be scanned). These are OK scans. This makes for some noise in the slow log.
And what about queries which do use an index, but do so poorly? They use an index, yet retrieve some 12,500,000 rows, using a temporary table and filesort.
What oak-hook-general-log does for you
This tool streams out the general log, and filters out queries based on their role or on their execution plan.
To work at all, it must enable the general log. Moreover, it directs the general log to the mysql.general_log table. Mind that this has a performance impact, which is why the tool auto-terminates and restores the original log settings (the default is 1 minute, configurable). It's really not a tool you should keep running for days. But during the few moments it runs, it will:
- Routinely rotate the mysql.general_log table so that it doesn't fill up
- Examine entries found in the general log
- Cross reference entries to the PROCESSLIST so as to deduce database context (bug #52554)
- If required and appropriate, evaluate a query's execution plan
- Decide whether to dump each entry based on filtering rules
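The log redirection above relies on standard MySQL statements; something along these lines (the exact sequencing, and the name of the swap table, are my assumptions, not the tool's actual code):

```sql
-- Direct the general log to the mysql.general_log table, and enable it
SET GLOBAL log_output = 'TABLE';
SET GLOBAL general_log = 'ON';

-- Rotation sketch: swap in a fresh, empty log table so that
-- mysql.general_log doesn't fill up ("general_log_shadow" is a
-- hypothetical name for illustration)
CREATE TABLE IF NOT EXISTS mysql.general_log_shadow LIKE mysql.general_log;
RENAME TABLE mysql.general_log TO mysql.general_log_old,
             mysql.general_log_shadow TO mysql.general_log;
```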
Filtering rules are passed as command line options. At present, only one filtering rule applies per invocation (if more than one is specified, only one is used, so there is no point in passing several). Some of the rules are:
- filter-connection: only log connect/quit entries
- filter-explain-fullscan: only log full table scans
- filter-explain-temporary: only log queries which create implicit temporary tables
- filter-explain-rows-exceed: only log queries where more than X rows are estimated to be accessed on some single table
- filter-explain-total-rows-exceed: only log queries where more than X rows are estimated to be accessed on all tables combined (the estimates can be off for some queries)
- filter-explain-key: only log queries using a specific index. This feature somewhat overlaps with Maatkit's mk-index-usage (read announcement).
- filter-explain-contains: a general purpose grep on the execution plan. Log queries where the execution plan contains some text.
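Conceptually, the EXPLAIN-based filters boil down to simple predicates over the rows of a query's execution plan. Here is a hedged Python sketch, not the tool's actual code; the dict keys mirror MySQL's EXPLAIN output columns (`type`, `key`, `rows`, `Extra`):

```python
# Illustrative sketch of EXPLAIN-based filtering predicates.
# A "plan" is a list of dicts, one per EXPLAIN row.

def filter_explain_fullscan(plan):
    # A full table scan shows up as access type ALL on some table.
    return any(row.get("type") == "ALL" for row in plan)

def filter_explain_temporary(plan):
    # Implicit temporary tables are reported in the Extra column.
    return any("Using temporary" in (row.get("Extra") or "") for row in plan)

def filter_explain_rows_exceed(plan, threshold):
    # More than `threshold` rows estimated on some single table.
    return any((row.get("rows") or 0) > threshold for row in plan)

def filter_explain_total_rows_exceed(plan, threshold):
    # Row estimates multiply across joined tables (a rough upper bound).
    total = 1
    for row in plan:
        total *= (row.get("rows") or 1)
    return total > threshold

def filter_explain_key(plan, key_name):
    # The query uses the named index on some table.
    return any(row.get("key") == key_name for row in plan)

# Example: a two-table join; one side is scanned, with temporary + filesort.
plan = [
    {"type": "ALL", "key": None, "rows": 20000,
     "Extra": "Using temporary; Using filesort"},
    {"type": "ref", "key": "ix_user_id", "rows": 5, "Extra": ""},
]
print(filter_explain_fullscan(plan))                  # True
print(filter_explain_total_rows_exceed(plan, 50000))  # True (20000 * 5)
print(filter_explain_key(plan, "ix_user_id"))         # True
```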
There are other filters, and I will possibly add more in due time.
Here are a couple cases I used oak-hook-general-log for:
Use case: temporary tables
I have a server with this alarming chart (courtesy mycheckpoint) of temporary tables:
What could possibly create 30 temporary tables per second on average?
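For the record, the per-second figure such charts show is just the delta of the server's Created_tmp_tables status counter over the sampling interval. A minimal sketch (the sample values below are made up for illustration; a real monitor would read them via SHOW GLOBAL STATUS):

```python
# Approximate the temporary-tables-per-second rate from two samples of
# MySQL's Created_tmp_tables status counter, taken `interval_seconds` apart.

def tmp_tables_per_second(sample1, sample2, interval_seconds):
    return (sample2 - sample1) / float(interval_seconds)

# Two hypothetical samples taken 5 minutes (300 seconds) apart:
rate = tmp_tables_per_second(1_000_000, 1_009_000, 300)
print(rate)  # 30.0 temporary tables/second on average
```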
The slow log produced nothing helpful, even with log-queries-not-using-indexes enabled. There were a lot of queries not using indexes there, but nothing at these numbers. With the tool's --filter-explain-temporary filter enabled for 1 minute, nothing came out. Weird. Enabled for 5 minutes, I got one entry. It turned out a scheduled script, acting once every 5 minutes, was making a single complicated query involving many nested views, which accounted for some hundreds of temporary tables created. All of them were very small, and query time was very fast. There is no temporary tables problem with this server; case closed.
Use case: connections
A server had issues with exceptions being thrown on the client side. A large number of new connections were being created per second, although the client was using a connection pool. Suspecting the pool didn't work well, I ran the tool with --filter-connection.
The pool was working well, all right. No entries for that client were recorded in 1 minute of testing. However, it turned out some old script was flooding the MySQL server with requests every second. The log showed root@somehost, and the script was promptly disabled. The exceptions were due to another reason; still, it was good to eliminate a suspect.
Some of the tool's use cases are relatively easy to solve with tail, grep & awk; others are not. I am using it more and more often, and find that it provides significant shortcuts in tracking down queries.
Download the tool as part of openark kit: access the openark kit project page.
Or get the source code directly.
Feedback is most welcome.