I’m further developing a general log hook, which can stream queries from the general log.
A particular direction I’m taking is to filter queries by the type of action they perform. For example, the tool (oak-hook-general-log) can be instructed to only stream out those queries which involve the creation of a temporary table, or those which cause a filesort, a full index scan, etc.
This is done by evaluating query execution plans on the fly. I suspect the MySQL query analyzer does roughly the same (as a small part of what it does).
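To illustrate the idea (a simplified sketch, not the actual oak-hook-general-log code; connection details and function names here are mine), one can EXPLAIN each streamed query and inspect the plan’s type and Extra columns:

import mysql.connector

def plan_features(conn, query):
    # Run EXPLAIN for the query and collect plan features of interest.
    cur = conn.cursor(dictionary=True)
    cur.execute("EXPLAIN " + query)
    features = set()
    for row in cur.fetchall():
        extra = row.get("Extra") or ""
        if "Using temporary" in extra:
            features.add("temporary_table")
        if "Using filesort" in extra:
            features.add("filesort")
        if row.get("type") == "index":
            features.add("full_index_scan")
    cur.close()
    return features

conn = mysql.connector.connect(host="localhost", user="root", password="xxx")
conn.database = "sakila"
if "filesort" in plan_features(conn, "SELECT * FROM film ORDER BY length"):
    print("query causes a filesort")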
There’s almost nothing here one couldn’t do with sed/awk. However, I bumped into a couple of problems:
- The general log (and the mysql.general_log table in particular) does not indicate the database one is using for the query. Since the slow log does include this information, I filed a bug on this. I currently solve it by cross-referencing the connection id with the process list, where the current database is listed (see the sketch right after this list). It’s shaky, but mostly works.
- Just realized: there’s no DB info in the EXPLAIN output! This is weird, since I’ve been EXPLAINing queries for years now. But I’ve always had the advantage of knowing the schema in use: either because I was manually executing the query on a known schema, or because mk-query-digest was kind enough to let me know.
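Here is roughly what the processlist workaround looks like (a minimal sketch; the function name is mine, and it assumes the general log row’s thread_id is at hand):

def current_database_for(conn, thread_id):
    # Cross-reference the general log's thread_id with the process list
    # to find that connection's current default database.
    cur = conn.cursor()
    cur.execute(
        "SELECT DB FROM INFORMATION_SCHEMA.PROCESSLIST WHERE ID = %s",
        (thread_id,))
    row = cur.fetchone()
    cur.close()
    return row[0] if row else None

This is inherently racy: the connection may have switched databases (or disconnected) between the time the query was logged and the time the process list is read, which is why it only mostly works.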
For example, look at the following imaginary query, involving both the world and sakila databases:
mysql> use test;
Database changed
mysql> EXPLAIN SELECT * FROM world.Country JOIN sakila.city WHERE Country.Capital = city.city_id;
+----+-------------+---------+--------+---------------+---------+---------+-----------------------+------+-------------+
| id | select_type | table   | type   | possible_keys | key     | key_len | ref                   | rows | Extra       |
+----+-------------+---------+--------+---------------+---------+---------+-----------------------+------+-------------+
|  1 | SIMPLE      | Country | ALL    | NULL          | NULL    | NULL    | NULL                  |  239 |             |
|  1 | SIMPLE      | city    | eq_ref | PRIMARY       | PRIMARY | 2       | world.Country.Capital |    1 | Using where |
+----+-------------+---------+--------+---------------+---------+---------+-----------------------+------+-------------+
2 rows in set (0.00 sec)
The query is imaginary, since the tables don’t actually have anything in common. But look at the EXPLAIN result: can you tell where city came from? Country can somehow be parsed from the ref column, but there’s no help on city.
Moreover, table aliases show up in the EXPLAIN plan (which is good), but with no reference to the original table.
So, is it back to parsing the SQL query? I’m reluctant (too lazy, really) to do that. It’s error prone, and one needs to implement, or use, a good parser which also accepts the MySQL dialect. I haven’t looked into this yet.
I’m currently at a standstill with regard to automated query execution plan evaluation where the database cannot be determined.
Exactly. All issues you describe here are really showstoppers for *automated* analysis of the general log. Users analyzing *interactively* may have workarounds of the type you describe.
What about using EXPLAIN EXTENDED? The warning it generates is the rewritten query, in full database.table.column notation, which you can parse alongside the standard EXPLAIN output.
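For illustration, a rough sketch of that flow (Python, mysql.connector; the function name is illustrative). EXPLAIN EXTENDED produces a Note (code 1003) whose message is the rewritten query:

def canonical_query(conn, query):
    # EXPLAIN EXTENDED, then read the rewritten query from SHOW WARNINGS.
    cur = conn.cursor()
    cur.execute("EXPLAIN EXTENDED " + query)
    cur.fetchall()                    # the plan itself; not needed here
    cur.execute("SHOW WARNINGS")
    rewritten = None
    for level, code, message in cur.fetchall():
        if code == 1003:              # the Note holding the rewritten query
            rewritten = message
    cur.close()
    return rewritten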
@Rob,
This still leaves me with SQL parsing. Not only do I need a good parser, but I will also need to be able to cross-match the parser’s results with the execution plan.
I confess I haven’t even tried it; but I can think of a couple of problems along the way. For example, what’s the alias for a derived table? EXPLAIN uses ‘derived1’ etc. How can I follow the exact same path?
But yes, the result of EXPLAIN EXTENDED is far more verbose than the original query. Thank you.
@Peter,
I see you’ve tried the same path…
Shlomi, Rob has a point. The “canonical” query in the EXPLAIN EXTENDED output is very ugly, but probably quite easy to parse, as all identifiers are quoted, join conditions are fully parenthesized, and whitespace is normalized: a lot of headaches gone.
I suspect you could even do it with a hand-crafted operator precedence parser.
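For illustration, even a simple regular expression can recover the fully qualified table references (and their aliases) from the rewritten query, since every identifier is back-quoted. This is a sketch, not a full parser; derived tables and subqueries would still need special handling:

import re

TABLE_REF = re.compile(
    r"`(?P<db>[^`]+)`\.`(?P<table>[^`]+)`(?:\s+`(?P<alias>[^`]+)`)?")

def table_references(canonical_sql):
    # Map alias (or table name) -> (database, table).
    refs = {}
    for m in TABLE_REF.finditer(canonical_sql):
        alias = m.group("alias") or m.group("table")
        refs[alias] = (m.group("db"), m.group("table"))
    return refs

# table_references("select ... from `world`.`Country` join `sakila`.`city` where ...")
# -> {'Country': ('world', 'Country'), 'city': ('sakila', 'city')}

Since the EXPLAIN output’s table column shows the alias, such a mapping would be enough to tell which database each row of the plan refers to.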