SQL: selecting top N records per group

A while back I presented an SQL trick for returning a non-aggregated column in a GROUP BY query, without the use of subqueries or derived tables.

Based on a similar concept, combined with string walking, I now present a query which selects the top N records for each group, ordered by some condition. It requires no subqueries, and executes faster than its more conventional alternatives.

[UPDATE: this is MySQL only. Others can use Window Functions where available]

Using the simple world database, we answer the following question:

What are the top 5 largest (by area) countries for each continent? What are their names, surface area and population?

Similar questions would be:

What were the latest 5 films rented by each customer?

What were the most presented advertisements for each user?

etc.

Step 1: getting the top

We already know how to get a single column’s value for the top country, as presented in the aforementioned post:
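Roughly, that trick looks like this (a sketch, assuming group_concat_max_len is large enough, and using a separator that does not appear in country names):

SELECT
  Continent,
  SUBSTRING_INDEX(
    GROUP_CONCAT(Name ORDER BY SurfaceArea DESC SEPARATOR '|'),
    '|', 1) AS largest_country
FROM Country
GROUP BY Continent;

Continue reading » “SQL: selecting top N records per group”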

Speaking at the O’Reilly MySQL Conference 2011

I’m very pleased and humbled to announce that my submission to the upcoming O’Reilly MySQL Conference, April 2011, has been accepted.

I will present a 45 minute session titled openark-kit: MySQL utilities for everyday use.

In this session, I will present some of the tools in the openark kit. We’ll discuss some limitations of the MySQL server, and how openark kit tools overcome those limitations and provide solutions to common maintenance and audit problems.

This will be a technical session and will discuss various topics of the MySQL server: security, execution plans, replication, triggers and more. I do not intend to discuss all the tools, nor to cover their various options. Instead, I’ll present the “behind the scenes”, show why the tools work, and present common problems and typical use cases.

This will be the first time I present at the MySQL Conference (or at any conference outside Israel, for that matter). I hope to deliver a good session. As an extra measure of safety, I’ll bring along a couple of basketballs; if the sun shines, we can all go outside and have a good time!

The idea to submit this talk (credit: Roland Bouman) has given me the inspiration to put effort into making a new release with new and updated tools. So this talk is already a success as far as I’m concerned.

Hope to see you there!

[PS shameless plug: openark kit.]

oak-hook-general-log: your poor man’s Query Analyzer

The latest release of openark kit introduces oak-hook-general-log, a handy tool which allows for some analysis of executing queries.

Initially I just intended for the tool to be able to dump the general log to standard output, from any machine capable of connecting to MySQL. Quickly enough, I realized the power it brings.

With this tool, one can dump to standard output all queries using temporary tables; or using a specific index; or doing a full index scan; or just follow up on connections; or… For example, the following execution will only log queries which make for filesort:

oak-hook-general-log --user=root --host=localhost --password=123456 --filter-explain-filesort

The problem with using the standard logs

So you have the general log, which you don’t often enable, since it tends to grow huge within moments. Then you have the slow log. The slow log is great, and is among the top tools for MySQL diagnosis.

The slow log allows for log-queries-not-using-indexes, which is yet another nice feature. Not only can you log any query running for over X seconds, but also any query which does not use an index.
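On MySQL 5.1, where these settings are dynamic, enabling all this amounts to something like the following (on older versions these are configuration file options only):

SET GLOBAL slow_query_log = 1;
SET GLOBAL long_query_time = 1;                  -- the "X seconds" threshold
SET GLOBAL log_queries_not_using_indexes = 1;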

Wait. This logs all queries on single-row tables (no single-row table will use an index), as well as on very small tables (a common 20-row lookup table will most often be scanned). These are OK scans. This makes for some noise in the slow log.

And what about queries which do use an index, but do so poorly? Queries which use an index, yet retrieve some 12,500,000 rows, using a temporary table & filesort?

What oak-hook-general-log does for you

This tool streams out the general log, filtering queries based on their role or their execution plan.

To work at all, it must enable the general log. Moreover, it directs the general log to a log table (the mysql.general_log table; see the sketch following the list below). Mind that this makes for a performance impact, which is why the tool auto-terminates and restores the original log settings (the default is 1 minute, configurable). It’s really not a tool you should keep running for days. But during the few moments it runs, it will:

  • Routinely rotate the mysql.general_log table so that it doesn’t fill up
  • Examine entries found in the general log
  • Cross reference entries to the PROCESSLIST so as to deduce database context (bug #52554)
  • If required and appropriate, evaluate a query’s execution plan
  • Decide whether to dump each entry based on filtering rules
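In SQL terms, the setup is essentially the following (a sketch of the settings involved; the tool itself takes care of rotation, cross referencing and cleanup):

SET GLOBAL log_output = 'TABLE';
SET GLOBAL general_log = 1;
-- Entries now accumulate in the mysql.general_log table:
SELECT event_time, user_host, argument
FROM mysql.general_log
ORDER BY event_time DESC
LIMIT 10;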

Filtering rules

Filtering rules are passed as command line options. Currently, only one filtering rule applies (if more than one is specified, only one is used, so there’s no point in passing more than one). Some of the rules are: Continue reading » “oak-hook-general-log: your poor man’s Query Analyzer”

openark-kit (rev. 170): new tools, new functionality

I’m pleased to announce a new release of the openark kit. There’s a lot of new functionality inside; following is a brief overview.

The openark kit is a set of utilities for MySQL. They solve everyday maintenance tasks, which may be complicated or time consuming to perform by hand.

It’s been a while since the last announced release. Most of my attention was on mycheckpoint, building new features, writing documentation etc. However, my own use of the openark kit has only increased in the past few months, and new, useful solutions to common problems have been developed.

I’ve used and improved many tools over this time, but doing the final cut, along with proper documentation, took some time. Anyway, here are the highlights:

New tool: oak-hook-general-log

oak-hook-general-log hooks onto a MySQL server and dumps the general log based on filtering rules, which apply to query role or execution plan. It is possible to dump only connect/disconnect entries, queries which make a full table scan, queries which use temporary tables, queries which scan more than X rows, or…

I’ll write more on this tool shortly.

New tool: oak-prepare-shutdown

This tool makes for an orderly and faster shutdown by safely stopping replication and flushing InnoDB pages to disk prior to shutting down (keeping the server available for connections even while attempting to flush dirty pages to disk). A typical use case would be:

oak-prepare-shutdown --user=root --ask-pass --socket=/tmp/mysql.sock && /etc/init.d/mysql stop
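Under the hood, the preparation is roughly equivalent to the following (a simplified sketch; the actual tool monitors progress and restores settings):

STOP SLAVE;
SET GLOBAL innodb_max_dirty_pages_pct = 0;
-- Poll until the number of dirty pages approaches zero:
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages_dirty';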

New tool: oak-repeat-query

oak-repeat-query repeats executing a given query until some condition holds. The condition can be:

  • A given number of iterations has been reached
  • A given time has elapsed
  • No rows have been affected by the query

The tool comes in handy for cleanup jobs, warming up caches, etc.
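For example, a hypothetical cleanup one might hand to the tool, to be repeated in chunks until no more rows are affected (table and condition are illustrative only):

DELETE FROM world.City WHERE Population = 0 LIMIT 1000;

Continue reading » “openark-kit (rev. 170): new tools, new functionality”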

Where’s my cnf file?

So you have a running MySQL server; it’s working well and everyone’s happy. You want to make a minor change to the configuration file, so you edit the file and restart MySQL – but the change doesn’t take effect!

Or maybe you want to check that some global variable has not been dynamically changed without an update to the configuration file. But the configuration file doesn’t make any sense: it looks like the file and the running server have nothing in common.

Wait, which my.cnf file does MySQL read? Rather, which my.cnf files?

Ever happened to you? If you’re well organized, and only keep a single /etc/my.cnf file, you know exactly where everything is. But some systems are messier, with lots of configuration files hanging around. Which ones apply?

Let’s find out which configuration files apply.

No direct information

It would all be easier if we could just SHOW GLOBAL VARIABLES LIKE ‘configuration_files_that_this_server_has_read_list’. There is no such option.
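One common trick, assuming you may execute the mysqld binary, is to ask it directly; it prints the default option files in the order they are read:

mysqld --verbose --help 2>/dev/null | grep -A1 "Default options"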

Continue reading » “Where’s my cnf file?”

Guide to MySQL installation files

Even for DBAs already familiar with MySQL, the choice of installation methods and the variety of install/package files are overwhelming and confusing.

I’ll make a (very!) brief introduction to the various installation options, concentrating on the Linux operating system, and provide a simple shopping list.

For illustration, I’ll refer to the MySQL 5.1.52 community edition, the latest at the time of this writing. Downloads are available at http://dev.mysql.com/downloads/mysql/5.1.html.

I will then refer to alternate distributions.

A note for Windows users

You are lucky: your choice is very clear. Download the Windows MSI package. Choose 32 or 64 bit architecture according to your OS. If you know about the other ways to install and setup MySQL under Windows, you don’t need this post.

Linux: repositories

The easiest way to install MySQL on Linux is to use your distro’s repository. Just:

sudo apt-get install mysql-server-5.1

or

sudo yum install mysql-server

Your distro should resolve any package dependencies.

I’m of the opinion that if MySQL is the main application to be used on a server, the distribution’s default repositories are not the way to go. Reasons include outdated MySQL versions, the inability to install multiple instances, and the danger of automatic upgrades or downgrades. Read this post and the discussion that follows for more.

Linux: packages

If you’re a RedHat/CentOS/SuSE user, you’re in luck: MySQL provides pre-built RPM packages for your system. I vaguely remember MySQL announcing that Ubuntu was to be supported; that was a couple of years ago, and there is still no package for Debian/Ubuntu.
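Installation is then a matter of (file name for illustration only; pick the packages matching your platform and architecture):

sudo rpm -ivh MySQL-server-community-5.1.52-1.rhel5.x86_64.rpm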

Continue reading » “Guide to MySQL installation files”

An argument for using mysqldump

I fully agree with Morgan’s An argument for not using mysqldump. This post does not come to contradict it, but rather to shed a positive light on mysqldump.

I usually prefer an LVM snapshot based backup, or using XtraBackup. And, with databases as large as dozens of GB and above, I consider mysqldump to be a poor alternative. Poor in runtime, poor in overhead while taking the backup.

However once in a while I get to be reminded that mysqldump just works.

As a recent example, I had a server which was killed after an ALTER TABLE statement hung forever (the table was already ALTERed, but the old schema was never dropped). The old table data still hung around the file system, but was not recognized by InnoDB. Trying out DISCARD TABLESPACE did not do the job, and eventually the file was dropped.

So far, reasonable. InnoDB would complain about some table it never recognized in the first place, but all would work. That is, until backup came into the picture. With innobackup or XtraBackup the restore would fail on some internal problem. LVM would work, but would only copy the problem along: innobackup could never again be used on this database. Continue reading » “An argument for using mysqldump”

mycheckpoint (rev 208): aggregation tables, enhanced charting, RPM distribution

Revision 208 of mycheckpoint, a MySQL monitoring solution, has been released. New and updated in this revision:

  • Aggregation tables: aggregated data makes for fast reports on previously slow queries.
  • Enhanced charting: interactive charts now present time stamps dynamically (see demo); “Zoom in” charts are available (see demo) on mycheckpoint‘s HTTP server.
  • RPM distribution: a “noarch” RPM mycheckpoint build is now available.
  • Initial work on formalizing a test environment

mycheckpoint celebrates one year of existence!

Aggregation tables

I really wanted to avoid using these: everything was so much more beautiful with one single dataset and dozens of supporting views (OK, the views themselves are hardly “beautiful”).

However, it was impossible (at my level of expertise) to optimize query performance with all those views doing per-hour and per-day aggregation. The GROUP BYs and the JOINs made condition pushdown (i.e. use of the MERGE view algorithm) impossible where it was desired.
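To illustrate (view and table names here are made up, not mycheckpoint’s actual schema): MySQL silently falls back to the TEMPTABLE algorithm when a view aggregates, so a condition on the view is not pushed down to the base table:

CREATE ALGORITHM=MERGE VIEW report_per_hour AS
  SELECT DATE(ts) AS d, HOUR(ts) AS h, MAX(some_counter) AS peak
  FROM samples
  GROUP BY DATE(ts), HOUR(ts);
-- The GROUP BY forces TEMPTABLE (with a warning); a condition such as
-- WHERE d = '2010-11-01' only applies after the entire view is materialized.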

As a result, mycheckpoint now manages aggregation tables: per-hour and per-day. The impact on sample taking is negligible (two additional fast queries), but the impact on reading aggregated data is overwhelming. Generating a full HTML report used to take a few minutes to complete; it now returns in no time. This makes charting more attractive, and allows for enhanced charting, such as zooming in on charts, as described below.

Aggregation tables will automatically be created and retroactively populated upon using revision 208. There’s nothing special to do; just be advised that on that one execution of mycheckpoint, many INSERT queries are going to be executed. This shouldn’t take more than a couple of minutes on commodity hardware with a few months of history.

It is possible to disable aggregation tables, or to force a complete rebuild of the tables; by default, though, aggregation is ON.

Enhanced charting

Two enhancements here: Continue reading » “mycheckpoint (rev 208): aggregation tables, enhanced charting, RPM distribution”

MySQL terminology: processes, threads & connections

There’s some confusion in the MySQL terminology for processes, threads & connections, which I will try to resolve. I can’t explain the full nature of what processes and threads are; please see Wikipedia [1] [2] for that. But here’s some basics with regard to MySQL:

  • MySQL server is a single process application.
  • It is multithreaded.
  • It (usually) acts as a TCP/IP server, accepting connections.
  • Each connection gets a dedicated thread.
  • These threads are sometimes named processes, and sometimes they’re referred to as connections.

The last part is where confusion arises, so let me discuss again the use of threads and connections in MySQL.
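To illustrate the overloaded naming (the Id below is hypothetical):

SHOW PROCESSLIST;                             -- one row per connection, each served by a dedicated thread
SHOW GLOBAL STATUS LIKE 'Threads_connected';  -- the very same count, now named "threads"
KILL 12345;                                   -- "killing a process": terminating one connection's thread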

Continue reading » “MySQL terminology: processes, threads & connections”

Simple guideline for choosing appropriate InnoDB PRIMARY KEYs

Risking some flames, I’d like to suggest only two options for choosing PRIMARY KEYs for InnoDB tables. I suggest they should cover 99% (throwing numbers around) of cases.

PRIMARY KEY cases

  1. An integer (SMALLINT / INT / BIGINT), possibly AUTO_INCREMENT column.
  2. The combination of two columns on a many-to-many connecting table (e.g. film_actor, which connects films to actors), the two columns being the PRIMARY KEYs of respective data tables. This rule may be extended to 3-way relation tables.

A short recap: an InnoDB table must have a PRIMARY KEY. InnoDB will pick one if you don’t offer it. It can pick a really bad UNIQUE KEY (e.g. website_url(255)) or make one up using InnoDB internal row ids. If you don’t have a good candidate, an AUTO_INCREMENT PRIMARY KEY is probably the easiest way out.

A 2-column combination for a many-to-many connection table is common and viable. The PRIMARY KEY will not only provide a good join access method, but will also provide the required UNIQUE constraint.
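Both cases, sketched (schema simplified from sakila’s film_actor for illustration):

CREATE TABLE film (
  film_id INT UNSIGNED AUTO_INCREMENT,
  title VARCHAR(255) NOT NULL,
  PRIMARY KEY (film_id)
) ENGINE=InnoDB;

CREATE TABLE film_actor (
  film_id INT UNSIGNED NOT NULL,
  actor_id INT UNSIGNED NOT NULL,
  PRIMARY KEY (film_id, actor_id)
) ENGINE=InnoDB;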

An integer-based PRIMARY KEY will make for more compact & shallow index tree structures, which leads to less I/O and fewer page reads.

An AUTO_INCREMENT will allow for ascending PRIMARY KEY order of INSERT, which is InnoDB-friendly: index pages will be more utilized, less fragmented. Continue reading » “Simple guideline for choosing appropriate InnoDB PRIMARY KEYs”