How common_schema split()s tables – internals

common_schema/QueryScript 1.1 introduces the split statement, which auto-breaks a “large” query (one which operates on large tables as a whole, or without keys) into smaller queries, and executes them in sequence. This post exposes some of the internals, and the SQL behind QueryScript’s split.

This makes for easier transactions, fewer locks held, and potentially (depending on the user) more idle time released back to the database. split shares concepts with oak-chunk-update and pt-archiver, but works differently, and is implemented entirely in SQL, on the server side.

Take the following statement as example:

split (UPDATE sakila.inventory SET last_update = last_update + INTERVAL 6 HOUR)
  pass;

It yields (roughly) the following statements:

UPDATE sakila.inventory SET last_update = last_update + INTERVAL 6 HOUR WHERE ((((`inventory`.`inventory_id` > '1')) OR ((`inventory`.`inventory_id` = '1'))) AND (((`inventory`.`inventory_id` < '1000')) OR ((`inventory`.`inventory_id` = '1000'))));
UPDATE sakila.inventory SET last_update = last_update + INTERVAL 6 HOUR WHERE ((((`inventory`.`inventory_id` > '1000'))) AND (((`inventory`.`inventory_id` < '2000')) OR ((`inventory`.`inventory_id` = '2000'))));
UPDATE sakila.inventory SET last_update = last_update + INTERVAL 6 HOUR WHERE ((((`inventory`.`inventory_id` > '2000'))) AND (((`inventory`.`inventory_id` < '3000')) OR ((`inventory`.`inventory_id` = '3000'))));
UPDATE sakila.inventory SET last_update = last_update + INTERVAL 6 HOUR WHERE ((((`inventory`.`inventory_id` > '3000'))) AND (((`inventory`.`inventory_id` < '4000')) OR ((`inventory`.`inventory_id` = '4000'))));
UPDATE sakila.inventory SET last_update = last_update + INTERVAL 6 HOUR WHERE ((((`inventory`.`inventory_id` > '4000'))) AND (((`inventory`.`inventory_id` < '4581')) OR ((`inventory`.`inventory_id` = '4581'))));

(I say “roughly” because internally there are user defined variables at play; for convenience, I spell out the actual values as constants.)

How does that work?

common_schema works on the server side. There is no Perl script or anything external. It must therefore use server-side operations to:

  • Identify the table to be split
  • Analyze the table in the first place, deciding how to split it
  • Analyze the query, deciding on how to rewrite it
  • Split the table (logically) into unique and distinct chunks (a boundary-computation sketch follows this list)
  • Work out the query on each such chunk
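
To illustrate the chunking step, here is a minimal sketch of how a chunk’s upper boundary can be computed with a single indexed query. This is an illustration of the technique only, not necessarily common_schema’s exact internal query; @chunk_start is assumed to hold the previous boundary:

-- Find the key value 1,000 rows ahead of the current position
-- (the last chunk needs special handling when fewer rows remain):
SELECT inventory_id INTO @chunk_end
  FROM sakila.inventory
  WHERE inventory_id > @chunk_start
  ORDER BY inventory_id
  LIMIT 999, 1;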

Following is an internal look at how common_schema does all the above. Continue reading » “How common_schema split()s tables – internals”

Table split(…) for the masses

(pun intended)

common_schema’s new split statement (see release announcement) auto-splits complex queries over large tables into smaller ones: instead of issuing one huge query, split breaks one’s query into smaller queries, each working on a different set of rows (a chunk).

Thus, it is possible to avoid holding locks for long periods, allowing for smaller transactions. It also makes for breathing space for the RDBMS, at times boosting operation speed, and at times deliberately prolonging the operation (throttling) at will.

In this post I show how split exposes itself to the user, should the user wish to look.

split can manage queries of the following forms:

  • DELETE FROM table_name [WHERE]…
  • DELETE FROM table_name USING <multi table syntax> [WHERE]…
  • UPDATE table_name SET … [WHERE]…
  • UPDATE <multiple tables> SET … [WHERE]…
  • INSERT INTO some_table SELECT … FROM <single or multiple tables> [WHERE]…
  • REPLACE INTO some_table SELECT … FROM <single or multiple tables> [WHERE]…
  • SELECT … FROM <multiple tables> [WHERE]…

The last form is perhaps non-obvious at first sight.

Basically, it’s automatic

You just say:

split (UPDATE sakila.inventory SET last_update = last_update + INTERVAL 6 HOUR)
  throttle 2;

And split identifies sakila.inventory as the table which needs to be split, and injects appropriate conditions so as to work on a subset of the rows, in multiple steps.
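
For example, a script such as the above can be executed via common_schema’s run() routine; a minimal sketch:

CALL common_schema.run('
  split (UPDATE sakila.inventory SET last_update = last_update + INTERVAL 6 HOUR)
    throttle 2;
');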

By the way, here’s more on how to execute QueryScript code like the above. Continue reading » “Table split(…) for the masses”

common_schema 1.1 released: split(), try-catch, killall(), profiling

I’m very happy to announce the release of common_schema, version 1.1 (revision 300).

This version boasts compelling new features: innovative QueryScript syntax, libraries, and views which add to your skills as a DBA, making some maintenance and management tasks a breeze.

  • QueryScript, split statement: automagically break long queries into smaller chunks, avoid long locks and reduce query/transaction overhead
  • QueryScript, try-catch statement: just try { something; } catch { act_on_error; } (a sketch follows this list).
  • killall(): quickly kill connections based on grantee/user/host information.
  • profiling/profiling_last: utility views to assist in query profiling diagnostics
  • One size fits all: a single installer which auto-recognizes available server features and enables respective common_schema features accordingly.
  • QueryScript performance boost
  • much much more…
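
A minimal sketch of try-catch in action, wrapped in the run() routine (the INSERT is an arbitrary example of a statement that may legitimately fail, e.g. on a UNIQUE KEY violation):

CALL common_schema.run('
  try {
    INSERT INTO sakila.rental (rental_date, inventory_id, customer_id, staff_id)
      VALUES (NOW(), 1, 1, 1);
  }
  catch {
    SELECT 1 AS insert_failed;
  }
');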

Not familiar with common_schema? It allows you to do stuff on server side, by selecting from views, calling upon useful routines or writing easy-to-manage scripts.

I’m suggesting that common_schema should be a really-should-have tool to accompany your MySQL install. Did I say “tool”? It’s merely a schema. But it makes for a great framework:

In High Performance MySQL, 3rd edition, Baron Schwartz describes common_schema:

The common_schema is to MySQL as jQuery is to JavaScript

Reviewing highlights for version 1.1:

QueryScript

QueryScript is a scripting language. It sees some major improvements here. I’ve made speed boosts by avoiding temporary tables, using string parsing instead.

Without doubt the two most handy statements added to QueryScript are: Continue reading » “common_schema 1.1 released: split(), try-catch, killall(), profiling”

SQL: selecting top N records per group, another solution

A while back I presented SQL: selecting top N records per group, a “give me the top 5 countries in each continent” type of query, which used an external numbers table and a lot of tedious casting.

Here’s another solution I came up with (*). Still using GROUP_CONCAT (how else?), but no external table and no casting. The query outputs the largest 5 countries (by surface area) per continent.

SELECT
  Continent,
  Name,
  SurfaceArea,
  Population
FROM
  world.Country,
  (
    SELECT 
      GROUP_CONCAT(top_codes_per_group) AS top_codes
    FROM
      (
        SELECT 
          SUBSTRING_INDEX(GROUP_CONCAT(Code ORDER BY SurfaceArea DESC), ',', 5) AS top_codes_per_group
        FROM
          world.Country
        GROUP BY
          Continent
      ) s_top_codes_per_group
  ) s_top_codes
WHERE
  FIND_IN_SET(Code, top_codes)
ORDER BY
  Continent,
  SurfaceArea DESC
;

+---------------+----------------------------------------------+-------------+------------+
| Continent     | Name                                         | SurfaceArea | Population |
+---------------+----------------------------------------------+-------------+------------+
| Asia          | China                                        |  9572900.00 | 1277558000 |
| Asia          | India                                        |  3287263.00 | 1013662000 |
| Asia          | Kazakstan                                    |  2724900.00 |   16223000 |
| Asia          | Saudi Arabia                                 |  2149690.00 |   21607000 |
| Asia          | Indonesia                                    |  1904569.00 |  212107000 |
| Europe        | Russian Federation                           | 17075400.00 |  146934000 |
| Europe        | Ukraine                                      |   603700.00 |   50456000 |
| Europe        | France                                       |   551500.00 |   59225700 |
| Europe        | Spain                                        |   505992.00 |   39441700 |
| Europe        | Sweden                                       |   449964.00 |    8861400 |
| North America | Canada                                       |  9970610.00 |   31147000 |
| North America | United States                                |  9363520.00 |  278357000 |
| North America | Greenland                                    |  2166090.00 |      56000 |
| North America | Mexico                                       |  1958201.00 |   98881000 |
| North America | Nicaragua                                    |   130000.00 |    5074000 |
| Africa        | Sudan                                        |  2505813.00 |   29490000 |
| Africa        | Algeria                                      |  2381741.00 |   31471000 |
| Africa        | Congo, The Democratic Republic of the        |  2344858.00 |   51654000 |
| Africa        | Libyan Arab Jamahiriya                       |  1759540.00 |    5605000 |
| Africa        | Chad                                         |  1284000.00 |    7651000 |
| Oceania       | Australia                                    |  7741220.00 |   18886000 |
| Oceania       | Papua New Guinea                             |   462840.00 |    4807000 |
| Oceania       | New Zealand                                  |   270534.00 |    3862000 |
| Oceania       | Solomon Islands                              |    28896.00 |     444000 |
| Oceania       | New Caledonia                                |    18575.00 |     214000 |
| Antarctica    | Antarctica                                   | 13120000.00 |          0 |
| Antarctica    | French Southern territories                  |     7780.00 |          0 |
| Antarctica    | South Georgia and the South Sandwich Islands |     3903.00 |          0 |
| Antarctica    | Heard Island and McDonald Islands            |      359.00 |          0 |
| Antarctica    | Bouvet Island                                |       59.00 |          0 |
| South America | Brazil                                       |  8547403.00 |  170115000 |
| South America | Argentina                                    |  2780400.00 |   37032000 |
| South America | Peru                                         |  1285216.00 |   25662000 |
| South America | Colombia                                     |  1138914.00 |   42321000 |
| South America | Bolivia                                      |  1098581.00 |    8329000 |
+---------------+----------------------------------------------+-------------+------------+

The conditions by which we nominate our selected rows are the ORDER BY SurfaceArea DESC inside GROUP_CONCAT and the count of 5 passed to SUBSTRING_INDEX: hence, the 5 largest countries per continent.
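
One practical caveat: GROUP_CONCAT output is silently truncated at group_concat_max_len, which defaults to a mere 1024 bytes; with many groups or longer codes, raise the session limit before running the query:

SET SESSION group_concat_max_len = 1024 * 1024;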

Continue reading » “SQL: selecting top N records per group, another solution”

Ways to export MySQL result set to file on client side

Problem: you wish to write table data to file, but you wish to do so on client side.

SELECT … INTO OUTFILE writes the file on server. What are your options on client?

1. mysql client

If you have direct access from your client machine to your DB server machine, and can connect via mysql client, you get a very customizable file write:

bash$ mysql -h db_host --execute "SELECT * FROM my_table" > /tmp/output.txt

The above writes fancy tables, so you probably want to:

bash$ mysql -h db_host --execute "SELECT * FROM my_table" --batch > /tmp/output.txt

Also try the following, which turns off headers:

bash$ mysql -h db_host --execute "SELECT * FROM my_table" --silent --raw > /tmp/output.txt
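
For instance, for XML output (--xml is a standard mysql client flag; the file name is illustrative):

bash$ mysql -h db_host --execute "SELECT * FROM my_table" --xml > /tmp/output.xml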

More options include the --xml or --html formats. Continue reading » “Ways to export MySQL result set to file on client side”

Things that can’t (and some that can) be done from within a MySQL stored routine

I’m doing a lot of stored routine programming lately, working on common_schema. In particular, I’m touching the extremes of what’s possible. Some things just can’t be done from within a stored routine. Here’s a list of things that can’t be done:

  • Cursor for SHOW statements: can’t be done — this is explicitly blocked from operating (it once used to work).
  • Get detailed error information on exceptions: apparently 5.6 has support for this (see the sketch after this list); 5.1 and 5.5 do not.
  • Change binlog_format: this is obvious, if you think about it. binlog_format dictates how the routine itself is replicated in the first place.
  • Set sql_log_bin. Again, this makes sense.
  • Work out different results depending on current machine. For example, you can’t have a routine that returns with ‘master’ on the master and with ‘slave’ on the slave. That is, not under any condition. Consider: if Row Based Replication is used, you don’t actually have a routine executing on the slave. I’m happy to be proven wrong on this.
  • Know what database was in use by calling code. The routine executes within the context of the database where it is defined. But you can’t tell what database was in use just a couple milliseconds before.
  • Likewise, know what sql_mode was in use by calling code. Stored routines have their own sql_mode – the one they were created with. No way to check up on the calling stack.
  • And you can’t USE another database (database as in schema). USE is a client command.
  • Reconnect after failure (kind of obvious, isn’t it?)
  • Connect to other servers (not so obvious to SQL Server DBAs). You can’t issue queries on other servers. Bummer.
  • Shut down the server
  • Fork (you’re in a connection, you can’t issue a new connection from your own connection)
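
Regarding detailed error information: the 5.6 support alluded to above is the GET DIAGNOSTICS statement. A minimal sketch, with illustrative names:

DELIMITER $$
CREATE PROCEDURE test_diagnostics()
BEGIN
  DECLARE err_no INT DEFAULT 0;
  DECLARE err_msg TEXT DEFAULT '';
  DECLARE CONTINUE HANDLER FOR SQLEXCEPTION
  BEGIN
    -- As of 5.6, the handler can read the error code and message:
    GET DIAGNOSTICS CONDITION 1
      err_no = MYSQL_ERRNO, err_msg = MESSAGE_TEXT;
  END;

  -- Provoke an error:
  DROP TABLE no_such_table;
  SELECT err_no, err_msg;
END $$
DELIMITER ;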

Continue reading » “Things that can’t (and some that can) be done from within a MySQL stored routine”

MySQL error handling on server side: a NO GO!

There is no reasonable way to catch and diagnose errors on server side. It is nearly impossible to know exactly what went wrong.

To illustrate, consider the following query:

INSERT INTO my_table (my_column) VALUES (300);

What could go wrong with this query?

  • We might hit a UNIQUE KEY violation
  • Or a FOREIGN KEY violation
  • my_column could be TINYINT UNSIGNED, and with strict sql_mode this makes for out-of-range
  • Or, similarly, it could be an ENUM('2','3','5','8')

Is that it? Not remotely:

  • This could be a read-only MyISAM table
  • We may have issued a LOCK TABLES my_table READ — this violates our lock
  • Or this could be an InnoDB table, and this INSERT would make for a deadlock
  • Or we have read_only=1 configuration
  • Or the user is not allowed access to this table
  • Or the table does not exist
  • Or the column does not exist

Or… I’m pretty sure there could be many other issues.

Now, if I write a Java program, perhaps using Hibernate, I get the error nicely wrapped up in a SQLException object, with easy access to error code and error message.

But can I have the same on server side? No.
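
Pre-5.6, the best a stored routine can do is declare a blanket handler, which records that some error occurred, and nothing more. A minimal sketch (procedure name illustrative; my_table as above):

DELIMITER $$
CREATE PROCEDURE insert_blindly()
BEGIN
  DECLARE error_occurred TINYINT UNSIGNED DEFAULT 0;
  -- Fires on any SQL error, yet exposes no error code, message or SQLSTATE:
  DECLARE CONTINUE HANDLER FOR SQLEXCEPTION
    SET error_occurred = 1;

  INSERT INTO my_table (my_column) VALUES (300);

  -- We know whether the INSERT failed, but not why:
  SELECT error_occurred;
END $$
DELIMITER ;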

Take a look at the following code: Continue reading » “MySQL error handling on server side: a NO GO!”

MySQL joins: ON vs. USING vs. Theta-style

What is the difference between the following three syntaxes?

SELECT * FROM film JOIN film_actor ON (film.film_id = film_actor.film_id)
SELECT * FROM film JOIN film_actor USING (film_id)
SELECT * FROM film, film_actor WHERE film.film_id = film_actor.film_id

The difference is mostly syntactic sugar, but with a couple of interesting notes.

To put names to things, the first two are called “ANSI-style” while the third is called “Theta-style”.

Theta style

In the FROM clause, tables are listed as if in a Cartesian product, and the WHERE clause specifies how the join should take place.
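
For example (using the sakila schema, as above), join conditions intermix freely with filtering conditions, which is where readability suffers; an illustrative query:

SELECT film.title, actor.last_name
FROM film, film_actor, actor
WHERE film.film_id = film_actor.film_id
  AND actor.last_name = 'DAVIS'
  AND film_actor.actor_id = actor.actor_id
  AND film.length > 120;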

This is considered to be the “old” style. It is somewhat confusing to read. Consider the following query: Continue reading » “MySQL joins: ON vs. USING vs. Theta-style”

Notes on row based replication

MySQL’s Row Based Replication (RBR) succeeds (though does not replace) Statement Based Replication (SBR), as of version 5.1.

Anyone who is familiar with replication data drift (the unexplained, growing data difference between master & slave) might wish to look into row based replication. On multiple servers I’m handling, the change to RBR has eliminated (to the best of my findings) replication data drift.

This is easily explained: RBR writes the actual affected rows into the binary log. You no longer need to hope that the statement

DELETE FROM my_table ORDER BY my_column LIMIT 100

acts deterministically on all servers (is my_column UNIQUE?). With row based replication, the particular rows deleted on the master are likewise deleted on the slave.
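
For reference, switching a running server to RBR is a matter of the dynamic binlog_format variable (requires SUPER; the global setting affects new sessions only):

-- Check the current format, then switch to row based logging:
SHOW GLOBAL VARIABLES LIKE 'binlog_format';
SET GLOBAL binlog_format = 'ROW';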

Here are three notes on RBR: Continue reading » “Notes on row based replication”

DELETE, don’t INSERT

I have just read INSERT, Don’t DELETE by Aaron Brown, and have a lengthy response, which is why I write this post instead of commenting on said post.

I wish to offer my counter thought and suggest that DELETEs are probably the better choice.

Aaron suggests that, when one wishes to purge rows from some table, a trick can be used: instead of DELETEing unwanted rows, one INSERTs the “good” rows into a new table, then switches over with RENAME, roughly as sketched below (but please read the referenced post for complete details).
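
A minimal sketch of the trick under discussion (table and column names are illustrative; the referenced post has the authoritative version):

CREATE TABLE my_table_new LIKE my_table;
INSERT INTO my_table_new SELECT * FROM my_table WHERE should_be_kept = 1;
RENAME TABLE my_table TO my_table_old, my_table_new TO my_table;
DROP TABLE my_table_old;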

I respectfully disagree on several points discussed.

Lockdown

The fact that one needs to block writes while the new table is created is problematic: you essentially need to turn off parts of your application. The post suggests one could use a slave, but this solution is far from trivial as well. To switch over, you yet again need to turn off access to the DB, even if for a short while. Continue reading » “DELETE, don’t INSERT”