Making UUID() and RAND() replication safe

MySQL’s UUID() and RAND() functions both produce non-deterministic results: RAND() is pseudo-random, and UUID()’s result is moreover bound to the host on which it executes. For this reason, both are unsafe to replicate with STATEMENT binlog format. As an example, consider:

master> create table test.uuid_test (id int, u varchar(64));

master> insert into test.uuid_test values (1, UUID());
Query OK, 1 row affected, 1 warning (0.03 sec)

master> select * from test.uuid_test;
+------+--------------------------------------+
| id   | u                                    |
+------+--------------------------------------+
|    1 | 7e3596d8-56ac-11e4-b284-3c970ea31ea8 |
+------+--------------------------------------+

The warning we got on the insert directly relates to the following inconsistency on a slave:

slave1> select * from test.uuid_test;
+------+--------------------------------------+
| id   | u                                    |
+------+--------------------------------------+
|    1 | 7e379d63-56ac-11e4-8477-3c970ea31ea8 |
+------+--------------------------------------+

The data on the slave is clearly inconsistent with the master’s. The slave, replicating via STATEMENT binlog format, re-executes the INSERT command and gets a different UUID value.

External

One solution to the above is to generate the UUID value from your application. By the time MySQL gets the INSERT statement, the UUID value is a constant string, as far as MySQL is concerned.
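
For illustration, by the time the statement reaches MySQL it would simply look like the following (the UUID literal below is a made-up placeholder):

insert into test.uuid_test values (2, '11111111-2222-3333-4444-555555555555');

This replicates safely under STATEMENT format, since there is nothing left for the slave to re-evaluate.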

Internal

However, there’s a way to do it from within MySQL, by decoupling the UUID() generation from the INSERT statement. It takes a session variable. Consider: Continue reading » “Making UUID() and RAND() replication safe”
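
As a teaser of the idea, here is a minimal sketch (the variable name is illustrative):

set @safe_uuid := UUID();
insert into test.uuid_test values (2, @safe_uuid);

The value of a user-defined variable referenced by a statement is written to the binary log along with that statement, so the slave inserts the very same string rather than generating a new one.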

Cheating mysqlsandbox to install MariaDB 10.0

mysqlsandbox is version-aware. The new 5.6 version, for example, requires special care because of its new InnoDB system tables and otherwise modified system tables.

At this moment, it will refuse to install MariaDB 10.0 (alpha):

bash$ make_sandbox /tmp/mariadb-10.0.1-linux-x86_64.tar.gz 
unpacking /tmp/mariadb-10.0.1-linux-x86_64.tar.gz
unsupported version 10.0

This is perfectly legitimate, and I have no quarrel with this fact. However, I did want to set up MariaDB 10.0 as a sandbox.

As it turns out, mysqlsandbox relies on MySQL package naming conventions to detect the version: the fact that a tarball is named mariadb-10.0.1-linux-x86_64.tar.gz and extracts into mariadb-10.0.1-linux-x86_64 tells mysqlsandbox that this is version 10.0.1. Easy enough to cheat, then: pick a version that mysqlsandbox will work with, and which is sandbox-compatible with your own, and do the renames. For that matter, I picked 5.5, and actually made it 5.5.100: Continue reading » “Cheating mysqlsandbox to install MariaDB 10.0”

Looking for a hack: share data between MySQL sessions

I’m looking for a way to share data between two MySQL connections/sessions. Obviously tables are the trivial answer, but for reasons of security (possibly insufficient privileges) I wish to avoid that.

The type of data to be passed can vary. Ideally I would be able to pass multiple pieces of information (dates, texts, integers, etc.) around. If that’s impossible, I would make do with texts only, and failing that, I could make do with a single (but reasonably long) text.

There is a way to do so: writing to the file system (SELECT ... INTO OUTFILE + LOAD_FILE()). However, I wish to avoid it, since writing to files from within MySQL requires creating a new file each time (no overwriting, no purging), which litters the file system.
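
For completeness, a sketch of that file-based approach (the path is illustrative; both statements require the FILE privilege):

-- Session A writes the value; this fails if the file already exists:
SELECT 'some data to share' INTO OUTFILE '/tmp/session_share_0001.txt';

-- Session B reads it back:
SELECT LOAD_FILE('/tmp/session_share_0001.txt');

Every new piece of data means a new file name, which is exactly the littering problem described above.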

So: any other tricks? Is there some way to pass data via GET_LOCK()/RELEASE_LOCK() (none that I can see other than Morse code)?

Is there some global variable that is unused, can be changed dynamically, and has enough space? (I couldn’t find one)

I appreciate any input.

SQL: selecting top N records per group, another solution

A while back I presented SQL: selecting top N records per group, a “give me the top 5 countries in each continent” type of query, which used an external numbers table and a lot of tedious casting.

Here’s another solution I came up with (*). Still using GROUP_CONCAT (how else?), but no external table and no casting. The query outputs the largest 5 countries (by surface area) per continent.

SELECT
  Continent,
  Name,
  SurfaceArea,
  Population
FROM
  world.Country,
  (
    SELECT 
      GROUP_CONCAT(top_codes_per_group) AS top_codes
    FROM
      (
        SELECT 
          SUBSTRING_INDEX(GROUP_CONCAT(Code ORDER BY SurfaceArea DESC), ',', 5) AS top_codes_per_group
        FROM
          world.Country
        GROUP BY
          Continent
      ) s_top_codes_per_group
  ) s_top_codes
WHERE
  FIND_IN_SET(Code, top_codes)
ORDER BY
  Continent,
  SurfaceArea DESC
;

+---------------+----------------------------------------------+-------------+------------+
| Continent     | Name                                         | SurfaceArea | Population |
+---------------+----------------------------------------------+-------------+------------+
| Asia          | China                                        |  9572900.00 | 1277558000 |
| Asia          | India                                        |  3287263.00 | 1013662000 |
| Asia          | Kazakstan                                    |  2724900.00 |   16223000 |
| Asia          | Saudi Arabia                                 |  2149690.00 |   21607000 |
| Asia          | Indonesia                                    |  1904569.00 |  212107000 |
| Europe        | Russian Federation                           | 17075400.00 |  146934000 |
| Europe        | Ukraine                                      |   603700.00 |   50456000 |
| Europe        | France                                       |   551500.00 |   59225700 |
| Europe        | Spain                                        |   505992.00 |   39441700 |
| Europe        | Sweden                                       |   449964.00 |    8861400 |
| North America | Canada                                       |  9970610.00 |   31147000 |
| North America | United States                                |  9363520.00 |  278357000 |
| North America | Greenland                                    |  2166090.00 |      56000 |
| North America | Mexico                                       |  1958201.00 |   98881000 |
| North America | Nicaragua                                    |   130000.00 |    5074000 |
| Africa        | Sudan                                        |  2505813.00 |   29490000 |
| Africa        | Algeria                                      |  2381741.00 |   31471000 |
| Africa        | Congo, The Democratic Republic of the        |  2344858.00 |   51654000 |
| Africa        | Libyan Arab Jamahiriya                       |  1759540.00 |    5605000 |
| Africa        | Chad                                         |  1284000.00 |    7651000 |
| Oceania       | Australia                                    |  7741220.00 |   18886000 |
| Oceania       | Papua New Guinea                             |   462840.00 |    4807000 |
| Oceania       | New Zealand                                  |   270534.00 |    3862000 |
| Oceania       | Solomon Islands                              |    28896.00 |     444000 |
| Oceania       | New Caledonia                                |    18575.00 |     214000 |
| Antarctica    | Antarctica                                   | 13120000.00 |          0 |
| Antarctica    | French Southern territories                  |     7780.00 |          0 |
| Antarctica    | South Georgia and the South Sandwich Islands |     3903.00 |          0 |
| Antarctica    | Heard Island and McDonald Islands            |      359.00 |          0 |
| Antarctica    | Bouvet Island                                |       59.00 |          0 |
| South America | Brazil                                       |  8547403.00 |  170115000 |
| South America | Argentina                                    |  2780400.00 |   37032000 |
| South America | Peru                                         |  1285216.00 |   25662000 |
| South America | Colombia                                     |  1138914.00 |   42321000 |
| South America | Bolivia                                      |  1098581.00 |    8329000 |
+---------------+----------------------------------------------+-------------+------------+

The conditions by which we nominate our selected rows appear inside the query: the ordering condition is SurfaceArea DESC and the number of rows is 5, hence the 5 largest countries per continent.
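
To see the mechanism at work, it may help to run the innermost aggregation on its own (with Continent added here for readability):

SELECT
  Continent,
  SUBSTRING_INDEX(GROUP_CONCAT(Code ORDER BY SurfaceArea DESC), ',', 5) AS top_codes_per_group
FROM
  world.Country
GROUP BY
  Continent;

Each row holds a comma-separated list of the five largest country codes in that continent (for Asia, the codes of China, India, Kazakstan, Saudi Arabia and Indonesia); the outer query then concatenates these lists into one and keeps only the countries whose Code passes FIND_IN_SET().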

Continue reading » “SQL: selecting top N records per group, another solution”

Getting rid of huge ibdata file, no dump required, part II

This post continues Getting rid of huge ibdata file, no dump required, part I, where I describe a way of converting your single-tablespace InnoDB database into a file-per-table one, without the pain of exporting and importing everything at once.

In the previous part we put aside the issue of foreign keys. We address this issue now.

What if my InnoDB tables have foreign keys?

MyISAM does not support them, so you can’t just ALTER an InnoDB table to MyISAM and back into InnoDB, and expect everything to work.

Alas, this calls for additional steps (i.e. additional ALTER commands). However, these still fall well under the concept of “do it one table at a time, then take time to catch your breath and let replication catch up”.
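
To make the extra steps concrete, here is a hedged sketch for a single table (schema, table, column and constraint names are all illustrative):

-- 1. Drop the foreign key, after saving its definition (see next section):
ALTER TABLE my_schema.child_table DROP FOREIGN KEY fk_child_parent;

-- 2. The engine round trip from part I, done at the appropriate point of that procedure:
ALTER TABLE my_schema.child_table ENGINE=MyISAM;
ALTER TABLE my_schema.child_table ENGINE=InnoDB;

-- 3. Restore the foreign key:
ALTER TABLE my_schema.child_table
  ADD CONSTRAINT fk_child_parent FOREIGN KEY (parent_id) REFERENCES parent_table (id);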

Save, drop and restore your Foreign Keys setup

You can use common_schema’s sql_foreign_keys to get the full listing and create definitions of your foreign keys. For example, assume we use the sakila database: Continue reading » “Getting rid of huge ibdata file, no dump required, part II”
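
If you would rather not depend on common_schema, a rough substitute (not what the post uses) can be pieced together from INFORMATION_SCHEMA; this simplistic sketch ignores multi-column keys and ON DELETE/ON UPDATE rules:

SELECT
  CONCAT('ALTER TABLE `', TABLE_SCHEMA, '`.`', TABLE_NAME,
         '` DROP FOREIGN KEY `', CONSTRAINT_NAME, '`') AS drop_statement,
  CONCAT('ALTER TABLE `', TABLE_SCHEMA, '`.`', TABLE_NAME,
         '` ADD CONSTRAINT `', CONSTRAINT_NAME,
         '` FOREIGN KEY (`', COLUMN_NAME,
         '`) REFERENCES `', REFERENCED_TABLE_SCHEMA, '`.`', REFERENCED_TABLE_NAME,
         '` (`', REFERENCED_COLUMN_NAME, '`)') AS add_statement
FROM INFORMATION_SCHEMA.KEY_COLUMN_USAGE
WHERE TABLE_SCHEMA = 'sakila'
  AND REFERENCED_TABLE_NAME IS NOT NULL;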

Getting rid of huge ibdata file, no dump required

You have been told (guilty as charged) that the only way to get rid of the huge InnoDB tablespace file (commonly named ibdata1) when moving to innodb_file_per_table is to do a logical dump of your data, completely erase everything, then import the dump.

To quickly reiterate, you can only delete the ibdata1 file when no InnoDB tables exist. Delete this file while any InnoDB table exists, even a table in its own tablespace, and nothing works anymore.
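
Before even considering removing the file, you would want to verify that no InnoDB tables remain; a simple check:

-- Any row returned here means ibdata1 is still in use:
SELECT TABLE_SCHEMA, TABLE_NAME
FROM INFORMATION_SCHEMA.TABLES
WHERE ENGINE = 'InnoDB';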

The problem with the dump-based solution

The impact of doing a logical dump is often overwhelming. Well, the dump itself may be tolerable, but the restore takes much longer. The real pain is that you can’t do this one table at a time: you have to destroy everything before dropping the ibdata1 file; you then have to import everything.

Perhaps the most common scenario is that we do the changes on a slave, so as not to completely shut down our database. This is nice; no one is aware of the shutdown process. However, Houston, we have a problem: we need to make sure the master can retain its binary logs for the duration of the entire process. Continue reading » “Getting rid of huge ibdata file, no dump required”

Auto caching INFORMATION_SCHEMA tables: seeking input

The short version

I have it all working. It’s kind of magic. But there are issues, and I’m not sure it should even exist, and am looking for input.

The long version

In Auto caching tables I presented a hack which allows getting cached or fresh results via simple SELECT queries.

The drive for the above hack was INFORMATION_SCHEMA tables. There are two major problems with INFORMATION_SCHEMA:

  1. Queries on schema-oriented tables such as TABLES, COLUMNS, STATISTICS, etc. are heavyweight. How heavyweight? Enough to lock down your database. Enough to crash your database in some cases.
  2. The data is always generated on the fly, as you request it. Query the COLUMNS table twice, and risk two lockdowns of your database.

The auto-cache mechanism solves issue #2. I have it working, time-based. I have an auto-cache table for each of the heavyweight INFORMATION_SCHEMA tables. Say, every 30 minutes the cache is invalidated. Throughout those 30 minutes, you get a free pass!
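
A hedged sketch of the time-based part only (table, routine and column names are illustrative, not the actual implementation; the real mechanism is invoked by the SELECT itself):

CREATE TABLE auto_cache_tables AS SELECT * FROM INFORMATION_SCHEMA.TABLES;
CREATE TABLE auto_cache_state (last_refresh TIMESTAMP NOT NULL);
INSERT INTO auto_cache_state VALUES (NOW());

DELIMITER $$
CREATE PROCEDURE refresh_tables_cache()
BEGIN
  -- Repopulate only when the cached copy is older than 30 minutes:
  IF (SELECT last_refresh FROM auto_cache_state) < NOW() - INTERVAL 30 MINUTE THEN
    DELETE FROM auto_cache_tables;
    INSERT INTO auto_cache_tables SELECT * FROM INFORMATION_SCHEMA.TABLES;
    UPDATE auto_cache_state SET last_refresh = NOW();
  END IF;
END $$
DELIMITER ;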

The auto-cache mechanism also paves the way to solving issue #1: since it works by invoking a stored routine, I have better control over the way I read INFORMATION_SCHEMA. Thus, I can take advantage of INFORMATION_SCHEMA optimizations. It’s tedious, but not complicated.

For example, if I want to cache the TABLES table, I don’t necessarily read the entire TABLES data in one go. Instead, I can iterate the schemata, get a list of table names per schema, then read full row data for these, table by table. The result? Many, many more SELECTs, but each more optimized, and no one-big-lock-it-all query.
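
The per-schema iteration would look roughly like this (the schema and table constants are illustrative); restricting TABLE_SCHEMA and TABLE_NAME to constants is what lets the server avoid opening everything at once:

-- 1. Get the list of schemata (cheap):
SELECT SCHEMA_NAME FROM INFORMATION_SCHEMA.SCHEMATA;

-- 2. Per schema, list table names only:
SELECT TABLE_NAME FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_SCHEMA = 'sakila';

-- 3. Per table, read the full row:
SELECT * FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_SCHEMA = 'sakila' AND TABLE_NAME = 'film';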

And the problem is…

Continue reading » “Auto caching INFORMATION_SCHEMA tables: seeking input”

Auto caching tables

Is there a way to create a caching table, some sort of a materialized view, such that upon selecting from that table, its data is validated/invalidated?

Hint: yes.

But to elaborate the point: say I have some table data_table. Can I rewrite all my queries which access data_table to read from some autocache_data_table, with nothing else changed in the query itself? No caveats, no additional WHEREs, and still have that autocache_data_table provide the correct data, dynamically updated by some rule of our choice?

And: no crontab, no event scheduler, and no funny triggers on data_table? In such a way that invalidation/revalidation occurs upon SELECT?

Well, yes.

This post is long, but I suggest you read it through to understand the mechanism; it will be worthwhile.

Background

The following derives from my long research on how to provide better, faster and safer access to INFORMATION_SCHEMA tables. It is, however, not limited to this exact scenario, and in this post I provide a simple, general-purpose example. I’ll have more to share about INFORMATION_SCHEMA-specific solutions shortly.

I was looking for a server-side solution which would not require query changes, apart from directing the query to other tables. The solution has to be supported by all standard MySQL installs: no plugins, no special rebuilds. Continue reading » “Auto caching tables”

More MySQL foreach()

In my previous post I showed several generic use cases for foreach(), a new scripting functionality introduced in common_schema.

In this part I present the DBA’s handy syntax for schema and table operations and maintenance.

Confession: while I love INFORMATION_SCHEMA‘s power, I just hate writing queries against it. It’s just so much typing! Just getting the list of tables in a schema makes for this heavy-duty query:

SELECT TABLE_NAME FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_SCHEMA='sakila' AND TABLE_TYPE='BASE TABLE';

When a join is involved this really becomes a nightmare. I think it’s cumbersome, and as a result, many do not remember the names and meanings of the columns, making for “oh, I need to read the manual all over again just to get that query right”. Anyway, that’s my opinion.

A SHOW TABLES statement is easier to type, but cannot be integrated into a SELECT query (though we have a partial solution for that, too), and besides, when filtering out the views, the SHOW statement becomes almost as cumbersome as the one on INFORMATION_SCHEMA.
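
For comparison, here is the view-filtering SHOW variant (still not something you can feed into a SELECT):

SHOW FULL TABLES FROM sakila WHERE Table_type = 'BASE TABLE';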

Which is why foreach() offers handy shortcuts to common iterations on schemata and tables, as follows:

Use case: iterate all databases

call foreach('schema', 'CREATE TABLE ${schema}.event(event_id INT, msg VARCHAR(128))');

In the above we execute a query on each database. Hmmm, maybe not such a good idea to perform this operation on all databases? Let’s filter them:

Use case: iterate databases by name match

call foreach('schema like wordpress_%', 'ALTER TABLE ${schema}.wp_posts MODIFY COLUMN comment_author VARCHAR(96) NOT NULL');

The above will only iterate my WordPress databases (I have several of these), performing an ALTER on wp_posts for each of those databases. Continue reading » “More MySQL foreach()”

MySQL foreach()

A new routine is now available in common_schema, which makes for an easier execution syntax for some operations:

foreach(collection_to_iterate_over, queries_to_execute_per_iteration_step);

To illustrate what it can do, consider:

call foreach('table in sakila', 'ALTER TABLE ${schema}.${table} ENGINE=InnoDB ROW_FORMAT=COMPACT');

call $('schema like shard_%', 'CREATE TABLE ${schema}.messages (id INT)');

call $('2000:2009', 'INSERT IGNORE INTO report (report_year) VALUES (${1})');

$() stands as a synonym for foreach(). I suspect it should look familiar to web programmers.

The idea for foreach() was introduced by Giuseppe Maxia during a correspondence. At first I was skeptical: this isn’t jQuery; this is SQL. Why would I want to use foreach()?

Then Giuseppe provided some use cases, and as I started thinking about it, I found more and more cases where such a tool might considerably reduce scripting overhead and avoid requiring SQL-fu skills. In fact, I have been using it myself for the past few weeks.

I provide examples which I hope will convince the reader of the simplicity of such syntax, showing off the types of input foreach() accepts (query, table search, schema search, set of constants, single or double numbers range) and the types of queries it can execute (single, multiple, using placeholders).

I stress that this is not a replacement for common queries (i.e. it does not come to replace your everyday SELECT, UPDATE, DELETE), but rather aims at administrative tasks. Nevertheless, the last example in this post does provide an interesting insight into possible “normal” use. Continue reading » “MySQL foreach()”