common_schema: 1.3: security goodies, parameterized split(), json-to-xml, query checksum

common_schema 1.3 is released and is available for download. New and noteworthy in this version:

  • Parameterized split(): take further control over huge transactions by breaking them down into smaller chunks, now manually tunable if needed
  • duplicate_grantee(): copy+paste existing accounts along with their full set of privileges
  • similar_grants: find which accounts share the exact same set of privileges (i.e. have the same role)
  • json_to_xml(): translate any valid JSON object into its equivalent XML form
  • extract_json_value(): use XPath notation to extract info from JSON data, just as you would from XML
  • query_checksum(): given a query, calculate a checksum on the result set
  • random_hash(): get a 40-hexadecimal-digit random hash, using a reasonably large, changing input

Let’s take a closer look at the above:

Parameterized split()

split() takes your bulk query and automagically breaks it down into smaller pieces. So instead of one huge UPDATE or DELETE or INSERT..SELECT transaction, you get many smaller transactions, each with a smaller impact on I/O, locks and CPU.

As of 1.3, split() gets more exposed: you can have some control over its execution, and you also get a lot of very interesting info during operation.

Here’s an example of split() control:

set @script := "
  split({start:7015, step:2000} : UPDATE sakila.rental SET return_date = return_date + INTERVAL 1 DAY) 
    throttle 1;
";
call common_schema.run(@script);

In the above we choose a split size of 2,000 rows at a time, and we also choose to start at 7015, skipping all rows prior to that value. Just what is that value? It depends on the splitting key (see the next example for just that); in this table we can safely assume it is the rental_id PRIMARY KEY of the table.
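
For comparison, here is a hedged sketch of the default form: the very same statement with the control parameters omitted, letting split() pick the splitting key and chunk size on its own:

set @script := "
  split(UPDATE sakila.rental SET return_date = return_date + INTERVAL 1 DAY)
    throttle 1;
";
call common_schema.run(@script);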

You don’t have to use these control parameters. But they can save you some time and effort. Continue reading » “common_schema: 1.3: security goodies, parameterized split(), json-to-xml, query checksum”

Hierarchical data in INFORMATION_SCHEMA and derivatives

Just how often do you encounter hierarchical data? Consider a table with some parent-child relation, like this classic employee table:

CREATE TABLE employee (
  employee_id INT UNSIGNED PRIMARY KEY,
  employee_name VARCHAR(100),
  manager_id INT UNSIGNED,
  CONSTRAINT `employee_manager_fk` FOREIGN KEY (manager_id) REFERENCES employee (employee_id)
) engine=innodb
;
+-------------+---------------+------------+
| employee_id | employee_name | manager_id |
+-------------+---------------+------------+
|           1 | Rachel        |       NULL |
|           2 | John          |          1 |
|           3 | Stan          |          1 |
|           4 | Naomi         |          2 |
+-------------+---------------+------------+

Questions like “What is the (nested) list of employees managed by Rachel?” or “Get Naomi’s managers chain of command” are classical questions. There are sometimes dependencies: if John leaves, does that mean Naomi loses her job, or does she get promoted, or something else? If John and Stan are on sick leave, does Rachel have any reason to come to work?

Hierarchical data is not limited to a single-table structure. Sometimes it takes a combination of a few tables (it’s just a JOIN) to make out the parent-child relation.

Hierarchical data is difficult to manage with SQL. This is especially true for MySQL, which does not support the WITH RECURSIVE query syntax.
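
To make the pain concrete, here is a sketch against the employee table above: lacking recursion, walking the tree takes one self-join per level, so the depth must be known (or bounded) in advance. This lists subordinates up to two levels below Rachel; each additional level means yet another JOIN.

SELECT e2.employee_name AS level_1, e3.employee_name AS level_2
  FROM employee e1
  LEFT JOIN employee e2 ON e2.manager_id = e1.employee_id
  LEFT JOIN employee e3 ON e3.manager_id = e2.employee_id
  WHERE e1.employee_id = 1
;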

Where can you find hierarchical data?

Even if you do not provide any yourself, MySQL’s INFORMATION_SCHEMA has some for you. Partly obvious, partly implicit, here are some examples: Continue reading » “Hierarchical data in INFORMATION_SCHEMA and derivatives”

Percona Live 2013 schedule released!

I’m happy to report the release of the program schedule for the next Percona Live event, April 2013.

While you can see there are still some “TODO”s, and minor scheduling changes yet to come, the overall reviewing process is complete, and the schedule is sound.

The committee has reviewed over 250 submissions in total, sessions and tutorials. It took two months to review the sessions and finalize the schedule. There were very good submissions, and it wasn’t until I started building the schedule that I realized just how many.

You know the feeling: you’re at a conference, and there’s this time slot where you just can’t make up your mind which session to attend. As it turns out, any way I look at it, I get this at each and every time slot in this schedule. Hopefully all attendees will feel the same.

What can you expect to find in the Santa Clara, April conference?

As always, we get a variety of topics. The first day is tutorials, which I already mentioned, followed by three days of sessions and keynotes.

  • Some topics you can guess: there’s always a place for good backup practices, performance optimizations for your InnoDB deployment, good indexing strategies and more.
  • Open source and emerging technologies are well covered: from Tungsten, Galera/Percona XtraDB Cluster, NDB Cluster to MHA, HAProxy, Hadoop, Percona Toolkit and more.
  • MariaDB has good coverage, as well as Percona Server.
  • Scale topics such as sharding, building large deployments, Big Data.
  • Cloud is well represented, too. Deployments on Amazon, Google Cloud, OpenStack, SkySQL Cloud Data Suite; interesting content on virtualization.
  • Optimizer, better queries, query evaluation plans.
  • Replication, of course, is as always a hot topic, this year in particular due to the anticipated 5.6 release.
  • Speaking of 5.6, a lot of content on that, too. PERFORMANCE_SCHEMA, InnoDB, partitions, GIS and more…
  • Case studies: can you do without them? Get to hear how big companies used product/project X, what it took to make it work, how it helped them grow.
  • Chance to get up to speed on third party solutions: Akiban, Tokutek, ScaleDB…
  • Hardware: SSDs are all the rage. How do you best choose and configure your hardware and deployment?
  • Best practices, security, monitoring, NoSQL…
  • And, I can’t list everything, but a LOT of good, unique content!

I will write some more on specific sessions that I find to be of unique interest.

Finally, this is my first time acting as committee chairman, and so much of this work was new to me; I got to learn it on the fly. Thankfully, the committee is composed of experienced conference veterans, not to mention experts, who gave good advice. I was happy to ask for their advice and happy to accept it. I wish to thank the committee members for their kindness and openness! (That doesn’t mean I’m buying you all your beer, though!)

State of InnoDB Online DDL in MySQL 5.6.9-RC (good news included)

5.6.9-RC is out, and I was curious to see how the online DDL has improved since my 5.6.8 review. I also owe James Day this review, since he came up with results inconsistent with my own.

We both agreed the dataset I was using was too small, but I got similar results even on a larger scale. Then some time passed, and 5.6.9 was announced.

So for the 5.6.9 test I took one of my real tables from production. It is not extremely large: a ~300MB .ibd file, in the following format:

mysql> show create table tbl \G

CREATE TABLE `tbl` (
  `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
  `a` varchar(255) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `w` smallint(11) NOT NULL DEFAULT '0',
  `d` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  `icount` smallint(5) unsigned NOT NULL DEFAULT '0',
  PRIMARY KEY (`id`) KEY_BLOCK_SIZE=8,
  UNIQUE KEY `u_idx` (`a`,`w`,`d`) KEY_BLOCK_SIZE=8,
  KEY `d` (`d`) KEY_BLOCK_SIZE=8
) ENGINE=InnoDB AUTO_INCREMENT=16960441 DEFAULT CHARSET=latin1 ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=16

Got some 2.5M rows in the table; desktop machine, 64-bit Linux, mysqlsandbox.
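
For context, 5.6’s online DDL is driven by the new ALGORITHM and LOCK clauses of ALTER TABLE; here is a hedged sketch of the general form (not necessarily one of the statements in the actual test matrix):

ALTER TABLE tbl
  ADD COLUMN extra_count SMALLINT UNSIGNED NOT NULL DEFAULT 0,
  ALGORITHM=INPLACE, LOCK=NONE;

When you request the algorithm and lock level explicitly like this, the statement fails immediately if they cannot be honored, rather than silently falling back to a blocking table copy.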

I have crossed several DDL statements with several DML statements. The DDL statements in this test are (ALTER TABLE…): Continue reading » “State of InnoDB Online DDL in MySQL 5.6.9-RC (good news included)”

Notes on “5 Tips for migrating your MySQL server to a Galera Cluster”

This post comes as a response to 5 Tips for migrating your MySQL server to a Galera Cluster by Henrik Ingo (Galera), which is a good read. This post is a bit lengthy, which is one of the reasons I didn’t just comment on said post.

The Galera talk in London was indeed well crowded, judging from my own chair. I think there is an obvious interest in Galera, as well as a general thirst for information from real-life experience. I am personally very interested in production stories; either “war stories” or “peace stories” – anything to shed more light on the adoption of this technology.

For disclosure, I have not tried Galera myself as yet, not on production nor on staging machines. But here are a couple of thoughts on Henrik’s post: Continue reading » “Notes on “5 Tips for migrating your MySQL server to a Galera Cluster””

common_schema over traditional scripts

If you are familiar with both openark kit and common_schema, you’ll notice I’ve incorporated some functionality already working in openark kit into common_schema, essentially rewriting what used to be a Python script into SQL/QueryScript.

What was my reasoning for rewriting good code? I wish to explain that, and provide a couple of examples.

I’m generally interested in pushing as much functionality as possible into the MySQL server. When using an external script, one:

  • Needs the right dependencies (OS, Perl/Python version, Perl/Python modules),
  • Needs to provide connection parameters,
  • Needs to get acquainted with a lot of command line options,
  • Is limited by whatever command line options are provided,
  • Has to invoke that script (duh!) to get the work done.

This last bullet is not so trivial: it means you can’t perform some operation with your favorite GUI client, because it has no notion of your Perl script, does not run on the same machine where your Python code resides, and simply can’t run those scripts for you.

With server-side code, functionality is accessible via any client. You run your operation via a query (e.g. CALL some_procedure). That can be done from your GUI client, your command line client, your event scheduler, your cronjob, all equally. You only need access to your MySQL server, which is trivial.
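
As a hedged sketch of what that buys you (borrowing the idle-transaction killer shown in the “Killing InnoDB idle transactions” post further down), the very same one-liner can be pasted into a GUI client, run from the mysql command line client, or scheduled entirely inside the server:

-- assumes the event scheduler is enabled (event_scheduler=ON)
CREATE EVENT kill_idle_innodb_transactions
  ON SCHEDULE EVERY 1 MINUTE
  DO CALL common_schema.eval('SELECT sql_kill_query FROM common_schema.innodb_transactions WHERE trx_idle_seconds >= 10');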

Of course, server-side scripting is limited. Some stuff simply can’t be written solely on the server side. If you want to consult your replicating slave, gracefully take action on a user’s Ctrl+C, or send data over the web, you’ll have to do it with an external tool. There are actually a lot of surprising limitations to things one would assume are possible on the server side. You may already know how frustrated I am by the fact that one can hardly get info out of SHOW commands.

But, when it works, it shines

Let’s review a couple of examples. The first one is nearly trivial; the second, less so. Continue reading » “common_schema over traditional scripts”

Percona Live 2013: initial list of sessions

The committee’s reviewing process for session proposals for the Percona Live 2013 conference is now complete. It is now time for scheduling, checking, double checking, organizing, categorizing and publishing the session schedule. This will take some time. However, we’ve put together a list of 10 sessions already approved.

Now, just building a list of 10 sessions is quite the task: the sessions must be highly rated, and present a broad range of topics, target audiences and speakers… This list is not necessarily “the top 10 sessions you will ever see in the conference”, but rather “a list of 10 great sessions you might want to see in the conference”.

After all, any single attendee cannot see all possible sessions; but we sure want the attendee to have a great selection of talks!

So, without further ado, here are 10 approved sessions for the Percona Live MySQL Conference & Expo 2013.

Killing InnoDB idle transactions

The issue of terminating long-idle open InnoDB transactions has been discussed recently by many. I wish to add my share, by proposing a quick and clean solution via common_schema.

common_schema 1.2 provides the innodb_transactions view, which relies on INNODB_TRX – one of the InnoDB Plugin views in INFORMATION_SCHEMA – as well as on PROCESSLIST, and so is able to determine with certainty that a transaction has been idle for a long time.
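
Here is a hedged sketch of the idea behind the view (not its actual definition): INNODB_TRX knows about open transactions, PROCESSLIST knows whether the owning connection is currently doing anything, and joining the two exposes transactions that are open yet idle:

SELECT p.ID, p.TIME AS idle_seconds, trx.trx_id, trx.trx_started
  FROM INFORMATION_SCHEMA.INNODB_TRX AS trx
  JOIN INFORMATION_SCHEMA.PROCESSLIST AS p ON p.ID = trx.trx_mysql_thread_id
  WHERE p.COMMAND = 'Sleep';

The view wraps this logic up, computes trx_idle_seconds, and generates the KILL statements for you.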

innodb_transactions offers us a sql_kill_query column, which produces a ‘KILL QUERY 12345’ type of value. So we can:

SELECT sql_kill_query FROM innodb_transactions WHERE trx_idle_seconds >= 10; 
+-------------------+
| sql_kill_query    |
+-------------------+
| KILL QUERY 292509 |
| KILL QUERY 292475 |
+-------------------+ 

common_schema‘s useful eval() routine allows us to actually invoke those KILL statements, all in a one-liner:

call eval('SELECT sql_kill_query FROM innodb_transactions WHERE trx_idle_seconds >= 10');

Technical details Continue reading » “Killing InnoDB idle transactions”

Speaking in London: common_schema, DBA’s framework for MySQL

I’m happy to be speaking about common_schema at Percona Live London, Dec 4th, 2012. This will be the first chance for me to speak about common_schema, what I believe to be an extremely useful companion to your MySQL server.

I will show you how common_schema can leverage your SQL skills and give you new powers. Stuff you had to be an SQL-Perl super-expert to work through is easily attainable with common_schema and QueryScript. Some maintenance, security and auditing tasks are now a breeze. Writing server-side scripts can be fun!

What I won’t do:

  • Read the fancy manual aloud
  • Walk you through every possible view, column, function, parameter and script

What I will do:

  • Explain the nature of common_schema, show you the concepts behind it (and it is built with concepts in mind)
  • Provide selected examples (schema analysis, security, processes & transactions); there’s much to go by, but no boring drill-down into all the details.
  • Expose some cool hacks with QueryScript
  • Do very brief (as time allows) live demos.

common_schema is an open source project and released under the BSD license.

Last April Roland Bouman gave a great talk about common_schema. This will not be the same talk; a lot has been developed since then.

I’ll be speaking at a very challenging time slot; the three other talks running concurrently with my own are all great ones; I actually wanted to attend all three of them. Heck, Peter wants to (though, contrary to belief, he can’t fork() himself). So, no hard feelings if you choose another talk during this time slot. But I do know where you live.