common_schema talk at Percona Live

Are you attending Percona Live?

Allow me to suggest you attend the “Common Schema: a framework for MySQL server administration” session on April 12, 14:00 – 14:50 @ Ballroom F.

This talk is by none other than Roland Bouman. Roland co-authored parts of common_schema, and is a great speaker.

I have a personal interest, of course, being the author of most of the components in common_schema. I would like to convert you to a supporter of this project. I know a few very smart people who think this project is an important tool. I would like more people to get to know it. Eventually, I would like developers and DBAs alike to consider it an inseparable part of any MySQL installation.

Then I shall have world domination, Bwa ha ha!

PS,

Have fun! I will unfortunately not be attending myself this year. Having been on the program committee, I can tell it’s going to be a great conference!

Thoughts on using MySQL plugins

I’m sharing some thoughts on the viability of MySQL plugins. These follow a particular experience I’ve had, which is thankfully resolved, but which left a bitter taste in my mouth.

MySQL plugins are a tricky business. To create a plugin, you must compile it against the MySQL version your users will run it with. In theory, that means compiling it against every existing MySQL version, minor releases included (I’m not sure whether a given build may sometimes, or even most times, work across minor versions).

But, most importantly, you must adapt your plugin to major versions.

Another option for plugin makers is to not ship binaries at all, but rather provide the source code and let the end user compile it against her own MySQL version. But here, too, the code must be compatible with whatever changes the new MySQL version introduces.

And if it doesn’t compile with the new MySQL version?

That’s what happened to me. The particular case at hand was SphinxSE, a plugin which serves as a bridge between MySQL and a Sphinx Search server. I’ve been using it for years and was happy with it. But, as it happened, it took well over a year for Sphinx to compile with MySQL 5.5. This meant I was unable to upgrade my 5.1 installation to 5.5, something I had been aiming to do for quite a while. Continue reading » “Thoughts on using MySQL plugins”

sphinx, sphinx_snippets() & MySQL 5.5

I’ve written a patch which completes Sphinx’s integration with MySQL 5.5.

Up until a couple months ago, Sphinx would not compile with MySQL 5.5 at all. This is, thankfully, resolved as of Sphinx 2.0.3.

However, to my dismay, I found out that it only partially works: the sphinx_snippets() user-defined function is not included within the plugin library. After some quick poking I discovered that it was not added to the build, and, when added, would not compile.

I rely on sphinx_snippets() quite a lot, and like it. Eventually I wrote the fix to snippets_udf.cc which allows it to compile and run with a MySQL 5.5 server.

Here are the changes for the 2.0.4 version of Sphinx:

Replace your 2.0.4 files with these two and get on compiling your MySQL server.

Compilation guide

For completeness, here’s how to compile Percona Server 5.5 with Sphinx 2.0.4 including the above patches: Continue reading » “sphinx, sphinx_snippets() & MySQL 5.5”

Auto caching INFORMATION_SCHEMA tables: seeking input

The short version

I have it all working. It’s kind of magic. But there are issues, and I’m not sure it should even exist, and am looking for input.

The long version

In Auto caching tables I presented a hack which allows getting either cached or fresh results via simple SELECT queries.

The driving force behind the above hack was INFORMATION_SCHEMA tables. There are two major problems with INFORMATION_SCHEMA:

  1. Queries on schema-oriented tables such as TABLES, COLUMNS, STATISTICS, etc. are heavyweight. How heavyweight? Enough to lock down your database; enough, in some cases, to crash it. See the example below.
  2. The data is always generated on the fly, as you request it. Query the COLUMNS table twice, and risk two lockdowns of your database.
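To make issue #1 concrete: an innocent-looking query such as the following (just a sketch; any unfiltered read of TABLES behaves this way) may force the server to open, and possibly evict from its caches, every single table in every single schema:

SELECT TABLE_SCHEMA, TABLE_NAME, ENGINE, TABLE_ROWS
FROM INFORMATION_SCHEMA.TABLES;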

The auto-cache mechanism solves issue #2. I have it working, time based: there is an auto-cache table for each of the heavyweight INFORMATION_SCHEMA tables, and, say, every 30 minutes the cache is invalidated. Throughout those 30 minutes, you get a free pass!
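As an illustration (table and column names here are hypothetical, not the actual common_schema implementation), the time-based staleness test boils down to something like:

SELECT (last_refresh < NOW() - INTERVAL 30 MINUTE) AS cache_is_stale
FROM _cache_metadata
WHERE cached_table = 'tables_autocache';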

The auto-cache mechanism also paves the road to solving issue #1: since it works by invoking a stored routine, I have better control over the way I read INFORMATION_SCHEMA. Thus, I can take advantage of INFORMATION_SCHEMA optimizations. It’s tedious, but not complicated.

For example, if I want to cache the TABLES table, I don’t necessarily read the entire TABLES data in one go. Instead, I can iterate the schemata, get a list of table names per schema, then read full row data for these, table by table. The result? Many, many more SELECTs, but better optimized ones, and no one-big-lock-it-all query.
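For example (using the ‘sakila’ sample schema for illustration), the iteration issues queries of the following form; the explicit conditions on TABLE_SCHEMA and TABLE_NAME allow the server to avoid opening all tables at once:

SELECT TABLE_NAME
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_SCHEMA = 'sakila';

SELECT *
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_SCHEMA = 'sakila' AND TABLE_NAME = 'film';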

And the problem is…

Continue reading » “Auto caching INFORMATION_SCHEMA tables: seeking input”

Auto caching tables

Is there a way to create a caching table, some sort of a materialized view, such that upon selecting from that table, its data is validated/invalidated?

Hint: yes.

But to elaborate the point: say I have some table data_table. Can I redirect all my queries which access data_table to read from some autocache_data_table, with nothing else changed in the query itself? No caveats, no additional WHEREs, and still have that autocache_data_table provide the correct data, dynamically updated by some rule of our choice?

And: no crontab, no event scheduler, and no funny triggers on data_table? In such a way that invalidation/revalidation occurs upon SELECT?

Well, yes.

This post is long, but I suggest you read it through to understand the mechanism; it will be worthwhile.
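Before diving in, here is a rough approximation of the logic involved, just to fix ideas. Note it relaxes the “plain SELECT” requirement (which the real mechanism does preserve), and all names are hypothetical:

DELIMITER $$
CREATE PROCEDURE select_autocache_data()
BEGIN
  -- Rebuild the cache if it is older than 30 minutes:
  IF (SELECT last_refresh < NOW() - INTERVAL 30 MINUTE
      FROM autocache_metadata WHERE cached_table = 'data_table')
  THEN
    TRUNCATE TABLE autocache_data_table;
    INSERT INTO autocache_data_table SELECT * FROM data_table;
    UPDATE autocache_metadata SET last_refresh = NOW()
      WHERE cached_table = 'data_table';
  END IF;
  -- Either way, return the (now fresh enough) cached data:
  SELECT * FROM autocache_data_table;
END $$
DELIMITER ;

The actual mechanism achieves the same validate-then-read flow with a plain SELECT, as the post goes on to describe.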

Background

The following derives from my long research on how to provide better, faster and safer access to INFORMATION_SCHEMA tables. It is, however, not limited to this exact scenario, and in this post I provide a simple, general-purpose example. I’ll have more to share about INFORMATION_SCHEMA-specific solutions shortly.

I was looking for a server side solution which would not require query changes, apart from directing the query at other tables. The solution has to be supported by all standard MySQL installs; so: no plugins, no special rebuilds. Continue reading » “Auto caching tables”

MySQL monitoring: storing, not caching

I’ve followed with interest Baron’s Why don’t our new Nagios plugins use caching? and Sheeri’s Caching for Monitoring: Timing is Everything. I wish to present my take on this, from mycheckpoint’s point of view.

So mycheckpoint works in a completely different way. On one hand, it doesn’t bother with caching. On the other hand, it doesn’t bother with re-reads of data.

There are no staleness issues, the data is as consistent as it can get (you can never get a completely atomic read of everything in MySQL), and you can run as many calculations as you want at the price of a single monitoring sample. As in Sheeri’s example, you can compute Threads_connected/max_connections*100, mix status variables, system variables, meta-variables (e.g. Seconds_behind_master), user-created variables (e.g. number of purchases in your online shop), etc.

mycheckpoint’s concept is to store data. And store it in relational format. That is, INSERT it into a table.

A sample run generates a row which lists all status, server, OS, user, and meta variables. It’s a huge row, with hundreds of columns: columns like threads_connected, max_connections, innodb_buffer_pool_size, seconds_behind_master, etc.

mycheckpoint hardly cares about these columns; it identifies them dynamically. Have you just upgraded to MySQL 5.5? Oh, there’s a new bunch of server and status variables? No problem: mycheckpoint will notice it doesn’t have the matching columns and will add them via ALTER TABLE. There you go, now we have a place to store them.
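Conceptually (the column chosen here is just an illustration, using a server variable newly introduced in 5.5), the adjustment is nothing more than:

ALTER TABLE status_variables
  ADD COLUMN innodb_buffer_pool_instances BIGINT;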

Running a formula like Threads_connected/max_connections*100 is as easy as issuing the following query:

SELECT Threads_connected/max_connections*100 FROM status_variables WHERE id = ...

Hmmm. This means I can run this formula on the most recent row I’ve just added. But wait, this also means I can run this formula on any row I’ve ever gathered. Continue reading » “MySQL monitoring: storing, not caching”

Call for Nominations for 2012 MySQL Community Awards

This post complements Henrik’s Call for Nominations for 2012 MySQL Community Awards.

Recap: we keep the tradition of awarding MySQL community members for their notable contributions to the MySQL ecosystem.

Previously, the awards were given by MySQL AB/Sun. Later on they were given by the community itself, as will be the case this year, when the awards are presented during the Percona Live MySQL Conference & Expo in Santa Clara this April.

Henrik describes in detail the three categories: community contributor, application, and corporate contributor of the year.

A bit more about the categories

To add to Henrik’s description of the categories, keep in mind the following:

  • Community member would be a person. That person could be working by himself/herself, or could be part of some corporation; it does not matter. What matters is the person’s contribution.
  • Application: some code or project which either enhances/complements MySQL (e.g. a replication/HA solution) or uses MySQL. In the latter case, it is important that MySQL’s role in the application is significant. For example, some popular site built with some CMS using MySQL may not qualify if it could run just the same with PostgreSQL or other databases, or if the owners are not actually aware of, or at all mindful about, the specific database technology they are using.
  • Corporate: we’re still figuring this one out. The general idea is to acknowledge a corporation which, one way or another, did a good thing for the MySQL ecosystem or the community.

Continue reading » “Call for Nominations for 2012 MySQL Community Awards”

MySQL/QueryScript use case: DELETE all but top N records per group

Some administrative tasks can be simplified by using common_schema/QueryScript. I’m collecting a bunch of these for the documentation. Here’s one example:

The DBA/developer has the task of retaining only the top 3 most populated countries per continent. That is, she has to DELETE the 4th, 5th, 6th, … most populated countries in each continent.

Is it possible to work this out with a single query? Yes. But the query is not pretty. In fact, it is quite complicated, and involves either unintuitive subqueries or unintuitive hacks. A normal DBA would not want to write, nor maintain, this kind of query, unless she’s a top-notch geek, which is fine. A sketch of such a query follows below.
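To illustrate, here is a sketch of what the single-query approach might look like (not necessarily the exact query; it wraps a self-referencing subquery in a derived table to work around MySQL’s “can’t specify target table for update” limitation, and it assumes no population ties within a continent):

DELETE FROM world.Country
WHERE (Continent, Population) NOT IN (
  SELECT Continent, Population FROM (
    SELECT c1.Continent, c1.Population
    FROM world.Country c1
    WHERE (
      SELECT COUNT(*) FROM world.Country c2
      WHERE c2.Continent = c1.Continent AND c2.Population > c1.Population
    ) < 3
  ) AS top_3_per_continent
);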

Since this is a one-time job, we just need to get it done. And common_schema/QueryScript provides the intuitive solution: if we read our demand aloud, we realize we want to delete the 4th, 5th, 6th, … most populated countries for each continent.

I present a solution made available by QueryScript, and discuss the ways in which the code overcomes limitations, or simplifies complexity:

var $num_countries_to_delete;
foreach($continent, $num_countries: SELECT continent, COUNT(*) FROM world.Country GROUP BY continent)
{
  if ($num_countries > 3)
  {
    set $num_countries_to_delete := $num_countries - 3;
    DELETE FROM world.Country WHERE Continent = $continent ORDER BY Population ASC LIMIT :$num_countries_to_delete;
  }
}

Discussion

The first thing that should be apparent from the above is that this is a programmatic solution. Queries are declarative, which is why complex ones sometimes look incomprehensible. The above is more straightforward. Continue reading » “MySQL/QueryScript use case: DELETE all but top N records per group”

common_schema rev. 218: QueryScript, throttling, processes, documentation

common_schema, revision 218 is released, with major new features, the top one being server side scripting. Here are the highlights:

  • QueryScript: server side scripting is now supported by common_schema, which acts as an interpreter for QueryScript code.
  • Throttling for queries is now made available via the throttle() function (example below).
  • Enhancements to processlist-related views, including the new slave_hosts view.
  • Inline documentation/help is available via the help() routine.
  • more…
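For example, the throttle() function can be embedded in a long-running query’s select list (a sketch; as I understand it, the argument is the ratio of sleep time to execution time, so a value of 1 roughly doubles the query’s total runtime while halving its instantaneous load):

SELECT Id, Name, Population, common_schema.throttle(1)
FROM world.City
ORDER BY Population DESC;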

QueryScript

common_schema provides a QueryScript implementation for MySQL. You can run server side scripts, interpreted by common_schema, which allow for easy syntax and greater power than was previously available on the MySQL server. For example:

foreach($table, $schema, $engine: table like '%')
  if ($engine = 'ndbcluster')
    ALTER ONLINE TABLE :$schema.:$table REORGANIZE PARTITION;

QueryScript includes flow control, conditional branching, variables & variable expansion, script throttling and more.

Read more on common_schema’s QueryScript implementation.

Continue reading » “common_schema rev. 218: QueryScript, throttling, processes, documentation”

QueryScript: SQL scripting language

Introducing QueryScript: a programming language aimed at SQL scripting, seamlessly combining scripting power such as flow control & variables with standard SQL statements or RDBMS-specific commands.

QueryScript is available for MySQL via common_schema, which adds MySQL-specific usage.

What does QueryScript look like? Here are a few code samples:

Turn a bulk DELETE operation into smaller tasks, throttling in between:

while (DELETE FROM archive.events WHERE ts < CURDATE() LIMIT 1000)
{
  throttle 2;
}

Convert all InnoDB tables in the ‘sakila’ database to compressed format:

foreach ($table, $schema, $engine: table in sakila)
{
  if ($engine = 'InnoDB')
    ALTER TABLE :$schema.:$table ENGINE=InnoDB ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=8;
}

Shard your data across multiple schemata:

foreach($shard: {USA, GBR, JAP, FRA})
{
  CREATE DATABASE db_:$shard;
  CREATE TABLE db_:$shard.city LIKE world.City;
  INSERT INTO db_:$shard.city SELECT * FROM world.City WHERE CountryCode = $shard;
}

Continue reading » “QueryScript: SQL scripting language”