common_schema 2.1 released: advanced & improved split(), persistent script tables, more schema analysis, and (ahem) charts!

July 17, 2013

common_schema 2.1 is released! common_schema is your free & open source companion schema within your MySQL server, providing a function library, scripting capabilities, powerful routines and ready-to-apply information and recommendations.

New and noteworthy in version 2.1:

  • Improved QueryScript split() functionality
  • Persistent tables for QueryScript: no long-held temporary tables
  • Index creation analysis, further range partition analysis
  • grant_access(): allow everyone to use common_schema
  • Ascii charts, google charts
  • debugged_routines: show routines with debug code

Other minor enhancements and bugfixes are not listed.

Here's a breakdown of the above:

split() enhancements

split is one of those parts of common_schema that (should) appeal to every DBA. Break a huge transaction automagically into smaller chunks, and don't worry about how it's done. If you like, throttle execution, or print progress, or...

split enhancements include:

  • A much better auto-detection and selection of the chunking index. split now consults all columns covered by the index, and uses realistic heuristics to decide which UNIQUE KEY on your table is best for the chunking process. A couple of bugs were solved along the way; split is much smarter now.
  • Better support for multi-column chunking keys. You may now use the start/stop parameters even on multi-column keys, passing a comma-delimited list of values for the split operation to start/end with, respectively. Also fixed an issue with nonexistent start/stop values, which are now valid: split simply keeps to the given range.
  • split no longer requires a temporary table open through the duration of its operation. See next section.
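As a quick refresher, a minimal split invocation looks like the following sketch. The schema, table and condition are made up for illustration; consult the split documentation for the full set of options (including start/stop):

```sql
-- Break a huge DELETE into small chunks, pausing between chunks.
-- 'sakila.rental' and the date condition are hypothetical examples.
call common_schema.run("
  split (delete from sakila.rental where rental_date < '2010-01-01')
  {
    throttle 2;
  }
");
```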

Persistent script tables

QueryScript used to rely on several temporary tables for its operation. Thus, a script could hold two or three temporary tables open for its entire execution. For long split operations, for example, this could mean hours or days.

Temporary tables are nice and quick to respond (well, MyISAM tables are, until MySQL 5.7 is out), but they pose an inherent problem: a stopped slave must not be shut down or restarted while replication has an open temporary table. Why? Because the slave forgets about its temporary tables. When it resumes operation, it will not recognize DML issued against the tables it has forgotten. That's why oak-prepare-shutdown is so good for slaves.

When temporary tables are short-lived, this is typically not an issue. But if you are not allowed to restart your slave throughout a 24 hour operation, that's a limitation.
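Before restarting a slave, you can check whether replication currently holds any open temporary tables via a standard MySQL status variable:

```sql
-- A nonzero value means restarting the slave is unsafe:
-- the slave would forget these temporary tables.
SHOW GLOBAL STATUS LIKE 'Slave_open_temp_tables';
```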

As of 2.1, QueryScript does not require long-held temporary tables. In fact, typical scripts do not create any temporary tables at all. A split operation creates and immediately drops a series of temporary tables; these are dropped before the actual split operation even begins. All tables operated on are persistent InnoDB tables.

Result: safer script replication. There's another nice side effect I may take advantage of in a later release: the ability to monitor and control the flow of concurrent scripts.

Schema analysis

Two noteworthy additions to schema analysis views:

  • sql_alter_table now includes the sql_drop_keys & sql_add_keys columns. For each table, you get the SQL statements to create and drop the existing indexes. I developed this when I hit this problem with TokuDB.
  • sql_range_partitions now includes the count_past_partitions & count_future_partitions columns; when your table is partitioned by some type of time range, these tell you how many partitions lie in the past, and how many are yet to be written to. This comes in useful when you want to rotate or otherwise set a retention policy for your range partitions.
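To illustrate, here is a sketch of querying the new columns. The column names above come from the release; the table_schema/table_name filter columns and the 'sakila.rental' example table are my assumptions:

```sql
-- Generate DROP/ADD index statements for a given table.
SELECT sql_drop_keys, sql_add_keys
  FROM common_schema.sql_alter_table
  WHERE table_schema = 'sakila' AND table_name = 'rental';

-- See how many range partitions lie in the past vs. the future.
SELECT table_schema, table_name,
       count_past_partitions, count_future_partitions
  FROM common_schema.sql_range_partitions;
```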


grant_access()

The grant_access() routine GRANTs all accounts on your server SELECT & EXECUTE privileges on common_schema. This is a quick complement to the installation process (though you have to invoke it yourself; it's up to you).
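Invoking it is a one-liner; I'm assuming the routine takes no arguments, as suggested by the description above:

```sql
-- Allow every account on this server to read and execute common_schema.
call common_schema.grant_access();
```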

Ascii/google line charts

Laugh all you want! Then find out how cool it is to get (poor man's) instant charting like this:

mysql> call line_chart('select ts, com_insert_psec, com_update_psec from mycheckpoint.sv_hour limit 100', 'insert per second, update per second') ;
| y_scale | chart                                                                                                |
| 162     | -#-------------------------------------------------------------------------------------------------- |
| 152     | ---------------------------------------------------------------------------------------------------- |
| 143     | ---------------------------------------------------------------------------------------------------- |
| 134     | ---------------------------------------------------------------------------------------------------- |
| 124     | ---------------------------------------------------------------------------------------------------- |
| 115     | ------------------------------------------------------------#--------------------------------------- |
| 106     | ---------------------------------------------------------------------------------------------------- |
| 96      | -*-------------------------------------------------------------------------------------------------- |
| 87      | ---------------------------------#-------------------------#---------------------------------------- |
| 77      | ---------------------------------------------------------------------------------#------------------ |
| 68      | ---------------------------------------------------------------------------#------------------------ |
| 59      | #-------------------------------#------------------------------------------------------------------- |
| 49      | ---##------#-#-##-#-#--#--###----------------##---------------------------------#-----#---###------- |
| 40      | --#------#--#-#--#-#-##-##----##--###########--######--------#############-*#-##--####-###---####### |
| 31      | *-**--#-#-*-**-**------**--**#-****-**-*****-*******-#---#-*------------**---#--*------------------- |
| 21      | ----*#*#*--*--*--******--**--**----*--*-----*-------**#*#**-************--#-****-******************* |
| 12      | -----*-*-*--------------------------------------------*-*-----------------*------------------------- |
|         | v::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::v |
|         | 2010-10-06 20:00:00                                                              2010-10-10 23:00:00 |
|         |     # insert per second                                                                              |
|         |     * update per second                                                                              |

You can get the same in Google Image Charts format. Yes, the API is deprecated (and has been for a year), but it's still working:

mysql> call google_line_chart('select ts, com_insert_psec, com_update_psec from mycheckpoint.sv_hour limit 100', 'insert per second, update per second') \G


The above produces a Google Image Charts URL, which translates into a chart image.

Throw your own query in. Make the 1st column your ordering column, and the 2nd [, 3rd...] your value columns. Provide your own legend. Watch it instantly. And laugh all you want.

Read more about common_schema's charting routines.


debugged_routines

The new debugged_routines view shows you which routines are currently "compiled with debug mode".
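Listing such routines is a simple SELECT. The view name comes from the release; the routine_schema/routine_name column names are my assumption, following INFORMATION_SCHEMA conventions:

```sql
-- Which stored routines still carry debug code?
SELECT routine_schema, routine_name
  FROM common_schema.debugged_routines;
```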

I will write more on the state of rdebug in a future post.

Try it, get it

  • common_schema 2.1 comes with over 500 tests, and the suite is growing fast.
  • It supports MySQL 5.1, 5.5, 5.6, Percona Server and MariaDB.
  • It has superb documentation (may I say so?) with lots of examples and drill-downs into edge cases.

You are free to download and use it.

Your feedback is welcome! Indeed, many of this version's improvements originated with community feedback.



  • HB

    A quick question about extract_json_value: it is said that it should be used with care because of performance hits. When using MySQL through another backend such as PHP or ASP.NET, would you advise fetching the whole raw resultset and decoding the JSON there instead, avoiding common_schema? Have you run any benchmarks comparing one approach with the other?

    Sorry if this isn't the right place to ask, but I'm new to common_schema, didn't see any message board related to it, and I'm currently on a restricted mobile network connection.

  • @HB
    It depends on whether you need to filter your rows based on a JSON value. If not, there's no point in using common_schema for that.

    In any case, benchmarking is up to you: but do come back and let us know how it goes!
