openark kit – code.openark.org | Blog by Shlomi Noach

common_schema & openark-kit in the media: #DBHangOps, OurSQL
https://shlomi-noach.github.io/blog/mysql/common_schema-openark-kit-in-the-media-dbhangops-oursql
Wed, 26 Jun 2013 19:46:57 +0000

#DBHangOps

I had the pleasure of joining @DBHangOps today and speaking about common_schema and openark-kit. What was meant to be a 15-minute session turned into 50 — sorry, people, I don't talk as much at home, but when it comes to my pet projects…

I also realized I had been missing out on a great event: DBHangOps is a hangout where you can chat and discuss MySQL and related technologies with friends and colleagues whom you typically only meet at conferences. I will certainly want to attend future events.

Thanks to John Cesario and Geoffrey Anderson, who invited me to talk, and to the friends and familiar faces who attended; I was happy to talk about my work, and very interested in hearing how it's being put to use. We also had time to discuss ps_helper with none other than Mark Leith!

The video is available on Twitter/YouTube.

OurSQL

openark-kit has also been featured on the OurSQL podcast by Sheeri & Gerry, who gave great coverage of some of the tools. I will disclose that more is to come; I'm happy this is in capable hands and look forward to hearing the next episode!

 

openark-kit revision 196 released
https://shlomi-noach.github.io/blog/mysql/opeark-kit-revision-196-released
Tue, 07 May 2013 05:47:22 +0000

This is a long-overdue maintenance release of openark-kit. This release includes bugfixes and some enhancements, mainly to oak-online-alter-table.

Changes / bug fixes in oak-online-alter-table include:

  • Support for keyword-named columns
  • Use of FORCE INDEX, due to MySQL's inability to figure out the chunking key at all times
  • --sleep-ratio option added; allows for sleep time proportional to execution time (as opposed to constant sleep time with --sleep)
  • Support for chunk retries (in case of deadlock) via --max-lock-retries
  • Fixed order of cleanup
  • Fixed bug with verbose messages with non-integer chunking key
  • Fixed bug with single-row tables (people, no need for this tool for single row tables :))
  • Friendly verbose messages to remind you what’s being executed

Changes in oak-chunk-update include:

  • Verbose printing of the query comment, if one exists (friendly printing of what's being executed)

(Do check out QueryScript's split(); it's a simple, server-side solution which works in almost the same way as oak-chunk-update; see the example below.)
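
By way of illustration, here is roughly what such a split statement looks like (the syntax matches the split internals post further down this page; the table and the UPDATE itself are arbitrary examples):

split (UPDATE sakila.rental SET return_date = return_date + INTERVAL 1 DAY)
  throttle 2;

QueryScript breaks the UPDATE into chunk-sized statements on the server side, throttling between chunks, much like oak-chunk-update does from the outside.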

There are more issues and changes not listed here.

Download

openark-kit is released under the new BSD license, and is freely available.

Speaking at Percona Live 2013: common_schema, lightning talks
https://shlomi-noach.github.io/blog/mysql/speaking-at-percona-live-2013-common_schema-lightning-talks
Thu, 11 Apr 2013 05:17:04 +0000

In two weeks' time I will be giving these talks at Percona Live:

  • common_schema: DBA's framework for MySQL: an introduction to common_schema, my evolving server-side solutions project. This will be a revised version of the talk I gave at Percona Live London; I felt some parts of that talk were weak, so I've thrown them out, leaving room for cool stuff. I will discuss common_schema's various views, interesting and useful routines, the power of QueryScript, and give a brief intro to the newcomer rdebug, a debugger and debugging API for MySQL. If you're not familiar with common_schema, it's a good time to pick up on what I (being most biased) consider to be your smart assistant for MySQL maintenance and administration!
  • The query which is the peak of my career: this is a 6-minute lightning talk. You're bound to attend if you're at the community reception (which you are), so I don't need to do any promotion. You've already paid for the ticket and the doors will be locked. No escapees.

As far as I'm concerned, the conference can be closed down the moment I deliver these two talks, and we can all go to the beach.

Wait, no: I will also be at the DotOrg Pavilion in the Exhibit Hall, where I will present common_schema and openark-kit. Come by to hear more about these!

common_schema over traditional scripts
https://shlomi-noach.github.io/blog/mysql/common_schema-over-traditional-scripts
Wed, 12 Dec 2012 11:55:44 +0000

If you are familiar with both openark kit and common_schema, you'll notice I've incorporated some functionality that already existed in openark kit into common_schema, essentially rewriting what used to be a Python script in SQL/QueryScript.

What was my reasoning for rewriting good code? I wish to explain that, and provide a couple of examples.

I'm generally interested in pushing as much functionality as possible into the MySQL server. When using an external script, one:

  • Needs the right dependencies (OS, Perl/Python version, Perl/Python modules).
  • Needs to provide connection parameters.
  • Needs to get acquainted with a lot of command line options.
  • Is limited by whatever command line options are provided.
  • Has to invoke that script (duh!) to get the work done.

This last bullet is not so trivial: it means you can't perform some operation from your favorite GUI client, because it has no notion of your Perl script, does not run on the same machine where your Python code resides, and simply can't run those scripts for you.

With server-side code, functionality is accessible via any client. You run your operation via a query (e.g. CALL some_procedure). That can be done from your GUI client, your command line client, your event scheduler, your cronjob, all equally. You only need access to your MySQL server, which is trivial.

Of course, server-side scripting is limited. Some stuff simply can't be written solely on the server side. If you want to consult a replicating slave, gracefully take action on a user's Ctrl+C, or send data over the web, you'll have to do it with an external tool. There are actually a lot of surprising limitations to things one would assume are possible on the server side. You may already know how frustrated I am by the fact that one can hardly get info out of SHOW commands.

But, when it works, it shines

Let's review a couple of examples. The first one is nearly trivial; the second, less so.

Example: getting AUTO_INCREMENT “free space”

openark kit offers oak-show-limits. It’s a tool that tells you if any of your AUTO_INCREMENT columns are running out of space (and so you might want to ALTER that INT to BIGINT).

It’s a very simple Python script. It gets your MAX(auto_increment_column) FROM tables_with_auto_increment, and compares that MAX value to the column type. It pre-computes:

max_values['tinyint'] = 2**8
max_values['smallint'] = 2**16
max_values['mediumint'] = 2**24
max_values['int'] = 2**32
max_values['bigint'] = 2**64

takes care of SIGNED/UNSIGNED, and does the math. Why is this tool such a perfect candidate for replacement on the server side? For two reasons.

First, it turns out it takes very little effort to build a query which does the same, in which case it is also easy to build a view which provides the same.
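
Here is a minimal sketch of that idea (this is not the actual common_schema view; it uses the table's AUTO_INCREMENT counter as an approximation of the MAX() value the Python script computes):

-- Sketch: approximate AUTO_INCREMENT capacity used, per AUTO_INCREMENT column
SELECT
  c.TABLE_SCHEMA, c.TABLE_NAME, c.COLUMN_NAME, c.COLUMN_TYPE,
  t.AUTO_INCREMENT AS next_auto_increment_value,
  ROUND(
    t.AUTO_INCREMENT / (
      (CASE c.DATA_TYPE
         WHEN 'tinyint'   THEN POW(2, 8)
         WHEN 'smallint'  THEN POW(2, 16)
         WHEN 'mediumint' THEN POW(2, 24)
         WHEN 'int'       THEN POW(2, 32)
         WHEN 'bigint'    THEN POW(2, 64)
       END) / IF(c.COLUMN_TYPE LIKE '%unsigned%', 1, 2)  -- signed columns only get half the range
    ), 4
  ) AS auto_increment_ratio
FROM INFORMATION_SCHEMA.COLUMNS AS c
JOIN INFORMATION_SCHEMA.TABLES AS t
  ON  t.TABLE_SCHEMA = c.TABLE_SCHEMA
  AND t.TABLE_NAME   = c.TABLE_NAME
WHERE c.EXTRA = 'auto_increment'
  AND t.AUTO_INCREMENT IS NOT NULL
ORDER BY auto_increment_ratio DESC;

Wrap something like the above in a view, and the "free space" report becomes a plain SELECT.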

Second, there's this thing with command line arguments. The openark tool provides --threshold (only output those columns where capacity is larger than x%), --database (only scan the given database), --table (only tables matching the name), --column (only columns matching the name).

I don’t like this. See, the above is essentially an extra layer for saying:

  • WHERE auto_increment_ratio >= x
  • WHERE table_schema = …
  • WHERE table_name = …
  • WHERE column_name = …

The command line arguments each take the role of some WHERE/AND condition. Wow, what a 1-1 mapping. How about if I wanted the results sorted in some specific order? I would have to add a command line argument for that! How about only listing the SIGNED columns? I would have to add a command line argument for that, too! How about showing just the top 10? Yes, another command line argument!

Some of the above can be solved via shell scripting (sort -k 3 -n, head -n 10, etc.). But, hey, we're OK with SQL, aren't we? Why add these two extra layers now: learning all the command line options, then scripting around them? I love scripting, but this is an abuse.

So it makes much more sense, in my opinion, to SELECT * FROM auto_increment_columns WHERE table_schema='my_db' AND auto_increment_ratio >= 0.8 ORDER BY auto_increment_ratio DESC LIMIT 10. It doesn't require SQL-fu skills, just the basic SQL skills every DBA and DB user is expected to have. And it allows one to work from whatever environment one feels comfortable with. Heck, with your GUI editor you can probably get away with it by right-clicking and left-clicking your mouse buttons, never typing a single character.

Example: blocking user accounts

The above mapped very easily to a query, and it was just a read-only query. What if we had to modify data? oak-block-accounts is a tool which allows one to block grantees from logging in, then release them later on. common_schema offers sql_accounts and eval().

Let's skip the command line arguments issue, as it is identical to the above. How should we best provide a "taking action" interface? A script would have no problem first SELECTing stuff, then issuing UPDATE, or SET PASSWORD, or DROP etc. How easy is it to do the same on the server side?

The immediate solution is to write a stored procedure to do that. I reject the idea. Why? Because the procedure would look like this:

PROCEDURE block_account(user VARCHAR(64), host VARCHAR(64), only_if_empty_password BOOL, ...);

Can you see what I'm getting at? Doing the above re-introduces command line options, this time disguised as procedure parameters. We would again have to list all available filtering methods, only this time things are worse: since stored procedures have no such notion as overloading, any change to the parameters will break compatibility. Once we introduce this routine, we're stuck with it.

common_schema tries to stay as far away from this pitfall as it can. It presents another solution: the view solution. Just as with auto_increment_columns, SELECT your way to the right rows. But this time, the result is a SQL query:

mysql> SELECT sql_block_account FROM sql_accounts WHERE USER = 'gromit';
+-------------------------------------------------------------------------------------+
| sql_block_account                                                                   |
+-------------------------------------------------------------------------------------+
| SET PASSWORD FOR 'gromit'@'localhost' = '752AA50E562A6B40DE87DF0FA69FACADD908EA32*' |
+-------------------------------------------------------------------------------------+

Do your own WHERE/AND combination in SQL. But, how to take action? Our view cannot take the actual action for us!

eval() is at the core of many common_schema operations, like this one:

CALL eval("SELECT sql_block_account FROM sql_accounts WHERE USER = 'gromit'");

The SET PASSWORD query just got evaluated. Meaning it was executed. eval() is a very powerful solution.

Conclusion

I prefer stuff on the server side. It requires basic SQL skills (or a smart GUI editor), and allows you easy access to a lot of functionality, removing dependency requirements. It is not always possible, and external scripts can do miracles not possible on the server side, but server-side scripting has its own miracles.

State of InnoDB Online DDL in MySQL 5.6.8-RC
https://shlomi-noach.github.io/blog/mysql/state-of-inndb-online-ddl-in-mysql-5-6-8-rc
Tue, 20 Nov 2012 09:49:14 +0000

5.6.8-rc is out, and so I'm following up on InnoDB's new online DDL feature: the ability to SELECT, INSERT, DELETE, UPDATE a table even while an ALTER TABLE is executing on that same table.

The brief summary

Not as advertised; many things can’t be done.

The longer review

I'm using the 5.6.8-rc 64-bit binary distribution for Linux, installed via mysqlsandbox. My hardware is irrelevant, but the fact that I'm testing on my laptop helps in that ALTER TABLE operations take a while, so I'm able to easily type commands in two terminals and have the time to watch them being executed. The query cache is disabled.

I’m using the sakila sample database, and in particular I’m working with the rental table. Here’s the table definition:

CREATE TABLE `rental` (
  `rental_id` int(11) NOT NULL AUTO_INCREMENT,
  `rental_date` datetime NOT NULL,
  `inventory_id` mediumint(8) unsigned NOT NULL,
  `customer_id` smallint(5) unsigned NOT NULL,
  `return_date` datetime DEFAULT NULL,
  `staff_id` tinyint(3) unsigned NOT NULL,
  `last_update` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  PRIMARY KEY (`rental_id`),
  UNIQUE KEY `rental_date` (`rental_date`,`inventory_id`,`customer_id`),
  KEY `idx_fk_inventory_id` (`inventory_id`),
  KEY `idx_fk_customer_id` (`customer_id`),
  KEY `idx_fk_staff_id` (`staff_id`),
  CONSTRAINT `fk_rental_staff` FOREIGN KEY (`staff_id`) REFERENCES `staff` (`staff_id`) ON UPDATE CASCADE,
  CONSTRAINT `fk_rental_inventory` FOREIGN KEY (`inventory_id`) REFERENCES `inventory` (`inventory_id`) ON UPDATE CASCADE,
  CONSTRAINT `fk_rental_customer` FOREIGN KEY (`customer_id`) REFERENCES `customer` (`customer_id`) ON UPDATE CASCADE
) ENGINE=InnoDB AUTO_INCREMENT=16050 DEFAULT CHARSET=utf8

Highlights for the table: AUTO_INCREMENT PRIMARY KEY, some columns indexed, some not, and Foreign Keys in place. Pretty much a standard table. It contains 16,044 rows. Row format is COMPACT.
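
(As a side note, not part of the original test setup: one standard way to verify these table properties is the statement below; Row_format shows COMPACT, while the Rows field is only InnoDB's estimate of the row count.)

SHOW TABLE STATUS FROM sakila LIKE 'rental'\G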

What I want to know is: which DDL commands allow for which online DML commands?

So, on terminal #1 I will issue queries like:

node1 5.6.8-rc-log sakila> alter table sakila.rental ROW_FORMAT=COMPACT /* or whatever */;
Query OK, 0 rows affected (10.57 sec)
Records: 0  Duplicates: 0  Warnings: 0

And during the above operation, I will execute the following on terminal #2:

  1. select max(rental_id) from sakila.rental; this queries the AUTO_INCREMENT value, which is of course a PRIMARY KEY
  2. select min(rental_date) from sakila.rental; there is an index on rental_date, and normal execution plan is to optimize table away and just use the index
  3. select min(return_date) from sakila.rental; there is no index on return_date, and full table scan is required
  4. update rental set return_date = return_date + interval 1 second where rental_id=3; the UPDATE uses the PRIMARY KEY
  5. update rental set return_date = return_date + interval 1 second where return_date = NOW(); won’t actually affect anything, but requires full scan.

So here are the results:

+-------------------------------------------------------------+-------+---------------------------+---------------+---------------------+-------------------------+--------------+---------------------+
| ALTER statement                                             | Time  | General comments          | select max PK | select min by index | select min by full scan | update by PK | update by full scan |
+-------------------------------------------------------------+-------+---------------------------+---------------+---------------------+-------------------------+--------------+---------------------+
| ROW_FORMAT=COMPACT                                          | 10.92 |                           | Instant       | Instant             | Instant                 | blocked      | blocked             |
| AUTO_INCREMENT=16051                                        |  0.06 | Instant, no table rebuild | N/A           | N/A                 | N/A                     | N/A          | N/A                 |
| ADD INDEX(last_update)                                      |  2.37 |                           | blocked       | blocked             | blocked                 | blocked      | blocked             |
| ADD INDEX(last_update), ALGORITHM=INPLACE                   |  1.83 |                           | blocked       | blocked             | blocked                 | blocked      | blocked             |
| ADD INDEX(last_update), ALGORITHM=INPLACE, LOCK=NONE        |  0.00 | ERROR 1235 (42000): ...   | N/A           | N/A                 | N/A                     | N/A          | N/A                 |
| ADD COLUMN c CHAR(1) NOT NULL                               | 11.20 |                           | Instant       | Instant             | Instant                 | blocked      | blocked             |
| ADD COLUMN c CHAR(1) NOT NULL, ALGORITHM=INPLACE, LOCK=NONE |  0.00 | ERROR 1235 (42000): .     | N/A           | N/A                 | N/A                     | N/A          | N/A                 |
+-------------------------------------------------------------+-------+---------------------------+---------------+---------------------+-------------------------+--------------+---------------------+

Rather surprising, I would say.

  • None of my tests resolved with online writes (UPDATE). At best I could get online reads (SELECT).
  • AUTO_INCREMENT is instantaneous. High time for that! It's just some number in the .frm file; I never understood the need for a table rebuild.
  • Apparently ADD COLUMN is more online than ADD INDEX, and I’ve tested this again and again and again to make sure I was doing it right. This is quite weird, even according to the docs.
  • In none of the above tests (and others, not listed) have I been able to specify LOCK=NONE. It's always ERROR 1235 (42000): This version of MySQL doesn't yet support 'alter table sakila.rental <whatever>, algorithm=inplace, lock=none'.

So what's so online about this? Online reads are nice, but almost nobody can accept blocked writes (for the same reason nobody would use mysqlhotcopy, also so wrongly named). This leaves us again with oak-online-alter-table and pt-online-schema-change.

The butler did it

Apologies to the butler; the FOREIGN KEYs did it. Let's try the same again without foreign keys:

node1 5.6.8-rc-log sakila> create table rental2 like rental;
node1 5.6.8-rc-log sakila> insert into rental2 select * from rental;
node1 5.6.8-rc-log sakila> rename table rental to rental_old, rental2 to rental;
Query OK, 0 rows affected (0.31 sec)

Here are the results:

+-------------------------------------------------------------+-------+---------------------------+----------------+---------------------+-------------------------+----------------+---------------------+
| ALTER statement                                             | Time  | General comments          | select max PK  | select min by index | select min by full scan | update by PK   | update by full scan |
+-------------------------------------------------------------+-------+---------------------------+----------------+---------------------+-------------------------+----------------+---------------------+
| ROW_FORMAT=COMPACT                                          | 11.03 |                           | Instant        | Instant             | Instant                 | Instant        | Instant             |
| AUTO_INCREMENT=16051                                        |  0.05 | Instant, no table rebuild | N/A            | N/A                 | N/A                     | N/A            | N/A                 |
| ADD INDEX(last_update)                                      |  2.04 |                           | * Inconsistent | * Inconsistent      | * Inconsistent          | blocked        | blocked             |
| ADD INDEX(last_update), ALGORITHM=INPLACE, LOCK=NONE        |  3.14 |                           | * Inconsistent | * Inconsistent      | * Inconsistent          | blocked        | blocked             |
| ADD COLUMN c CHAR(1) NOT NULL                               |    ** |                           | * Inconsistent | * Inconsistent      | * Inconsistent          | * Inconsistent | * Inconsistent      |
| ADD COLUMN c CHAR(1) NOT NULL, ALGORITHM=INPLACE, LOCK=NONE |    ** |                           | * Inconsistent | * Inconsistent      | * Inconsistent          | * Inconsistent | * Inconsistent      |
+-------------------------------------------------------------+-------+---------------------------+----------------+---------------------+-------------------------+----------------+---------------------+

What’s going on here?

  • ALGORITHM=INPLACE, LOCK=NONE is accepted! Bad, bad foreign keys!
  • * ADD INDEX usually allows for concurrent reads, but after repeated tests SELECTs start to block. Then they don't work concurrently anymore until the table is recreated. But even that is not always the case, so I'm not sure where the inconsistency comes from.
  • * ADD COLUMN is still more concurrent than ADD INDEX, and actually allows for concurrent writes! Though, inconsistently. Sometimes it does not allow for concurrent writes.
  • ** ADD COLUMN runtime is highly affected by concurrent queries. It went as high as 45 seconds on my laptop. Now, to make things clear, I'm not running an automated benchmark here: I'm copying and pasting the statements from my editor into the mysql CLI. So maybe 10 or 15 SELECT and UPDATE queries execute. How does that justify a 35-second delay in the table rebuild?

Some conclusions:

  • The documentation does not specify anything about FOREIGN KEYs crashing the party. It should.
  • The documentation specifically mentions the ADD/DROP INDEX statements to be online. ADD INDEX is less online than ADD COLUMN.
  • Everything is still shaky. Sometimes things work, sometimes they don’t.
  • Runtimes are disproportionately affected by concurrent queries.
  • For the meantime, I keep to my online alter table scripts. Been using them for 3.5 years now.
Experimenting with 5.6 InnoDB Online DDL (bugs included)
https://shlomi-noach.github.io/blog/mysql/experimenting-with-5-6-innodb-online-ddl-bugs-included
Thu, 18 Oct 2012 12:41:46 +0000

MySQL 5.6 offers groundbreaking online DDL operations for InnoDB. Most common use cases will enjoy this feature, and the need for online alter table scripts will decrease. This is a killer feature!

I’ve put this new feature to the usability test. How did it go? Not too well, I’m afraid.

[Updates to this text inline], also see this followup.

sakila & DDL

sakila is still a very useful database. I say “still” because it is not very large, and computing power is getting stronger; yet on my laptop some operations can still take many seconds to complete, which is just fine for my tests.

Sakila tables are mostly InnoDB, and rental being the largest, I do:

node1 (sakila) > alter table sakila.rental engine=InnoDB;
Query OK, 16044 rows affected (6.94 sec)
Records: 16044  Duplicates: 0  Warnings: 0

So what can be executed during these 6.94 seconds? In a second terminal, I try the following:

Meta

node1 (sakila) > show create table sakila.rental\G
*************************** 1. row ***************************
       Table: rental
Create Table: CREATE TABLE `rental` (
  `rental_id` int(11) NOT NULL AUTO_INCREMENT,
  `rental_date` datetime NOT NULL,
  `inventory_id` mediumint(8) unsigned NOT NULL,
  `customer_id` smallint(5) unsigned NOT NULL,
  `return_date` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  `staff_id` tinyint(3) unsigned NOT NULL,
  `last_update` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  PRIMARY KEY (`rental_id`),
  UNIQUE KEY `rental_date` (`rental_date`,`inventory_id`,`customer_id`),
  KEY `idx_fk_inventory_id` (`inventory_id`),
  KEY `idx_fk_customer_id` (`customer_id`),
  KEY `idx_fk_staff_id` (`staff_id`),
  CONSTRAINT `fk_rental_customer` FOREIGN KEY (`customer_id`) REFERENCES `customer` (`customer_id`) ON UPDATE CASCADE,
  CONSTRAINT `fk_rental_inventory` FOREIGN KEY (`inventory_id`) REFERENCES `inventory` (`inventory_id`) ON UPDATE CASCADE,
  CONSTRAINT `fk_rental_staff` FOREIGN KEY (`staff_id`) REFERENCES `staff` (`staff_id`) ON UPDATE CASCADE
) ENGINE=InnoDB AUTO_INCREMENT=16050 DEFAULT CHARSET=utf8
1 row in set (1.08 sec)

1.08 seconds for SHOW CREATE TABLE. Consider: up until 5.5 you couldn't run SHOW CREATE TABLE at all while an ALTER was running on that table.

Read

While ALTER TABLE runs, I execute:

node1 (sakila) > select min(rental_date), max(return_date) from sakila.rental;
+---------------------+---------------------+
| min(rental_date)    | max(return_date)    |
+---------------------+---------------------+
| 2005-05-24 22:53:30 | 2005-09-02 02:35:22 |
+---------------------+---------------------+
1 row in set (2.77 sec)

So, 2.77 seconds for a query which uses a full table scan. I'm not measuring performance here; I am satisfied that the query actually succeeded even while the table was being altered.

Read & bug

But, unfortunately, being the type of geek who likes to make trouble, I am also able to consistently fail the ALTER TABLE. Hang it, actually:

See session #1:

node1 (sakila) > alter table sakila.rental engine=innodb; 

... (waiting forever)

And session #2:

node1 (sakila) > show processlist;
+----+----------+-----------+--------+---------+------+---------------------------------+-----------------------------------------+
| Id | User     | Host      | db     | Command | Time | State                           | Info                                    |
+----+----------+-----------+--------+---------+------+---------------------------------+-----------------------------------------+
|  6 | msandbox | localhost | sakila | Query   |  219 | Waiting for table metadata lock | alter table sakila.rental engine=innodb |
|  4 | msandbox | localhost | sakila | Query   |    0 | init                            | show processlist                        |
+----+----------+-----------+--------+---------+------+---------------------------------+-----------------------------------------+

Read all about it in bug report #67286.

Write: not so simple

The following UPDATE query hangs till the ALTER process is over:

node1 (sakila) > update sakila.rental set return_date=now() where rental_id = floor(rand()*100);
Query OK, 3 rows affected, 1 warning (6.10 sec)

No online DDL for writes?

Was I unfair? Is “ENGINE=InnoDB” really an online DDL operation? OK, let’s try with:

alter table sakila.rental row_format=compact;

Which is documented as one of the supported online DDL operations. Same.

The manual says I can define the ALGORITHM and the LOCK properties for the ALTER TABLE operation. But it gives no example, so I try my own:

node1 (sakila) > alter table sakila.rental row_format=compact ALGORITHM=INPLACE LOCK=NONE;
ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'ALGORITHM=INPLACE LOCK=NONE' at line 1

Ummm…. then maybe:

node1 (sakila) > alter table sakila.rental ALGORITHM=INPLACE LOCK=NONE row_format=compact;
ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'LOCK=NONE row_format=compact' at line 1

OK, how about:

node1 (sakila) > alter table sakila.rental ALGORITHM=INPLACE row_format=compact LOCK=NONE;
ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'row_format=compact LOCK=NONE' at line 1

Reading, rereading, re-verifying the manual — I am typing a valid statement! What’s wrong here?

Yes, I'm on 5.6.7-rc-log. No, I can't find, in the 5.6 documentation or in the slides from MySQL Connect, any code sample that actually uses ALGORITHM and LOCK (!?)

[UPDATE] As Marc Alff points out, I did in fact use the wrong syntax, and was missing commas. The right syntax is:

node1 (sakila) > alter table sakila.rental row_format=compact, algorithm=inplace, lock=none;
ERROR 1235 (42000): This version of MySQL doesn't yet support 'alter table sakila.rental row_format=compact, algorithm=inplace, lock=none'

Unfortunately this still results in an error. Another attempt shows that:

node1 (sakila) > alter table sakila.rental row_format=compact, algorithm=inplace, lock=shared;
Query OK, 0 rows affected (11.08 sec)

works well. So, apparently, you can only run this type of ALTER TABLE with a SHARED lock. The bad news?

node1 (sakila) > alter table sakila.rental add index(return_date), algorithm=inplace, lock=none;
ERROR 1235 (42000): This version of MySQL doesn't yet support 'alter table sakila.rental add index(return_date), algorithm=inplace, lock=none'
node1 (sakila) > alter table sakila.rental add column c char, algorithm=inplace, lock=none;
ERROR 1235 (42000): This version of MySQL doesn't yet support 'alter table sakila.rental add column c char, algorithm=inplace, lock=none'

So I'm not sure yet what kinds of DDL operations are actually available with LOCK=NONE.

Conclusion

Little success with online DDL. Still, a SHARED-only lock is many times better than being completely blocked.

My personal conclusion (and I do take into account that 5.6 is RC at this time, not GA): not there yet! Stick to openark-kit, Percona Toolkit or Facebook OSC for some time. They all provide online-alter-table operations via external scripts.

How common_schema split()s tables – internals
https://shlomi-noach.github.io/blog/mysql/how-common_schema-splits-tables-internals
Thu, 06 Sep 2012 05:25:07 +0000

This post exposes some of the internals, and the SQL, behind QueryScript's split. common_schema/QueryScript 1.1 introduces the split statement, which auto-breaks a "large" query (one which operates on large tables as a whole or without keys) into smaller queries, and executes them in sequence.

This makes for easier transactions, fewer locks held, and potentially (depending on the user) more idle time released back to the database. split shares concepts with oak-chunk-update and pt-archiver, but works differently, and is implemented entirely in SQL on the server side.

Take the following statement as example:

split (UPDATE sakila.inventory SET last_update = last_update + INTERVAL 6 HOUR)
  pass;

It yields (roughly) the following statements:

UPDATE sakila.inventory SET last_update = last_update + INTERVAL 6 HOUR WHERE ((((`inventory`.`inventory_id` > '1')) OR ((`inventory`.`inventory_id` = '1'))) AND (((`inventory`.`inventory_id` < '1000')) OR ((`inventory`.`inventory_id` = '1000'))));
UPDATE sakila.inventory SET last_update = last_update + INTERVAL 6 HOUR WHERE ((((`inventory`.`inventory_id` > '1000'))) AND (((`inventory`.`inventory_id` < '2000')) OR ((`inventory`.`inventory_id` = '2000'))));
UPDATE sakila.inventory SET last_update = last_update + INTERVAL 6 HOUR WHERE ((((`inventory`.`inventory_id` > '2000'))) AND (((`inventory`.`inventory_id` < '3000')) OR ((`inventory`.`inventory_id` = '3000'))));
UPDATE sakila.inventory SET last_update = last_update + INTERVAL 6 HOUR WHERE ((((`inventory`.`inventory_id` > '3000'))) AND (((`inventory`.`inventory_id` < '4000')) OR ((`inventory`.`inventory_id` = '4000'))));
UPDATE sakila.inventory SET last_update = last_update + INTERVAL 6 HOUR WHERE ((((`inventory`.`inventory_id` > '4000'))) AND (((`inventory`.`inventory_id` < '4581')) OR ((`inventory`.`inventory_id` = '4581'))));

(I say "roughly" because internally there are user defined variables at play, but for convenience, I show the actual values as constants.)

How does that work?

common_schema works on server side. There is no Perl script or anything. It must therefore use server-side operations to:

  • Identify table to be split
  • Analyze the table in the first place, deciding how to split it
  • Analyze the query, deciding on how to rewrite it
  • Split the table (logically) into unique and distinct chunks
  • Work out the query on each such chunk

Following is an internal look at how common_schema does all the above.

Identifying the table

When a query operates on a single table, split is able to parse the query's SQL and find that table. When multiple tables are involved, split requires user instruction: which table should the query be split by?

Analyzing the table

Table analysis is done via a method similar to candidate_keys_recommended. It is almost identical, only it uses INFORMATION_SCHEMA optimizations to make the query short and lightweight. Simulating the analysis using candidate_keys_recommended, we get:

mysql> select * from candidate_keys_recommended where table_name='inventory' \G
*************************** 1. row ***************************
          table_schema: sakila
            table_name: inventory
recommended_index_name: PRIMARY
          has_nullable: 0
            is_primary: 1
 count_column_in_index: 1
          column_names: inventory_id

This is cool, simple and very easy to work with: we choose to split the table via the inventory_id column, which is conveniently an integer. We’ll soon see split can handle complex cases as well.

Analyzing the query

This is done in part via Roland’s query_analysis_routines, and in part just parsing the query, looking for WHERE, GROUP BY, LIMIT etc. clauses.

The nice part is injecting a WHERE condition, which didn’t appear in the original query. That WHERE condition is what limits the query to a distinct chunk of rows.

Splitting the table

With a single INTEGER PRIMARY KEY this sounds simple, right? Take rows 1..1,000, then 1,001..2,000, then 2,001..3,000 etc.

Wrong: even in this simple scenario, things are much more complex. Are the numbers successive? What if there are holes? What if there is a gap of 1,000,000 between every two numbers? What if there are multiple holes of differing size and frequency?

And what if we have two columns in our UNIQUE KEY? What if one of them is textual, not an INTEGER, and the other a TIMESTAMP, not an INTEGER either?

split doesn't work in that naive way. It makes no assumptions about the density of values. It only requires:

  • some UNIQUE KEY to work with,
  • which has no NULL values.

Given the above, it uses user defined variables to set up the chunks. With our single INTEGER column, the minimum value is found like this:

select 
  inventory_id 
from 
  `sakila`.`inventory` 
order by 
  inventory_id ASC 
limit 1  
into @_split_column_variable_min_1
;

This sets the first value of the first chunk. What value terminates this chunk? It is calculated like this:

select 
  inventory_id 
from (
  select 
    inventory_id 
  from 
    `sakila`.`inventory` 
  where 
    (((`inventory`.`inventory_id` > @_split_column_variable_range_start_1)) OR ((`inventory`.`inventory_id` = @_split_column_variable_range_start_1))) and (((`inventory`.`inventory_id` < @_split_column_variable_max_1)) OR ((`inventory`.`inventory_id` = @_split_column_variable_max_1))) 
  order by 
    inventory_id ASC limit 1000 
  ) sel_split_range  
order by 
  inventory_id DESC 
limit 1  
into @_split_column_variable_range_end_1
;

Now there's a query you wouldn't want to write by hand, now would you?

The cool part here is that the above works well for any type of column; this doesn’t have to be an INTEGER. Dates, strings etc. are all just fine.

The above also works well for multiple columns, where the query gets more complicated (see below).

Working out the query per chunk

This part is the easy one, now that all the hard work is done. We know how to manipulate the query, and we know the lower and upper boundaries of the chunk, so we just fill in the values and execute.

Multi-columns keys

Consider a similar query on sakila.film_actor, where the PRIMARY KEY is a compound of two columns:

split (UPDATE sakila.film_actor SET last_update = last_update + INTERVAL 6 HOUR)
  throttle 2;

The chunked queries will look like this:

UPDATE sakila.film_actor SET last_update = last_update + INTERVAL 6 HOUR WHERE ((((`film_actor`.`actor_id` > '1')) OR ((`film_actor`.`actor_id` = '1') AND (`film_actor`.`film_id` > '1')) OR ((`film_actor`.`actor_id` = '1') AND (`film_actor`.`film_id` = '1'))) AND (((`film_actor`.`actor_id` < '39')) OR ((`film_actor`.`actor_id` = '39') AND (`film_actor`.`film_id` < '293')) OR ((`film_actor`.`actor_id` = '39') AND (`film_actor`.`film_id` = '293'))));
UPDATE sakila.film_actor SET last_update = last_update + INTERVAL 6 HOUR WHERE ((((`film_actor`.`actor_id` > '39')) OR ((`film_actor`.`actor_id` = '39') AND (`film_actor`.`film_id` > '293'))) AND (((`film_actor`.`actor_id` < '76')) OR ((`film_actor`.`actor_id` = '76') AND (`film_actor`.`film_id` < '234')) OR ((`film_actor`.`actor_id` = '76') AND (`film_actor`.`film_id` = '234'))));
UPDATE sakila.film_actor SET last_update = last_update + INTERVAL 6 HOUR WHERE ((((`film_actor`.`actor_id` > '76')) OR ((`film_actor`.`actor_id` = '76') AND (`film_actor`.`film_id` > '234'))) AND (((`film_actor`.`actor_id` < '110')) OR ((`film_actor`.`actor_id` = '110') AND (`film_actor`.`film_id` < '513')) OR ((`film_actor`.`actor_id` = '110') AND (`film_actor`.`film_id` = '513'))));
UPDATE sakila.film_actor SET last_update = last_update + INTERVAL 6 HOUR WHERE ((((`film_actor`.`actor_id` > '110')) OR ((`film_actor`.`actor_id` = '110') AND (`film_actor`.`film_id` > '513'))) AND (((`film_actor`.`actor_id` < '146')) OR ((`film_actor`.`actor_id` = '146') AND (`film_actor`.`film_id` < '278')) OR ((`film_actor`.`actor_id` = '146') AND (`film_actor`.`film_id` = '278'))));
UPDATE sakila.film_actor SET last_update = last_update + INTERVAL 6 HOUR WHERE ((((`film_actor`.`actor_id` > '146')) OR ((`film_actor`.`actor_id` = '146') AND (`film_actor`.`film_id` > '278'))) AND (((`film_actor`.`actor_id` < '183')) OR ((`film_actor`.`actor_id` = '183') AND (`film_actor`.`film_id` < '862')) OR ((`film_actor`.`actor_id` = '183') AND (`film_actor`.`film_id` = '862'))));
UPDATE sakila.film_actor SET last_update = last_update + INTERVAL 6 HOUR WHERE ((((`film_actor`.`actor_id` > '183')) OR ((`film_actor`.`actor_id` = '183') AND (`film_actor`.`film_id` > '862'))) AND (((`film_actor`.`actor_id` < '200')) OR ((`film_actor`.`actor_id` = '200') AND (`film_actor`.`film_id` < '993')) OR ((`film_actor`.`actor_id` = '200') AND (`film_actor`.`film_id` = '993'))));

View the complete command to realize just how much more complex each query is, and how much more complex the chunking becomes. Here’s how I evaluate the chunk’s “next range end” variables:

select 
  actor_id, film_id 
from (
  select 
    actor_id, film_id 
  from 
    `sakila`.`film_actor` 
  where 
    (((`film_actor`.`actor_id` > @_split_column_variable_range_start_1)) OR ((`film_actor`.`actor_id` = @_split_column_variable_range_start_1) AND (`film_actor`.`film_id` > @_split_column_variable_range_start_2))) and (((`film_actor`.`actor_id` < @_split_column_variable_max_1)) OR ((`film_actor`.`actor_id` = @_split_column_variable_max_1) AND (`film_actor`.`film_id` < @_split_column_variable_max_2)) OR ((`film_actor`.`actor_id` = @_split_column_variable_max_1) AND (`film_actor`.`film_id` = @_split_column_variable_max_2)))
  order by 
    actor_id ASC, film_id ASC 
  limit 1000 
  ) sel_split_range  
order by 
  actor_id DESC, film_id DESC 
limit 1  
into @_split_column_variable_range_end_1, @_split_column_variable_range_end_2
;

By the way, you may recall that everything is done on the server side. The WHERE condition for the chunked queries is itself generated via an SQL statement, and not so much by programmatic logic. Here's part of the query which computes the limiting condition:

  select
    group_concat('(', partial_comparison, ')' order by n separator ' OR ') as comparison
  from (
    select 
      n,
      group_concat('(', column_name, ' ', if(is_last, comparison_operator, '='), ' ', variable_name, ')' order by column_order separator ' AND ') as partial_comparison
    from (
      select 
        n, CONCAT(mysql_qualify(split_table_name), '.', mysql_qualify(column_name)) AS column_name,
        case split_variable_type
          when 'range_start' then range_start_variable_name
          when 'range_end' then range_end_variable_name
          when 'max' then max_variable_name
        end as variable_name,
        _split_column_names_table.column_order, _split_column_names_table.column_order = n as is_last 
      from 
        numbers, _split_column_names_table 
      where 
        n between _split_column_names_table.column_order and num_split_columns 
      order by n, _split_column_names_table.column_order
    ) s1
    group by n
  ) s2
  into return_value
  ;

There is a lot of complexity to split, all so that it can present as clean a syntax to the user as possible.

DELETE, don't INSERT
https://shlomi-noach.github.io/blog/mysql/delete-dont-insert
Wed, 27 Jun 2012 05:25:09 +0000

I have just read INSERT, Don't DELETE by Aaron Brown, and have a lengthy response, which is why I write this post instead of commenting on said post.

I wish to offer my counter thought and suggest that DELETEs are probably the better choice.

Aaron suggests that, when one wishes to purge rows from some table, a trick can be used: instead of DELETEing unwanted rows, one can INSERT the "good" rows into a new table, then switch over with RENAME (but please read the referenced post for complete details).

I respectfully disagree on several points discussed.

Lockdown

The fact that one needs to block writes during the creation of the new table is problematic: you essentially need to turn off parts of your application. The post suggests one could use a slave – but this solution is far from trivial as well. To switch over, you yet again need to turn off access to the DB, even if for a short while.

A switchover to a slave is quite a big deal, in my opinion, for the mere purpose of deleting rows.

DELETEs are easy

The DELETEs are so much easier. The first thing to note is this: you don't actually have to delete all the rows *at once*.

You just need to drop some rows, right? Why waste a huge transaction that takes minutes, when you can drop the rows in chunks, one chunk at a time?
For that, you can use either pt-archiver from Percona Toolkit, oak-chunk-update from openark-kit, or write a simple QueryScript snippet with common_schema:

while (DELETE FROM title WHERE title <= 'g' LIMIT 1000)
{
  throttle 1;
}

So, drop 1,000 rows or so at a time, then sleep some time, and so on. The total runtime is longer, but who cares? The impact can be reduced until it is unnoticeable.

Space reclaim

You can use online table operations to rebuild your table and reclaim the disk space: see either oak-online-alter-table or pt-online-schema-change. Again, both work in small chunks, so there are no long stalls.

But more on this: my usual purge scenario is repetitive. You purge, data fills up again, you purge again, and so on.

Which is why it doesn't make much sense to rebuild the table and reclaim the disk space: it just grows again to roughly the same dimensions.
For a one-time operation (e.g. after a long period of neglected cleanup) — yes, absolutely, do a rebuild and reclaim. For repetitive cleanup, I don't bother.

Conclusion

Aaron does note at the end of his post that DELETE operations can be done online, while the INSERT trick requires downtime, and this is a fair assessment.

But just to make a point: none of the DELETE timings are interesting. Since we are not concerned with deleting the rows within a given time frame (no "press the red button" moment), we can spread them over time and make the impact negligible. So not only is everything done online, it also goes unnoticed by the user. And this, I believe, is the major thing to consider.

Webinar review: Zero-Downtime Schema Changes In MySQL
https://shlomi-noach.github.io/blog/mysql/webinar-review-zero-downtime-schema-changes-in-mysql
Thu, 03 May 2012 14:17:19 +0000

Yesterday I attended the Zero-Downtime Schema Changes In MySQL webinar by Baron Schwartz, Percona (do you say "attended" for something you listened to from your home office?).

I was keen to learn about possible enhancements and improvements of pt-online-schema-change over oak-online-alter-table. Here are my impressions:

The base logic of pt-online-schema-change is essentially the same as that of oak-online-alter-table. You create a ghost/shadow table, create complex triggers, copy rows in chunks, freeze and swap. Both work on any type of PRIMARY KEY (oak-online-alter-table can work with any UNIQUE KEY; I'm not sure about pt-online-schema-change on this), be it an INTEGER, another type, or a multi-column one.

However, pt-online-schema-change also adds the following:

  • It supports FOREIGN KEYs (to some extent). This is something I've wanted to do with oak-online-alter-table but never got around to. Foreign keys are very tricky, as Baron noted. With child-side keys, things are reasonably manageable. With parent-side keys this becomes a nightmare, sometimes unsolvable (when I say "unsolvable", I mean under the constraint of having the operation run in a non-blocking, transparent way).
  • Chunk size is auto-calculated by the tool. This is a cool addition. Instead of letting the user throw out numbers like 1,000 rows per chunk, in the hope that this is neither too small nor too large, the tool monitors the time it takes a chunk to complete, then adjusts the size of the next chunk accordingly. Hopefully this leads to a more optimized run, where locks are only held for very short periods, yet enough rows are processed at a time.
  • The tool looks at replicating slaves to verify they're up to the job. If a slave lags too far, the tool slows down the work. This is an excellent feature, and again, one that I always wanted to have. Great work!

So the three bullets above are what I understand to be the major advantages of Percona’s tool over oak-online-alter-table.

Q & A

The presentation itself was very good, and Baron answered some questions. There was one question he did not answer during the webinar, nor here, and I thought I might pop in and answer it. Although I can't speak for the coders of pt-online-schema-change, I safely assume that since the logic follows that of oak-online-alter-table, the same answer applies in the case of Percona's toolkit.

But, first, a background question (asked and answered during the webinar):

Q: What if my table already has AFTER TRIGGERs?

A: Then this can’t work out. The table must not have triggers.

Which led to the next question:

Q: Can’t the tool use BEFORE TRIGGERs instead?

Imagine a MyISAM table being altered to InnoDB (this is a major task for which my tool was built). Suppose we used a BEFORE trigger on an INSERT, but the INSERT failed. That would make the shadow table inconsistent with the original table. This is the reason why the trigger must be an AFTER trigger.

With InnoDB this should not be an issue, since triggers and actions all play within the same transaction, so all succeed or all fail. I have this nagging feeling at the back of my head which says I’ve already had thoughts on this and have found a problem with InnoDB tables as well. I can’t put my finger on it now, so no comment on this one at this stage.
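
For readers unfamiliar with how these tools propagate changes, here is a simplified illustration of the AFTER INSERT case (this is not the actual trigger created by either tool; the ghost table name and the column subset are made up for the example):

CREATE TRIGGER rental_after_insert AFTER INSERT ON rental
FOR EACH ROW
  -- propagate the new row to the ghost table only once the INSERT
  -- on the original table has actually succeeded
  REPLACE INTO rental_ghost (rental_id, rental_date, inventory_id, customer_id)
  VALUES (NEW.rental_id, NEW.rental_date, NEW.inventory_id, NEW.customer_id);

Had this been a BEFORE trigger, a failed INSERT on a MyISAM table would still have left its row in the ghost table.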

Documentation in SQL: CALL for help()
https://shlomi-noach.github.io/blog/mysql/documentation-in-sql-call-for-help
Wed, 11 Jan 2012 07:01:54 +0000

Documentation is an important part of any project. On the projects I maintain I put a lot of effort into documentation; frankly, the majority of the time spent on my projects goes into documentation.

The matter of keeping the documentation faithful is a topic of interest. I'd like to outline a few documentation bundling possibilities, and then present the coming new documentation method for common_schema. I'll talk about any bundling that is NOT man pages.

High level: web docs

This is the initial method of documentation I used for openark kit and mycheckpoint. It’s still valid for mycheckpoint. Documentation is web-based. You need Internet access to read it. It’s in HTML format.

Well, not exactly HTML format: I wrote it in WordPress. Yes, it's HTML, but there's a lot of noise around it (theme, menus, etc.) which is not strictly part of the documentation.

While this is perhaps the easiest way to go, here are a few drawbacks:

  • You’re bound to some framework (WordPress in this case)
  • Docs are split between the MySQL database (my underlying WordPress storage) & WordPress files (themes, style, header, footer etc.)
  • Documentation is separate from your code – they’re just not in the same place
  • There is no version control over the documentation.

The result is a single source of documentation, which applies to whatever version is latest. It's impossible to maintain docs for multiple versions. You must manually synchronize your WordPress updates with code commits (or rather, code releases!).

Mid level: version controlled HTML docs

I first saw this approach in Baron's Aspersa gets a user manual post. I loved it: the documentation is HTML, but stored as part of your project's code, under the same version control.

This means one can browse the documentation (openark kit in this example) exactly as it appears in the baseline. Depending on your project hosting, one may be able to do so per version.

The approach has the great benefit of having the docs tightly coupled with the code in terms of development. Before committing code, one updates documentation for that code, then commits/releases both together.

You’re also not bound to any development framework. You may edit with vim, emacs, gedit, bluefish, eclipse, … any tool of your choice. It’s all down to plain old text files.

Mid level #2: documentation bundling

One thing I started doing with common_schema is releasing a doc bundle with the code, so one can download a compressed bundle of all the HTML files. That way one is absolutely certain what the right documentation for revision 178 is. There's no effort involved: the docs are already tightly coupled with code versions. Just compress and distribute.

Low level: documentation coupled with your code

Perl scripts can be written as Perl modules, in which case they are eligible for the perldoc convention. You write your documentation within the script itself, as comments. Perldoc can extract the documentation and present it in a man-like format. The same happens with Python's pydoc. Baron's When documentation is code illustrates that approach. Maatkit (now Percona Toolkit) has been using it for years.

This method has the advantage of having the documentation ready right within your shell. You don’t need a browser, nor firewall access. The docs are just there for you in the same environment where you’re executing the code.

SQL Low level: CALL for help()

common_schema is a different type of project. It is merely a schema. There’s no Perl nor Python. One imports the schema into one’s MySQL server.

What’s the low-level approach for this type of code?

For common_schema I use three levels of documentation: the mid-level, where one can browse through the versioned docs; the second mid-level, where one can download bundled documentation; and then a low-level approach: documentation embedded within the code.

MySQL’s documentation is also built into the server: see the help_* tables within the mysql schema. The mysql command line client allows one to access help by supporting the help command, e.g.

mysql> help create table;

The client intercepts this command (it is not a server-side command) and searches through the mysql.help_* tables.
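
To get a feel for what the client draws on, you can query those tables directly (a side illustration of mine; the column names are those of the mysql.help_topic table):

SELECT name, description
FROM mysql.help_topic
WHERE name = 'CREATE TABLE'\G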

With common_schema, I don’t have control over the client; it’s all on server side. But the code being a schema, what with stored routines and tables, it’s easy enough to set up documentation.

As of the next version of common_schema, and following MySQL’s method, common_schema provides a help table:

DESC help;
+--------------+-------------+------+-----+---------+-------+
| Field        | Type        | Null | Key | Default | Extra |
+--------------+-------------+------+-----+---------+-------+
| topic        | varchar(32) | NO   | PRI | NULL    |       |
| help_message | text        | NO   |     | NULL    |       |
+--------------+-------------+------+-----+---------+-------+

And a help() procedure, so that you can call for help(). The procedure will look for the best matching document based on your search expression:

root@mysql-5.1.51> CALL help('match');
+-------------------------------------------------------------------------------+
| help                                                                          |
+-------------------------------------------------------------------------------+
|                                                                               |
| NAME                                                                          |
|                                                                               |
| match_grantee(): Match an existing account based on user+host.                |
|                                                                               |
| TYPE                                                                          |
|                                                                               |
| Function                                                                      |
|                                                                               |
| DESCRIPTION                                                                   |
|                                                                               |
| MySQL does not provide with identification of logged in accounts. It only     |
| provides with user + host:port combination within processlist. Alas, these do |
| not directly map to accounts, as MySQL lists the host:port from which the     |
| connection is made, but not the (possibly wildcard) user or host.             |
| This function matches a user+host combination against the known accounts,     |
| using the same matching method as the MySQL server, to detect the account     |
| which MySQL identifies as the one matching. It is similar in essence to       |
| CURRENT_USER(), only it works for all sessions, not just for the current      |
| session.                                                                      |
|                                                                               |
| SYNOPSIS                                                                      |
|                                                                               |
|                                                                               |
|                                                                               |
|        match_grantee(connection_user char(16) CHARSET utf8,                   |
|        connection_host char(70) CHARSET utf8)                                 |
|          RETURNS VARCHAR(100) CHARSET utf8                                    |
|                                                                               |
|                                                                               |
| Input:                                                                        |
|                                                                               |
| * connection_user: user login (e.g. as specified by PROCESSLIST)              |
| * connection_host: login host. May optionally specify port number (e.g.       |
|   webhost:12345), which is discarded by the function. This is to support      |
|   immediate input from as specified by PROCESSLIST.                           |
|                                                                               |
|                                                                               |
| EXAMPLES                                                                      |
|                                                                               |
| Find an account matching the given use+host combination:                      |
|                                                                               |
|                                                                               |
|        mysql> SELECT match_grantee('apps', '192.128.0.1:12345') AS            |
|        grantee;                                                               |
|        +------------+                                                         |
|        | grantee    |                                                         |
|        +------------+                                                         |
|        | 'apps'@'%' |                                                         |
|        +------------+                                                         |
|                                                                               |
|                                                                               |
|                                                                               |
| ENVIRONMENT                                                                   |
|                                                                               |
| MySQL 5.1 or newer                                                            |
|                                                                               |
| SEE ALSO                                                                      |
|                                                                               |
| processlist_grantees                                                          |
|                                                                               |
| AUTHOR                                                                        |
|                                                                               |
| Shlomi Noach                                                                  |
|                                                                               |
+-------------------------------------------------------------------------------+

I like HTML for documentation. I think it’s a good format, provided you don’t start doing funny things. Perhaps TROFF is more suitable; certainly more popular on Unix machines. But I already have everything in HTML. So, what do I do?

My decision was to keep documentation in HTML, and use the handy html2text tool to do the job. And it does it pretty well! The sample you see above is an automated translation of HTML to plain text.

I add a few touches of my own: SELECTing long texts is ugly, whether you do it via ";" or "\G". The help() routine breaks the text by '\n', returning a multi-row result set. The above sample makes for some 60+ rows, nicely formatted, broken out of the single text stored in the help table.
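
The line-splitting itself is a classic numbers-table trick. Here is a minimal sketch of the idea (not common_schema's actual implementation; it assumes a numbers table with a column n holding 1, 2, 3, …, like the one used by split above, and assumes the topic is stored as 'match_grantee'):

SET @help_message := (SELECT help_message FROM help WHERE topic = 'match_grantee');

SELECT
  -- the n-th '\n'-separated line of the text
  SUBSTRING_INDEX(SUBSTRING_INDEX(@help_message, '\n', n), '\n', -1) AS help
FROM numbers
WHERE n <= 1 + CHAR_LENGTH(@help_message) - CHAR_LENGTH(REPLACE(@help_message, '\n', ''))
ORDER BY n;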

So now you have an internal help method for common_schema, right where the code is. You don’t have to leave the command line client in order to get help.

Giuseppe offered me the idea for this, even while my own thinking about it was in early stages.

The next version of common_schema will be available in a few weeks. The code is pretty much ready. I just need to work on, ahem…, the documentation.
