Development – code.openark.org
Blog by Shlomi Noach
http://shlomi-noach.github.io/blog/

Observations on the hashicorp/raft library, and notes on RDBMS
https://shlomi-noach.github.io/blog/mysql/observations-on-the-hashicorpraft-library-and-notes-on-rdbms
Tue, 20 Jun 2017

The hashicorp/raft library is a Go library providing consensus via an implementation of the Raft protocol. It is the underlying library behind HashiCorp’s Consul.

I’ve had the opportunity to work with this library on a couple of projects, namely freno and orchestrator. Here are a few observations on working with it:

  • TL;DR on Raft: a group communication protocol; multiple nodes communicate and elect a leader. A leader leads a consensus (any subgroup of more than half the nodes of the original group, hopefully all of them). Nodes may leave and rejoin, and will remain consistent with the consensus.
  • The hashicorp/raft library is an implementation of the Raft protocol. There are other implementations, and different implementations support different features.
  • The most basic premise is leader election. This is pretty straightforward to implement; you set up nodes to communicate to each other, and they elect a leader. You may query for the leader identity via Leader(), VerifyLeader(), or observing LeaderCh.
  • You have no control over the identity of the leader. You cannot “prefer” one node to be the leader. You cannot grab leadership from an elected leader, and you cannot demote a leader other than by killing it.
  • The next premise is gossip, sending messages between the raft nodes. With hashicorp/raft, only the leader may send messages to the group. This is done via the Apply() function.
  • Messages are nothing but blobs. Your app encodes the messages into []byte and ships it via raft. Receiving ends need to decode the bytes into a meaningful message.
  • You will check the result of Apply(), an ApplyFuture. The call to Error() will wait for consensus.
  • Just what is a message consensus? It’s a guarantee that the consensus of nodes has received and registered the message.
  • Messages form the raft log.
  • Messages are guaranteed to be handled in-order across all nodes.
  • The leader is satisfied when the followers receive the messages/log, but it cares not for their interpretation of the log.
  • The leader does not collect the output, or return value, of the followers’ application of the log.
  • Consequently, your followers may not abort the message. They may not cast an opinion. They must adhere to the instruction received from the leader.
  • hashicorp/raft uses either an LMDB-based store or BoltDB for persisting your messages. Both are transactional stores.
  • Messages are expected to be idempotent: a node that, say, happens to restart, will request to join back the consensus (or to form a consensus with some other node). To do that, it will have to reapply historical messages that it may have applied in the past.
  • The number of messages (log entries) grows without bound. Snapshots are taken so as to truncate the log history. You will implement the snapshot dump & load.
  • A snapshot includes the log index up to which it covers.
  • Upon startup, your node will look for the most recent snapshot. It will read it, then resume replication from the aforementioned log index.
  • hashicorp/raft provides a file-system based snapshot implementation.
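The blob-message and idempotency points above can be sketched in Go. This is a minimal illustration, not hashicorp/raft’s real FSM interface; the command type and helper names are assumptions, but the shape is the same: the leader encodes a message into []byte and hands it to Apply(), and every node decodes the blob and applies it to its local state.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// command is an application-level message shipped through raft.
// The field names here are illustrative, not part of hashicorp/raft.
type command struct {
	Op    string `json:"op"`
	Key   string `json:"key"`
	Value string `json:"value"`
}

// encode turns a command into the opaque []byte blob handed to raft;
// raft itself never inspects the payload.
func encode(c command) ([]byte, error) {
	return json.Marshal(c)
}

// apply decodes a blob and applies it to local state. It is written to be
// idempotent: replaying the same log entry yields the same state.
func apply(state map[string]string, blob []byte) error {
	var c command
	if err := json.Unmarshal(blob, &c); err != nil {
		return err
	}
	switch c.Op {
	case "set":
		state[c.Key] = c.Value // setting twice is harmless
	case "delete":
		delete(state, c.Key) // deleting twice is harmless
	}
	return nil
}

func main() {
	state := map[string]string{}
	blob, err := encode(command{Op: "set", Key: "throttle", Value: "on"})
	if err != nil {
		panic(err)
	}
	// Replaying the same entry (e.g. after a node restart) leaves state unchanged.
	for i := 0; i < 2; i++ {
		if err := apply(state, blob); err != nil {
			panic(err)
		}
	}
	fmt.Println(state["throttle"]) // prints: on
}
```

Because a restarted node may replay historical log entries, the apply step is deliberately written so that replaying an entry leaves the state unchanged.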

One of my use cases is completely satisfied with the existing implementations of BoltDB and of the filesystem snapshot.

However in another (orchestrator), my app stores its state in a relational backend. To that effect, I’ve re-implemented the log store and snapshot store. I’m using either MySQL or SQLite as backend stores for my app. How does that affect my raft use?

  • My backend RDBMS is the de facto state of my orchestrator app. Anything written to this DB is persisted and durable.
  • When orchestrator applies a raft log/message, it runs some app logic which ends with a write to the backend DB. At that time, the raft log is effectively not required anymore to persist. I care not for the history of logs.
  • Moreover, I care not for snapshotting. To elaborate, I care not for snapshot data. My backend RDBMS is the snapshot data.
  • Since I’m running an RDBMS, I find BoltDB to be wasteful: an additional transactional store on top of the transactional store I already have.
  • Likewise, the filesystem snapshots are yet another form of store.
  • The log store (including the stable store) is easily re-implemented on top of an RDBMS. The log is a classic relational entity.
  • The snapshot store is also implemented on top of the RDBMS; however, I only care for the snapshot metadata (which log entry is covered by a snapshot) and completely discard storing/loading snapshot state or content.
  • With all these in place, I have a single entity that defines:
    • What my data looks like
    • Where my node fares in the group gossip
  • A single RDBMS restore yields a dataset that will catch up with the raft log correctly. However, my restore window is limited by the number of snapshots I store and their frequency.
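The log-truncation idea above can be sketched with a deliberately simplified, in-memory stand-in for the log store (the real hashicorp/raft LogStore interface has more methods and different signatures). In the RDBMS case, the map below becomes a relational table keyed by the log index, deleteRange becomes a DELETE ... WHERE statement, and the snapshot contributes only its metadata, i.e. the covered log index:

```go
package main

import "fmt"

// logStore is a simplified stand-in for a raft log store: entries keyed
// by log index. The same operations map naturally onto a relational
// table whose primary key is the log index.
type logStore struct {
	entries map[uint64][]byte
}

func (s *logStore) store(index uint64, data []byte) {
	s.entries[index] = data
}

// deleteRange mirrors the truncation raft performs after a snapshot:
// entries up to the snapshot's covered index can be discarded, because
// the backend RDBMS state *is* the snapshot data.
func (s *logStore) deleteRange(min, max uint64) {
	for i := min; i <= max; i++ {
		delete(s.entries, i)
	}
}

func main() {
	s := &logStore{entries: map[uint64][]byte{}}
	for i := uint64(1); i <= 10; i++ {
		s.store(i, []byte("application message"))
	}
	snapshotIndex := uint64(7) // snapshot metadata: the covered log index
	s.deleteRange(1, snapshotIndex)
	fmt.Println(len(s.entries)) // prints: 3
}
```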
Forking Golang repositories on GitHub and managing the import path
https://shlomi-noach.github.io/blog/development/forking-golang-repositories-on-github-and-managing-the-import-path
Mon, 23 Nov 2015

Problem: there’s an awesome Golang project on GitHub which you want to fork. You want to develop & collaborate on that fork, but the Go import paths in your source code still reference the original path, breaking everything.

A couple of solutions are offered below. First, though, let’s get some names straight.

A sample case, the problem at hand

There’s an awesome tool on http://github.com/awesome-org/tool. You successfully fork it onto http://github.com/awesome-you/tool.

You want to collaborate on http://github.com/awesome-you/tool; you wish to pull, commit & push. Maybe you want to send pull requests to the origin.

The following is commonly found throughout .go files in the repository:

import (
    "github.com/awesome-org/tool/config"
    "github.com/awesome-org/tool/driver"
    "github.com/awesome-org/tool/net"
    "github.com/awesome-org/tool/util"
)

If you:

go get github.com/awesome-you/tool

golang creates your $GOPATH/src/github.com/awesome-you/tool/, which is awesome. However, as you resolve dependencies via

cd $GOPATH/src/github.com/awesome-you/tool/ ; go get ./...

golang digs into the source code, finds references to github.com/awesome-org/tool/config, github.com/awesome-org/tool/driver etc., and fetches those from http://github.com/awesome-org/tool into $GOPATH/src/github.com/awesome-org/tool/, which is not awesome. You actually have two copies of the code: one from your fork, one from the origin; and your own fork will be largely ignored, as everything mostly points back to the origin.

A bad solution

The dirty, bad solution would be for you to go over the source code and replace “github.com/awesome-org/tool” entries with “github.com/awesome-you/tool”. It is bad for two reasons:

  • You will not be able to further pull changes from upstream
  • You will not be able to pull-request and push your own changes upstream

When I say “You will not be able” I mean “in a reasonable, developer-friendly manner”. The code will be incompatible with upstream and you have effectively detached your code. You will need to keep editing and re-editing those entries anytime you wish to pull/push upstream.

Solution #1: add remote

As described in GitHub and Go: forking, pull requests, and go-getting, follow this procedure:

go get github.com/awesome-org/tool
git remote add awesome-you-fork https://github.com/awesome-you/tool

You’re adding your own repository as a remote. From now on you will need to explicitly:

git pull --rebase awesome-you-fork
git push awesome-you-fork

If you forget to add the “awesome-you-fork” argument, you will be pulling from and pushing to upstream.

Solution #2: cheat “go get”, DIY

The problem began with the go get command, which mapped the URI path onto $GOPATH/src. However, go get implicitly issues a git clone, and we can do the same ourselves. We will dirty our hands just once, and then benefit from an unambiguous environment.

We will now create our git repository in the name of awesome-org but with the contents of awesome-you:

cd $GOPATH
mkdir -p {src,bin,pkg}
mkdir -p src/github.com/awesome-org/
cd src/github.com/awesome-org/
git clone git@github.com:awesome-you/tool.git # OR: git clone https://github.com/awesome-you/tool.git
cd tool/
go get ./...

The mkdir -p {src,bin,pkg} is there just in case you do not have anything set up in your $GOPATH. We then create the repository path under the name of awesome-org, but once inside, clone from awesome-you.

The source code’s import path fits your directory layout now, but as you push/pull you are only speaking to your own awesome-you repository.

zookeepercli: lightweight, powerful, controlled command line client for ZooKeeper
https://shlomi-noach.github.io/blog/linux/zookeepercli-lightweight-powerful-controlled-command-line-client-for-zookeeper
Wed, 17 Sep 2014

I’m happy to announce the availability of zookeepercli: a lightweight, simple, fast and controlled command line client for ZooKeeper.

zookeepercli allows for:

  • Basic CRUD-like operations: create, set, delete, exists, get, ls (aka children).
  • Extended operations: lsr (ls recursively), creater (create recursively)
  • Well formatted and controlled output: supporting either txt or json format
  • Single, no-dependencies binary file, based on a native Go ZooKeeper library by github.com/samuel/go-zookeeper (LICENSE)

I was dissatisfied with existing command line access to ZooKeeper. Uncontrolled and noisy output, as well as a large footprint, were among the reasons. zookeepercli overcomes the above and provides often-required powers.

Usage samples:


$ zookeepercli --servers srv-1,srv-2,srv-3 -c create /demo_only "path placeholder"
$ zookeepercli --servers srv-1,srv-2,srv-3 -c create /demo_only/key1 "value1"
$ zookeepercli --servers srv-1,srv-2,srv-3 -c create /demo_only/key2 "value2"
$ zookeepercli --servers srv-1,srv-2,srv-3 -c create /demo_only/key3 "value3"

$ zookeepercli --servers srv-1,srv-2,srv-3 -c ls /demo_only
key3
key2
key1

# Same as above, JSON format output:
$ zookeepercli --servers srv-1,srv-2,srv-3 --format=json -c ls /demo_only
["key3","key2","key1"]

$ zookeepercli --servers srv-1,srv-2,srv-3 -c delete /demo_only/key1
$ zookeepercli --servers srv-1,srv-2,srv-3 -c delete /demo_only/key2
$ zookeepercli --servers srv-1,srv-2,srv-3 -c delete /demo_only/key3
$ zookeepercli --servers srv-1,srv-2,srv-3 -c delete /demo_only

# Create a path recursively (auto-generate parent directories if not exist):
$ zookeepercli --servers=srv-1,srv-2,srv-3 -c creater "/demo_only/child/key1" "val1"
$ zookeepercli --servers=srv-1,srv-2,srv-3 -c creater "/demo_only/child/key2" "val2"

$ zookeepercli --servers=srv-1,srv-2,srv-3 -c get "/demo_only/child/key1"
val1

# This path was auto generated due to recursive create:
$ zookeepercli --servers=srv-1,srv-2,srv-3 -c get "/demo_only" 
zookeepercli auto-generated

# ls recursively a path and all sub children:
$ zookeepercli --servers=srv-1,srv-2,srv-3 -c lsr "/demo_only" 
child
child/key1
child/key2 

zookeepercli is released as open source by Outbrain under the Apache 2.0 license.


Documentation in SQL: CALL for help()
https://shlomi-noach.github.io/blog/mysql/documentation-in-sql-call-for-help
Wed, 11 Jan 2012

Documentation is an important part of any project. On the projects I maintain I put a lot of effort into documentation; frankly, the majority of time spent on my projects goes into documentation.

The matter of keeping the documentation faithful is a topic of interest. I’d like to outline a few documentation bundling possibilities, and then present the coming new documentation method for common_schema. I’ll talk about any bundling that is NOT man pages.

High level: web docs

This is the initial method of documentation I used for openark kit and mycheckpoint. It’s still valid for mycheckpoint. Documentation is web-based. You need Internet access to read it. It’s in HTML format.

Well, not exactly HTML format: I wrote it in WordPress. Yes, it’s HTML, but there’s a lot of noise around (theme, menus, etc.) which is not strictly part of the documentation.

While this is perhaps the easiest way to go, here are a few drawbacks:

  • You’re bound to some framework (WordPress in this case)
  • Docs are split between a MySQL database (my underlying WordPress storage) & WordPress files (themes, style, header, footer, etc.)
  • Documentation is separate from your code – they’re just not in the same place
  • There is no version control over the documentation.

The result is a single source of documentation, which applies to whatever version is latest. It’s impossible to maintain docs for multiple versions. You must manually synchronize your WordPress updates with code commits (or rather – code release!).

Mid level: version controlled HTML docs

I first saw this approach on Baron’s Aspersa gets a user manual post. I loved it: the documentation is HTML, but stored as part of your project’s code, in same version control.

This means one can browse the documentation (openark kit in this example) exactly as it appears in the baseline. Depending on your project hosting, one may be able to do so per version.

The approach has the great benefit of having the docs tightly coupled with the code in terms of development. Before committing code, one updates documentation for that code, then commits/releases both together.

You’re also not bound to any development framework. You may edit with vim, emacs, gedit, bluefish, eclipse, … any tool of your choice. It’s all down to plain old text files.

Mid level #2: documentation bundling

One thing I started doing with common_schema is to release a doc bundle with the code. So one can download a compressed bundle of all HTML files. That way one is absolutely certain what’s the right documentation for revision 178. There’s no effort about it: the docs are already tightly coupled with code versions. Just compress and distribute.

Low level: documentation coupled with your code

Perl scripts can be written as Perl modules, in which case they are eligible for the perldoc convention. You code your documentation within the script itself, as comments. Perldoc can extract the documentation and present it in a man-like format. The same happens with Python’s pydoc. Baron’s When documentation is code illustrates that approach. Maatkit (now Percona Toolkit) has been using it for years.

This method has the advantage of having the documentation ready right within your shell. You don’t need a browser, nor firewall access. The docs are just there for you in the same environment where you’re executing the code.

SQL Low level: CALL for help()

common_schema is a different type of project. It is merely a schema. There’s no Perl nor Python. One imports the schema into one’s MySQL server.

What’s the low-level approach for this type of code?

For common_schema I use three levels of documentation: the mid-level, where one can browse through the versioned docs, the 2nd mid-level, where one can download bundled documentation, and then a low-level approach: documentation embedded within the code.

MySQL’s documentation is also built into the server: see the help_* tables within the mysql schema. The mysql command line client allows one to access help by supporting the help command, e.g.

mysql> help create table;

The client intercepts this command (it is not a server-side command) and searches through the mysql.help_* docs.

With common_schema, I don’t have control over the client; it’s all on server side. But the code being a schema, what with stored routines and tables, it’s easy enough to set up documentation.

As of the next version of common_schema, and following MySQL’s method, common_schema provides a help table:

DESC help;
+--------------+-------------+------+-----+---------+-------+
| Field        | Type        | Null | Key | Default | Extra |
+--------------+-------------+------+-----+---------+-------+
| topic        | varchar(32) | NO   | PRI | NULL    |       |
| help_message | text        | NO   |     | NULL    |       |
+--------------+-------------+------+-----+---------+-------+

And a help() procedure, so that you can call for help(). The procedure will look for the best matching document based on your search expression:

root@mysql-5.1.51> CALL help('match');
+-------------------------------------------------------------------------------+
| help                                                                          |
+-------------------------------------------------------------------------------+
|                                                                               |
| NAME                                                                          |
|                                                                               |
| match_grantee(): Match an existing account based on user+host.                |
|                                                                               |
| TYPE                                                                          |
|                                                                               |
| Function                                                                      |
|                                                                               |
| DESCRIPTION                                                                   |
|                                                                               |
| MySQL does not provide with identification of logged in accounts. It only     |
| provides with user + host:port combination within processlist. Alas, these do |
| not directly map to accounts, as MySQL lists the host:port from which the     |
| connection is made, but not the (possibly wildcard) user or host.             |
| This function matches a user+host combination against the known accounts,     |
| using the same matching method as the MySQL server, to detect the account     |
| which MySQL identifies as the one matching. It is similar in essence to       |
| CURRENT_USER(), only it works for all sessions, not just for the current      |
| session.                                                                      |
|                                                                               |
| SYNOPSIS                                                                      |
|                                                                               |
|                                                                               |
|                                                                               |
|        match_grantee(connection_user char(16) CHARSET utf8,                   |
|        connection_host char(70) CHARSET utf8)                                 |
|          RETURNS VARCHAR(100) CHARSET utf8                                    |
|                                                                               |
|                                                                               |
| Input:                                                                        |
|                                                                               |
| * connection_user: user login (e.g. as specified by PROCESSLIST)              |
| * connection_host: login host. May optionally specify port number (e.g.       |
|   webhost:12345), which is discarded by the function. This is to support      |
|   immediate input from as specified by PROCESSLIST.                           |
|                                                                               |
|                                                                               |
| EXAMPLES                                                                      |
|                                                                               |
| Find an account matching the given use+host combination:                      |
|                                                                               |
|                                                                               |
|        mysql> SELECT match_grantee('apps', '192.128.0.1:12345') AS            |
|        grantee;                                                               |
|        +------------+                                                         |
|        | grantee    |                                                         |
|        +------------+                                                         |
|        | 'apps'@'%' |                                                         |
|        +------------+                                                         |
|                                                                               |
|                                                                               |
|                                                                               |
| ENVIRONMENT                                                                   |
|                                                                               |
| MySQL 5.1 or newer                                                            |
|                                                                               |
| SEE ALSO                                                                      |
|                                                                               |
| processlist_grantees                                                          |
|                                                                               |
| AUTHOR                                                                        |
|                                                                               |
| Shlomi Noach                                                                  |
|                                                                               |
+-------------------------------------------------------------------------------+

I like HTML for documentation. I think it’s a good format, provided you don’t start doing funny things. Perhaps TROFF is more suitable; certainly more popular on Unix machines. But I already have everything in HTML. So, what do I do?

My decision was to keep documentation in HTML, and use the handy html2text tool to do the job. And it does it pretty well! The sample you see above is an automated translation of HTML to plain text.

I add a few touches of my own: SELECTing long texts is ugly, whether you do it via “;” or “\G”. The help() routine breaks the text on ‘\n’, returning a multi-row result set. The above sample makes for some 60+ rows, nicely formatted, broken out of the original single text appearing in the help table.
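To illustrate the splitting trick, here is the same idea sketched in Go; the function name is made up for illustration, and common_schema of course does this inside a stored routine, not in Go:

```go
package main

import (
	"fmt"
	"strings"
)

// splitToRows mimics what help() does server side: take the single text
// blob stored in the help table and break it on '\n', so the client
// renders it as a nicely formatted multi-row result set.
func splitToRows(doc string) []string {
	return strings.Split(doc, "\n")
}

func main() {
	doc := "NAME\n\nmatch_grantee(): Match an existing account based on user+host."
	// Each element becomes one row in the result set; three rows here.
	for _, row := range splitToRows(doc) {
		fmt.Println("|", row)
	}
}
```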

So now you have an internal help method for common_schema, right where the code is. You don’t have to leave the command line client in order to get help.

Giuseppe offered me the idea for this, even while my own thinking about it was in early stages.

The next version of common_schema will be available in a few weeks. The code is pretty much ready. I just need to work on, ahem…, the documentation.

More MySQL foreach()
https://shlomi-noach.github.io/blog/mysql/more-mysql-foreach
Fri, 02 Dec 2011

In my previous post I’ve shown several generic use cases for foreach(), a new scripting functionality introduced in common_schema.

In this part I present a DBA’s handy syntax for schema and table operations and maintenance.

Confession: while I love INFORMATION_SCHEMA‘s power, I just hate writing queries against it. It’s just so much typing! Just getting the list of tables in a schema makes for this heavy duty query:

SELECT TABLE_NAME FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_SCHEMA='sakila' AND TABLE_TYPE='BASE TABLE';

When a join is involved this really becomes a nightmare. I think it’s cumbersome, and as a result, many do not remember the names and meanings of columns, making for “oh, I need to read the manual all over again just to get that query right”. Anyway, that’s my opinion.

A SHOW TABLES statement is easier to type, but cannot be integrated into a SELECT query (though we have a partial solution for that, too), and besides, when filtering out the views, the SHOW statement becomes almost as cumbersome as the one on INFORMATION_SCHEMA.

Which is why foreach() offers handy shortcuts to common iterations on schemata and tables, as follows:

Use case: iterate all databases

call foreach('schema', 'CREATE TABLE ${schema}.event(event_id INT, msg VARCHAR(128))');

In the above we execute a query on each database. Hmmm, maybe not such a good idea to perform this operation on all databases? Let’s filter them:

Use case: iterate databases by name match

call foreach('schema like wordpress_%', 'ALTER TABLE ${schema}.wp_posts MODIFY COLUMN comment_author VARCHAR(96) NOT NULL');

The above will only iterate my WordPress databases (I have several of these), performing an ALTER on wp_posts for each of those databases.

I don’t have to quote the like expression, but I can, if I wish to.

I can also use a regular expression match:

call foreach('schema ~ /^wordpress_[0-9]+$/', 'ALTER TABLE ${schema}.wp_posts MODIFY COLUMN comment_author VARCHAR(96) NOT NULL');

Use case: iterate tables in a specific schema

Time to upgrade our sakila tables to InnoDB’s compressed format. We use $(), a synonym for foreach().

call $('table in sakila', 'ALTER TABLE ${schema}.${table} ENGINE=InnoDB ROW_FORMAT=COMPRESSED');

The above will iterate on tables in sakila. I say tables, since it will avoid iterating views (there is still no specific syntax for views iteration). This is done on purpose, as my experience shows there is very little in common between tables and views when it comes to maintenance and operations.

Use case: iterate tables by name match

Here’s an interesting scenario: you wish to work on all tables matching some name. The naive approach would be to:

SELECT TABLE_SCHEMA, TABLE_NAME FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_NAME = 'wp_posts' AND TABLE_TYPE = 'BASE TABLE'

Wait! Are you aware this may bring your server down? This query will open all databases at once, opening all .frm files (though thankfully not data files, since we only check for name and type).

Here’s a better approach:

call foreach('table like wp_posts', 'ALTER TABLE ${schema}.${table} ENGINE=InnoDB');

(There’s now FULLTEXT coming to InnoDB, so the above can make sense in the near future!)

The good part is that foreach() will look for matching tables one database at a time. It will iterate the list of databases, then look for matching tables per database, thereby optimizing the query on INFORMATION_SCHEMA.

Here, too, I can use regular expressions:

call $('table ~ /^wp_.*$/', 'ALTER TABLE ${schema}.${table} ENGINE=InnoDB');

Conclusion

This is work in the making, but, as someone who maintains a few production servers, I’ve already put it to work.

I’m hoping the syntax is easy to comprehend. I know that, since I developed it, it must be far more intuitive to me than to others. I’ve tried to stay close to common syntax and concepts from various programming languages.

I would like to get as much feedback as possible. I have further ideas and thoughts on the direction common_schema is taking, but wish to take it in small steps. Your feedback is appreciated!

Test-driven SQL development
https://shlomi-noach.github.io/blog/mysql/test-driven-sql-development
Thu, 20 Oct 2011

I’m having a lot of fun writing common_schema, an SQL project which includes views, tables and stored routines.

As the project grows (and it’s taking some interesting directions, in my opinion) more dependencies are being introduced, and a change to one routine or view may affect many others. This is why I’ve made the development of common_schema test driven.

Now, just how do you test drive an SQL project?

Well, much like the way you test any other project in your favorite programming language. If it’s functions you’re testing, that’s all too familiar: functions get some input and provide some output. Hmmm, they might be changing SQL data during that time. With procedures it’s slightly more complex, since they do not return output directly, but rather produce result sets.

Here’s the testing scheme I use:

  • Tests are divided to families. For example, there is a family of tests for the eval() function.
  • Each test in a family is responsible for checking the simplest, most “atomic” issue. This means many small tests.
  • Each test can have a “pre-test” step, which prepares the ground (for example, create a table and populate it)
  • Likewise, a test can have a “post-test” step, which is typically just cleanup code (since the test is already complete by the time the post step is invoked).
  • Each test is an SQL file: a set of commands to be executed.
  • A test may have an “expected output” file.
  • If no explicit expected output exists, the test is expected to return “1” (just as the most basic JUnit test assumes an “assert true”)
  • A test family may also have pre- and post- steps.
  • Any failure in any step fails the entire process. Failures may include:
    • Failure to prepare the grounds for a test or family of tests
    • Failure in executing the test
    • Mismatch between test’s output and expected result.
    • Failure in executing the post- step (which may indicate an invalid test result not intercepted by the test itself)

An example

The following image presents a single test family: the eval family, testing the eval() routine.

[Image: Test-driven SQL development – sample]

  • In this family, there are two tests.
  • In both tests, we have a pre-test step, and a test.
  • We have no post-test here.
  • Nor do we have an expected-output sample, which means the tests are expected to return “1”.

Implementation

But how are tests conducted? Via mysql, of course. While tests are plain SQL text files, they are executed against a running MySQL server using the mysql client. It is given the test files as input, and its output is directed to a file as well.

This makes it very easy to code the test using a simple shell script. It takes a small measure of file iteration, process invocation, exit code check, and diff execution.

For example, to test eval()‘s 01 test, we first execute mysql with 01/pre.sql as input. Assuming success, we execute mysql again, this time with 01/test.sql. We capture the output of this execution, and compare it with expected-output, or with “1” when no expected-output is specified.
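The harness logic just described can be sketched as a small function. The real setup is a shell script driving the mysql client; here the runner callback is an assumed stand-in for that client, so the sketch is self-contained and runnable:

```go
package main

import (
	"fmt"
	"strings"
)

// runTest sketches one step of the harness: feed test.sql to a runner
// (in the real setup, the mysql client), capture its output, and compare
// it with the expected output, defaulting to "1" when none is given.
func runTest(run func(sql string) (string, error), testSQL, expected string) (bool, error) {
	if expected == "" {
		expected = "1" // like JUnit's implicit "assert true"
	}
	out, err := run(testSQL)
	if err != nil {
		return false, err // failure to execute fails the entire process
	}
	return strings.TrimSpace(out) == strings.TrimSpace(expected), nil
}

func main() {
	// A fake runner standing in for `mysql < 01/test.sql`.
	fake := func(sql string) (string, error) { return "1\n", nil }
	ok, err := runTest(fake, "SELECT some_assertion();", "")
	if err != nil {
		panic(err)
	}
	fmt.Println(ok) // prints: true
}
```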

Tests pass, or no code!

Some 12 years ago, I worked with a lesser-known version control system called aegis. The thing I remember most about aegis is that it had a good test infrastructure. Long before “test-driven development” was coined, or was even commonly practiced, aegis supported tests right into your version control. “Right into”, in the sense that you could not merge your code back to the baseline if it didn’t pass all of the tests.

I work with SVN for common_schema, and I do not know of such an option in SVN. But I also use ant to build this project, and the dependency is clear there: ant dist, my target which creates the distribution files, depends on ant test, the target which runs the tests.

That is, you cannot generate the distribution files when tests fail.
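
In build.xml terms, this is nothing more than a depends attribute (a schematic fragment; the actual target names and test-runner invocation in common_schema's build file may differ):

```xml
<!-- "dist" can only run once "test" has completed successfully -->
<target name="test">
  <!-- invoke the test suite; a failing test fails the build -->
  <exec executable="./run-tests.sh" failonerror="true"/>
</target>

<target name="dist" depends="test">
  <!-- assemble the distribution files here -->
</target>
```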

Further notes

Since I’m now retroactively patching tests onto already existing functionality, calling it test-driven development is an overstatement; nevertheless, the new tests are already proving invaluable as I keep changing and improving existing code. Suddenly dependent functionality no longer works as expected. What fun!

The code for the testing suite is actually much shorter than this blog post.

oak-hook-general-log: your poor man’s Query Analyzer
Wed, 15 Dec 2010 17:46:06 +0000
https://shlomi-noach.github.io/blog/mysql/oak-hook-general-log-your-poor-mans-query-analyzer

The latest release of openark kit introduces oak-hook-general-log, a handy tool which allows for some analysis of executing queries.

Initially I just intended for the tool to be able to dump the general log to standard output, from any machine capable of connecting to MySQL. Quickly enough, I realized the power it brings.

With this tool, one can dump to standard output all queries using temporary tables; or using a specific index; or doing a full index scan; or just follow up on connections; or… For example, the following execution will only log queries which make for a filesort:

oak-hook-general-log --user=root --host=localhost --password=123456 --filter-explain-filesort

The problem with using the standard logs

So you have the general log, which you don’t often enable, since it tends to grow huge within moments. You then have the slow log. The slow log is great, and is among the top tools for MySQL diagnosis.

The slow log allows for log-queries-not-using-indexes, which is yet another nice feature. Not only should you log any query running for over X seconds, but also log any query which does not use an index.

Wait. This logs all queries on single-row tables (no single-row table will use an index), as well as on very small tables (a common 20-row lookup table will most often be scanned). These are OK scans. This makes for some noise in the slow log.

And how about queries which do use an index, but do so poorly? They use an index, but retrieve some 12,500,000 rows, using temporary table & filesort?

What oak-hook-general-log does for you

This tool streams out the general log, and filters out queries based on their role or on their execution plan.

To work at all, it must enable the general log. Moreover, it directs the general log to a log table. Mind that this makes for a performance impact, which is why the tool auto-terminates and restores the original log settings (the default is 1 minute; configurable). It’s really not a tool you should keep running for days. But during the few moments it runs, it will:

  • Routinely rotate the mysql.general_log table so that it doesn’t fill up
  • Examine entries found in the general log
  • Cross reference entries to the PROCESSLIST so as to deduce database context (bug #52554)
  • If required and appropriate, evaluate a query’s execution plan
  • Decide whether to dump each entry based on filtering rules
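
In SQL terms, the setup and rotation amount to roughly the following (a sketch only; the tool itself does this programmatically, with more care around restoring the user’s original settings. The `general_log_old` and `general_log_shadow` table names are arbitrary choices for illustration):

```shell
# Sketch: direct the general log to the mysql.general_log table, rotate
# that table so it doesn't fill up, and finally restore log settings.
MYSQL=${MYSQL:-mysql}

enable_general_log_table() {
  echo "SET GLOBAL log_output = 'TABLE'; SET GLOBAL general_log = 1;" | $MYSQL
}

rotate_general_log() {
  # The atomic-rename trick: swap in an empty copy of the log table.
  # Entries rotated out into general_log_old can then be examined and dropped.
  $MYSQL <<'EOF'
USE mysql;
DROP TABLE IF EXISTS general_log_old;
DROP TABLE IF EXISTS general_log_shadow;
CREATE TABLE general_log_shadow LIKE general_log;
RENAME TABLE general_log TO general_log_old, general_log_shadow TO general_log;
EOF
}

restore_log_settings() {
  # (the tool restores whatever the original settings were; FILE is just an example)
  echo "SET GLOBAL general_log = 0; SET GLOBAL log_output = 'FILE';" | $MYSQL
}
```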

Filtering rules

Filtering rules are passed as command line options. At present, only one filtering rule applies per invocation (if more than one is specified, only one is used, so there is no point in passing several). Some of the rules are:

  • filter-connection: only log connect/quit entries
  • filter-explain-fullscan: only log full table scans
  • filter-explain-temporary: only log queries which create implicit temporary tables
  • filter-explain-rows-exceed: only log queries where more than X number of rows are being accessed on some table (estimated)
  • filter-explain-total-rows-exceed: only log queries where more than X number of rows are accessed on all tables combined (estimated, with possibly incorrect numbers on some queries)
  • filter-explain-key: only log queries using a specific index. This feature somewhat overlaps with Maatkit’s mk-index-usage (read announcement).
  • filter-explain-contains: a general purpose grep on the execution plan. Log queries where the execution plan contains some text.

There are other filters, and I will possibly add more in due time.

Here are a couple of cases I used oak-hook-general-log for:

Use case: temporary tables

I have a server with this alarming chart (courtesy of mycheckpoint) of temporary table creation:


What could possibly create 30 temporary tables per second on average?

The slow log produced nothing helpful, even with log-queries-not-using-indexes enabled. There were a lot of queries not using indexes there, but nothing at these numbers. With:

oak-hook-general-log --filter-explain-temporary

enabled for 1 minute, nothing came out. Weird. Enabled for 5 minutes, I got one entry. It turned out a scheduled script, acting once every 5 minutes, was making a single complicated query involving many nested views, which accounted for some hundreds of temporary tables created. All of them were very small, and query time was very fast. There is no temporary-tables problem with this server; case closed.

Use case: connections

A server had issues with exceptions being thrown on the client side. There was a large number of new connections created per second, although the client was using a connection pool. Suspecting the pool didn’t work well, I issued:

oak-hook-general-log --filter-connect

The pool was working well, all right. No entries for that client were recorded in 1 minute of testing. However, it turned out some old script was flooding the MySQL server with requests, every second. The log showed root@somehost, and sure enough, the script was disabled. The exceptions were due to another reason; it was good to eliminate a suspect.

Some of the tool’s use cases are relatively easy to solve with tail, grep & awk; others are not. I am using it more and more often, and find it makes for significant shortcuts in tracking down queries.

Get it

Download the tool as part of openark kit: access the openark kit project page.

Or get the source code directly.

Feedback is most welcome.

openark-kit (rev. 170): new tools, new functionality
Wed, 15 Dec 2010 06:31:24 +0000
https://shlomi-noach.github.io/blog/mysql/openark-kit-rev-170-new-tools-new-functionality

I’m pleased to announce a new release of the openark kit. There’s a lot of new functionality inside; following is a brief overview.

The openark kit is a set of utilities for MySQL. They solve everyday maintenance tasks which may be complicated or time consuming to do by hand.

It’s been a while since the last announced release. Most of my attention went to mycheckpoint, building new features, writing documentation, etc. However, my own use of the openark kit has only increased in the past few months, and new, useful solutions to common problems have been developed.

I’ve used and improved many tools over this time, but doing the final cut, along with proper documentation, took some time. Anyway, here are the highlights:

New tool: oak-hook-general-log

oak-hook-general-log hooks into a MySQL server and dumps the general log based on filtering rules applying to query role or execution plan. It is possible to only dump connect/disconnect entries, queries which make a full table scan, or use temporary tables, or scan more than X number of rows, or…

I’ll write more on this tool shortly.

New tool: oak-prepare-shutdown

This tool makes for an orderly and faster shutdown by safely stopping replication and flushing InnoDB pages to disk prior to shutting down (keeping the server available for connections even while flushing dirty pages to disk). A typical use case would be:

oak-prepare-shutdown --user=root --ask-pass --socket=/tmp/mysql.sock && /etc/init.d/mysql stop
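
The essence of the preparation can be sketched as follows (a rough sketch only; the real tool verifies each step and polls flushing progress with proper error handling. The function name and the 100-page "low enough" threshold are illustrative):

```shell
# Sketch: stop replication, then push InnoDB into flushing its dirty
# pages while the server is still up and serving connections.
MYSQL=${MYSQL:-mysql --skip-column-names}

prepare_shutdown() {
  # Safely stop replication
  echo "STOP SLAVE;" | $MYSQL || return 1

  # Make InnoDB flush dirty pages aggressively
  echo "SET GLOBAL innodb_max_dirty_pages_pct = 0;" | $MYSQL || return 1

  # Wait until (nearly) all dirty pages have been flushed to disk
  while :; do
    dirty=$(echo "SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages_dirty';" | $MYSQL | awk '{print $2}')
    [ "${dirty:-0}" -le 100 ] && break   # 100 pages: an arbitrary threshold
    sleep 1
  done
}
```

Only once the function returns is /etc/init.d/mysql stop invoked, which is exactly what the && in the example above expresses.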

New tool: oak-repeat-query

oak-repeat-query repeats executing a given query until some condition holds. The condition can be:

  • Number of given iterations has been reached
  • Given time has elapsed
  • No rows have been affected by query

The tool comes in handy for cleanup jobs, warming up caches, etc.
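
The loop can be sketched like so (a sketch; the actual tool's options differ. Appending SELECT ROW_COUNT() is one way to read the affected-row count through the client, since ROW_COUNT() reports the rows affected by the preceding statement):

```shell
# Sketch: repeat a query until no rows are affected, a given number of
# iterations is reached, or the allotted time has elapsed.
MYSQL=${MYSQL:-mysql --skip-column-names}

repeat_query() {
  local query="$1" max_iterations="$2" max_seconds="$3"
  local iteration=0 affected
  local deadline=$(( $(date +%s) + max_seconds ))

  while :; do
    affected=$(echo "$query; SELECT ROW_COUNT();" | $MYSQL) || return 1
    iteration=$(( iteration + 1 ))

    [ "$affected" -le 0 ] && break                    # no rows affected by query
    [ "$iteration" -ge "$max_iterations" ] && break   # iterations reached
    [ "$(date +%s)" -ge "$deadline" ] && break        # time elapsed
  done
  echo "$iteration"
}
```

A purge job then reads naturally, e.g. `repeat_query "DELETE FROM my_log WHERE ts < NOW() - INTERVAL 30 DAY LIMIT 1000" 1000 60` (table and column names hypothetical).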

New tool: oak-get-slave-lag

This simple tool just returns the number of seconds a slave is behind its master. But it also returns an appropriate exit code, based on a given threshold: 0 when lag is good, 1 (error exit code) when lag is too great or the slave fails to replicate.

This tool has been used by 3rd party applications, such as a load balancer, to determine whether a slave should be accessed.
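
The exit-code contract can be sketched as follows (a sketch; the actual tool's interface differs. Seconds_Behind_Master is NULL when the slave fails to replicate, which is why NULL maps to the error exit code):

```shell
# Sketch: print seconds behind master; exit 0 when lag is within the
# threshold, nonzero when lag is too great or replication is broken.
MYSQL=${MYSQL:-mysql --skip-column-names}

check_slave_lag() {
  local threshold="$1" lag
  lag=$(echo 'SHOW SLAVE STATUS\G' | $MYSQL | awk '/Seconds_Behind_Master/ {print $2}')
  echo "${lag:-NULL}"
  # Succeed only on a numeric lag value within the threshold
  [ -n "$lag" ] && [ "$lag" != "NULL" ] && [ "$lag" -le "$threshold" ]
}
```

A load balancer health check can then simply run `check_slave_lag 10` and act on the exit code.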

Updated tool: oak-chunk-update

This extremely useful utility breaks down very long-running queries into smaller chunks. These could be queries which affect a huge number of rows, or queries which cannot utilize an index.

Updates to the tool include limiting the range of rows the tool scans, by specifying start and stop positions (either as constant values or via SELECT queries). Also added is auto-termination when no rows are found to be affected. Last, it is possible to override the INFORMATION_SCHEMA lookup by explicitly specifying the chunking key.

This tool works great for your daily/weekly/monthly batch jobs; in creating DWH tables; populating new columns; purging old entries; clearing data based on non-indexed values; generating summary tables; and more.
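
Schematically, the chunking idea looks like this (a sketch over an integer chunking key; the actual tool's interface, zero-rows auto-termination and start/stop options are richer, and table/column names here are placeholders):

```shell
# Sketch: break one huge UPDATE into many small ones, walking an integer
# key in fixed-size chunks so each statement stays short and light.
MYSQL=${MYSQL:-mysql --skip-column-names}

chunk_update() {
  local table="$1" key="$2" chunk_size="$3" set_clause="$4"
  local range minval maxval start

  # Establish the key range to walk
  range=$(echo "SELECT MIN($key), MAX($key) FROM $table;" | $MYSQL)
  minval=$(echo "$range" | awk '{print $1}')
  maxval=$(echo "$range" | awk '{print $2}')
  [ -n "$minval" ] && [ "$minval" != "NULL" ] || return 0   # empty table

  start=$minval
  while [ "$start" -le "$maxval" ]; do
    echo "UPDATE $table SET $set_clause WHERE $key >= $start AND $key < $(( start + chunk_size ));" | $MYSQL
    start=$(( start + chunk_size ))
  done
}
```

For example, `chunk_update my_table id 1000 "processed = 1"` issues one short UPDATE per 1000-wide id range instead of a single massive statement.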

Frozen tool: oak-apply-ri

I haven’t been using this tool for a while. The main work done by this tool can be done with oak-chunk-update. There are some additional safety checks oak-apply-ri provides; I’m considering whether they justify the tool’s existence.

Frozen tool: oak-online-alter-table

With the appearance of Facebook’s Online Schema Change (OSC) tool, which derives from oak-online-alter-table, I’m not sure I will continue developing the tool. I intend to wait for general feedback on OSC before making a decision.

Documentation

Documentation is now part of openark kit‘s SVN repository.

Download

The openark kit project is currently hosted by Google Code. Downloads are available at the Google Code openark kit project page.

Downloads are available in the following packaging formats:

  • .deb package, to be installed on debian, ubuntu and otherwise debian based distributions.
  • .rpm package, architecture free (noarch), for RPM supporting Linux distributions such as RedHat, Fedora, CentOS etc.
  • .tar.gz using python’s distutils installer.
  • source, directly retrieved from SVN or from above python package.
  • Some distribution specific RPM packages, courtesy Lenz Grimmer.

Feedback

Your feedback is welcome! I may not always respond promptly, and I confess that some bugs were left open for longer than I would have liked. I strive for good code quality, and bug reporting is one major factor you can control.

Thoughts and ideas for Online Schema Change
Thu, 07 Oct 2010 08:29:10 +0000
https://shlomi-noach.github.io/blog/mysql/thoughts-and-ideas-for-online-schema-change

Here are a few thoughts on the current status and further possibilities for Facebook’s Online Schema Change (OSC) tool. I’ve had these thoughts for months now, pondering over improving oak-online-alter-table, but haven’t gotten around to implementing them, or even writing them down. Better late than never.

The tool has some limitations. Some cannot be lifted; some can. Quoting from the announcement and looking at the code, I add a few comments. I conclude with a general opinion on the tool’s abilities.

“The original table must have PK. Otherwise an error is returned.”

This restriction could be lifted: it’s enough that the table has a UNIQUE KEY. My original oak-online-alter-table handled that particular case. As far as I see from their code, the Facebook code would work just as well with any unique key.

However, this restriction is of no real interest. As we’re mostly interested in InnoDB tables, and since any InnoDB table should have a PRIMARY KEY, we shouldn’t care too much.

“No foreign keys should exist. Otherwise an error is returned.”

Tricky stuff. With oak-online-alter-table, changes to the original table were immediately reflected in the ghost table. With InnoDB tables, that meant within the same transaction. And although I never got around to updating the text and code, there shouldn’t be a reason not to support child-side foreign keys (the child side being the table on which the FK constraint is defined).

The Facebook patch works differently: it captures changes and writes them to a delta table, to be later (asynchronously) analyzed and replayed on the ghost table.

So in the Facebook code, some cases will lead to undesired behavior. Consider two tables, country and city, with city holding a RESTRICT/NO ACTION foreign key on country‘s id. Now consider the scenario:

  1. Rows from city are DELETEd, where the country Id is Spain’s.
    • city‘s ghost table is still unaffected, Spain’s cities are still there.
    • A change is written to the delta table to mark these rows for deletion.
  2. A DELETE is issued on country‘s Spain record.
    • The DELETE should work, from the user’s perspective
    • But it will fail: city’s ghost table has not received the changes yet. There are still matching rows, and the NO ACTION constraint will fail the DELETE statement.

Now, this does not lead to corruption, just to seemingly unreasonable behavior on the database’s part. This behavior is probably undesired. A NO ACTION constraint won’t do.

However, with CASCADE or SET NULL options, there is less of an issue: operations on the parent table (e.g. country) cannot fail. We must make sure operations on the ghost table make it consistent with the original table (e.g. city).

Consider the following scenario:

  1. A new country is created, called “Sleepyland”. An INSERT is made to country.
    • Both city and city‘s ghost are immediately aware of it.
  2. A new town is created and INSERTed to city. The town is called “Naphaven”.
    • The change takes time to propagate to city‘s ghost table.
  3. Meanwhile, we realized we made a mistake. We’ve been had. There’s no such city nor country.
    1. We DELETE “Naphaven” from city.
    2. We DELETE “Sleepyland” from country.
    • Note that city‘s ghost table still hasn’t caught up with the changes.
  4. Eventually, the INSERT statement for “Naphaven” reaches city‘s ghost table.
    • What should happen now? The INSERT cannot succeed.
    • Will this fail the entire process?

Looking at the PHP code, I see that changes written to the delta table are blindly replayed on the ghost table.

Since the process is asynchronous, this should not be the case. We can solve the above if we use INSERT IGNORE instead of INSERT: the statement will fail without failing anything else. The row cannot exist, precisely because the original row does not exist anymore.

Unlike replication corruption, this does not lead to accumulating mistakes. The replay is static, somewhat like the binary log format: changes are just written, regardless of existing data.

I have given this considerable thought, and I can’t say I’ve covered all the possible scenarios. However, I believe that with proper use of INSERT IGNORE and REPLACE INTO (two statements I relied on heavily in oak-online-alter-table), correctness can be achieved.
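
The suggested rule boils down to a simple mapping from the captured change type to the statement used in the replay (a schematic sketch; OSC's actual delta-table format and replay code are different):

```shell
# Sketch: choose a replay statement verb that cannot fail the process
# when the replayed change is already out of date.
replay_verb() {
  case "$1" in
    insert) echo "INSERT IGNORE" ;;  # the original row may have since been deleted
    update) echo "REPLACE INTO" ;;   # full row image: overwrite, or recreate if missing
    delete) echo "DELETE" ;;         # deleting an already-missing row affects 0 rows, no error
    *) return 1 ;;
  esac
}
```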

There’s the small pain of re-generating the foreign key definition on the “ghost” table (CREATE TABLE LIKE … does not copy FK definitions). And since foreign key names are unique, a new name must be picked. Not pretty, but perfectly doable.

“No AFTER_{INSERT/UPDATE/DELETE} triggers must exist.”

It would be nicer if MySQL had an ALTER TRIGGER statement. There is no such statement. If there were such an atomic statement, then we would be able to rewrite the trigger, so as to add our own code to the end of the trigger’s code. Yuck. It would be even nicer if we were allowed multiple triggers for the same event.

So, we are left with DROP and CREATE triggers. Alas, this makes for a short period during which the trigger does not exist. Bad. The easy solution would be to lock the table for WRITE, but apparently you can’t DROP the trigger (*) while the table is locked. Sigh.

(*) This happened to me, and apparently to Facebook too. With the latest 5.1 version (5.1.51) it actually works; with 5.0 it didn’t use to. This needs more checking.

Use of INFORMATION_SCHEMA

As with oak-online-alter-table, OSC checks for triggers, indexes and columns by searching the INFORMATION_SCHEMA tables. This makes for nice SQL for getting the exact listing and types of PRIMARY KEY columns, whether or not AFTER triggers exist, and so on.

I’ve always considered this to be the weak part of openark-kit, that it relies on INFORMATION_SCHEMA so much. It’s easier, it’s cleaner, it’s even more correct to work that way; but it just takes too many locks. I think Baron Schwartz (and now Daniel Nichter) did amazing work on analyzing table schemata by parsing the SHOW CREATE TABLE and other SHOW commands regex-wise in Maatkit. It’s crazy work! Had I written openark-kit in Perl, I would have just imported their code. But I’m too lazy busy to do the conversion from Perl to Python and rewrite that code, what with all the debugging.

OSC is written in PHP; again, much conversion work. I think performance-wise this is an important step to make.

A word for the critics

Finally, a word for the critics. I’ve read some Facebook/MySQL bashing comments and wish to relate.

In his interview with The Register, Mark Callaghan gave the example that “Open Schema Change lets the company update indexes without user downtime, according to Callaghan”.

PostgreSQL was mentioned for being able to add index with only read locks taken, or being able to do the work with no locks using CREATE INDEX CONCURRENTLY. I wish MySQL had that feature! Yes, MySQL has a lot to improve upon, and the latest PostgreSQL 9.0 brings valuable new features. (Did I make it clear I have no intention of bashing PostgreSQL? If not, please re-read this paragraph until convinced).

Some of the bashing related to the notion that MySQL is so poor that Facebook had to use an even poorer mechanism to work out ALTER TABLE.

Well, allow me to add a few words: CREATE INDEX is far from the only thing you can achieve with OSC (although it may be Facebook’s major concern). You should be able to:

  • Add columns
  • Drop columns
  • Convert character sets
  • Modify column types
  • Add partitioning
  • Reorganize partitioning
  • Compress the table
  • Otherwise changing table format
  • Heck, you could even modify the storage engine! (to another transactional engine)

These are giant steps. How easy would it be to build these into the database itself? Yet it only takes a few weeks’ time to work out a working solution with reasonable limitations, using just the resources the MySQL server provides you with. The MySQL@Facebook team should be given credit for that.

Table refactoring & application version upgrades, Part II
Thu, 12 Aug 2010 03:24:06 +0000
https://shlomi-noach.github.io/blog/mysql/table-refactoring-application-version-upgrades-part-ii

Continuing Table refactoring & application version upgrades, Part I, we now discuss code & database upgrades which require DROP operations. As before, we break the upgrade process apart into sequential steps, each involving either the application or the database, but not both.

As I’ll show, DROP operations are significantly simpler than creation operations. Interestingly, it’s the same as in life.

DROP COLUMN

A column turns out to be redundant and unused. Before it is dropped from the database, we must ensure no one is using it anymore. The steps are:

  1. App: V1 -> V2. Remove all references to column; make sure no queries use said column.
  2. DB: V1 -> V2 (possibly failover from M1 to M2), change is DROP COLUMN.

DROP INDEX

A possibly simpler case here. Why would you drop an index? Is it because you found out you never use it anymore? Then all you have to do is just drop it.

Or perhaps you don’t need the functionality the index supports anymore? Then first drop the functionality:

  1. (optional) App: V1 -> V2. Discard using functionality which relies on index.
  2. DB: V1 -> V2 (possibly failover from M1 to M2), change is DROP INDEX. Check out InnoDB Plugin here.

DROP UNIQUE INDEX

When using Master-Slave failover for table refactoring, we’re now removing a constraint from the slave. Since the master is more constrained than the slave, there is no problem here. It’s mostly the same as with a normal DROP INDEX, with a minor addition:

  1. (optional) App: V1 -> V2. Discard using functionality which relies on index.
  2. DB: V1 -> V2 (possibly failover from M1 to M2), change is DROP INDEX.
  3. (optional) App: V2 -> V3. Enable functionality that inserts duplicates.

DROP FOREIGN KEY

Again, we are removing a constraint.

  1. DB: V1 -> V2 (possibly failover from M1 to M2), change is DROP FOREIGN KEY.
  2. (optional) App: V2 -> V3. Enable functionality that conflicts with the removed constraint. I mean, if you really know what you are doing.

DROP TABLE

The very simple steps are:

  1. App: V1 -> V2. Make sure no reference to table is made.
  2. DB: V1 -> V2. Issue a DROP TABLE.

With ext3, dropping a large table is no less than a nightmare: not only does the action take a long time, it also locks down the table cache, which very quickly leads to dozens of hanging queries. xfs is a good alternative.

Conclusion

We looked at single-table operations, coupled with application upgrades. By carefully breaking down the process, multiple changes can be addressed with ease and safety. Not all operations are completely safe when used with replication failover, but they are mostly safe if you have some trust in your code.
