Introducing Propagator: multi-everything deployment made easy

I’m happy to release Outbrain’s Propagator as open source. Propagator is a schema & data deployment tool which makes it easy to deploy, review, audit & fix deployments to your database servers.

What does multi-everything mean? It is:

  • Multi-server: push your schema & data changes to multiple instances in parallel
  • Multi-role: different servers have different schemas
  • Multi-environment: recognizes the differences between development, QA, build & production servers
  • Multi-technology: supports MySQL, Hive (Cassandra on the TODO list)
  • Multi-user: allows users authenticated and audited access
  • Multi-planetary: TODO

With dozens of database servers in our company (and these are master database servers), from development machines to testing machines, through build machines to production servers, and with a growing team of over 70 engineers, we faced the growing problem of controlling our database schema evolution. Controlling the creation of tables, columns, keys and foreign keys, and the creation of data that must be consistent across all servers, became an infeasible task. Some changes were lost; some servers were forgotten along the way; and inconsistencies blocked our development & deployments again and again.

We reviewed schema-versioning tools (e.g. FlywayDB and Liquibase), only to conclude they solve just a fraction of the problem. We looked at some GUI tools that promise to deliver the solution, but frankly, any Windows desktop GUI application is by definition the wrong tool for the job – and not (only) because of the “Windows” part.

Not all deployments are the same. Not all servers are the same. You don’t ALTER a big table on production. You may be using different schema names on different servers. You may have multiple schemas on a single server with identical structure. You may wish to only deploy to some development servers, possibly to a test server, but not to all, and yet be able to pick up where you left off a few days later to complete your deployment. Some deployments fail, for various reasons (e.g. John created that table manually on this particular test server, so obviously you can’t CREATE it again), and you want to be able to skip a failed deployment, mark it as “OK”, add some comments, or hint that you need assistance from a DBA. You want to be able to quickly add new servers to your deployment group.

Above all, you want every single deployment to be fully audit-able. You want to know exactly who did what and when. If their deployment failed, you want to know that. You want to know why it failed. You want to be able to pick up and try it again, after your DBA found the problem. You want to be able to review yesterday’s deployments, and be able to contact Jane and say “hey, I see you hit a problem here; I know what the problem is; you should do this and that, then please try to deploy again”.

There’s so much more, but I’ll stop telling you what you want to have, since there’s a good manual available.

Speaking at Percona Live: common_schema, MySQL DevOps

In less than a month I’ll be giving two talks at Percona Live: one on common_schema, and one on MySQL DevOps at Outbrain.

If you are still unfamiliar with common_schema, this will make for a good introduction. I’ll give you multiple reasons why you would want to use it, and show how it can come to immediate use at your company. I do mean immediate: at previous common_schema presentations I got feedback emails from attendees within the same or the next day, letting me know how common_schema solved a persistent problem of theirs or exposed a previously unknown status.

I’ll review some useful views & routines, and discuss the ease and power of QueryScript. common_schema is a Swiss Army knife of solutions, all from within your MySQL server.
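
To give a taste of QueryScript (a minimal sketch based on the common_schema documentation; the schema and table names are made up, and exact syntax may differ between versions), here is a script that loops over a range of shards and alters each one, executed via the run() routine:

    call common_schema.run("
      -- QueryScript: iterate over shard numbers 1 through 4
      foreach($shard: 1:4)
      {
        -- $shard expands inside the statement; webdata.hits_N are hypothetical tables
        ALTER TABLE webdata.hits_$shard ENGINE=InnoDB;
      }
    ");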

I use common_schema in production on a regular basis, and it has been the hero of the day on multiple occasions. I’ll present a couple of such cases.

Slides: common_schema 2.2: DBA's framework for MySQL (April 2014)

This is a technical talk touching on some cultural issues.

At Outbrain, where I work, we have two blessings: a large group of engineers and a large dataset. We on the infrastructure team, together with the ops team, are responsible for the availability of the data. What we really like is technology that lets the owners of a problem recognize it and take care of it. We want ops guys to do ops, and engineers to do engineering. And we want them to be able to talk to each other and understand each other.

What tools can you use to increase visibility? To allow sharing of data between the teams? I’ll share some tools and techniques that allow us to automate deployments, detect a malfunctioning/abusing service, deploy schema changes across dozens of hosts, control data retention, monitor connections, and more.

We like open source. The tools discussed are mostly open source, or open sourced by Outbrain.

I’ll explain why these tools matter, and how they serve the purpose of removing friction between teams, allowing for quick analysis of problems and overall visibility on all things that happen.

Slides: MySQL DevOps at Outbrain

Do come by!

Percona Live – call for “Hall of Shame” talks

We’ve got some spare time on Percona Live during the lightning talks session, and are spontaneously calling for “Hall of Shame” submissions.

What is this about?

We just had a wonderful Reversim Summit a couple of weeks back, where we held the “Hall of Shame” session. We are used to hearing talks about success stories and great new technologies. Well, this session is your chance to come up and say: “I messed up, and I’m proud of it!”

You will have 3-4 minutes to tell us about how you once accidentally dropped your database; corrupted your data; brought your company’s service down. The greater the damage, the greater the appeal! But we’re looking for the funny edge – not for a tragedy. There are no slides. Just a “Hall of Shame” screen behind you.

The response we got at the Reversim Summit? It was amazing. The audience was literally in tears; there were such hilarious stories that we could hardly keep up. People were spontaneously offering their stories and the organizers had to hold them back.

And yet, you will be telling us about your mess-up – so please make sure you feel OK about this. For what it’s worth, I will contribute my own shameful, shameful story.

So, this is new & experimental for the Percona Live conference, and we don’t have many slots. If no one submits – that’s OK. If too many submit, we’ll have to cut most. As conferences go, we may end up with a last moment open timeslot, so if you’re spontaneous that could be your chance.

Ready to submit?

Please send an email to mysql.hallofshame@gmail.com with a brief description of what you want to share. I’ll be reviewing these submissions and will either approve, reject, or put you on a waiting list. I assume this will go by First Come, First Served. The deadline for submissions is Friday, Mar 14th.

mycheckpoint, discontinued

Time to admit to myself: mycheckpoint has to be discontinued.

I started mycheckpoint back in 2009, as a free & open source lightweight monitoring tool for MySQL. Over some years it evolved and became an actual (lightweight) monitoring solution, used by many. It has a unique and original design, which, alas, is also its bane.

mycheckpoint uses the relational model & SQL to store and query monitored metrics. This leads to quite a sophisticated service, which can make practically anything visible to the user. The raw data is just numbers, but with some SQL-fu one can generate charts out of it (interactive ones as well), human-readable reports and full-blown email messages. It is still the only common solution I’m aware of that keeps track of variable changes and provides a clear “what changed, when, from value & to value”. I caught many deployment bugs by just observing this. It’s a single file that provides a full-blown HTTP service, alerting, mail notifications, multi-database monitoring, custom monitoring queries, query execution time monitoring, OS metrics, …
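
To illustrate the idea (this is just a hypothetical sketch; the table and column names below are illustrative and not mycheckpoint’s actual schema), spotting variable changes is essentially a self-join of a samples table against its previous sample:

    -- Hypothetical samples table: one row per (sample, variable).
    -- Report what changed, when, from which value to which value.
    SELECT
      cur.sample_time,
      cur.variable_name,
      prev.variable_value AS from_value,
      cur.variable_value  AS to_value
    FROM samples AS cur
    JOIN samples AS prev
      ON  prev.variable_name = cur.variable_name
      AND prev.sample_id     = cur.sample_id - 1
    WHERE cur.variable_value <> prev.variable_value
    ORDER BY cur.sample_time;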

While developing mycheckpoint I learned a lot about MySQL status & configuration, complex SQL queries, Python, Linux, packaging and more. I got a lot of feedback from users, as I still do (thank you!). I didn’t always manage to fix all bugs or answer all questions.

The design of mycheckpoint does not meet today’s reality. Heck, today there are more counters & variables than a table can possibly have columns. The schema-per-monitored-instance design makes for simplicity, but does not fare well with dozens or hundreds of servers to monitor. There is no cross-instance aggregation or visualization of data. The per-10-minute aggregation is too coarse. There isn’t a test suite.

Some of the above issues can be fixed, and if you like, the source code is still freely available. I’ll even migrate the entire SVN to GitHub at some stage. But I believe the current state is only good for small-scale deployments; not something you would consider scaling up with.

For me, there’s nothing more motivating in code development than knowing the code will go public. Making the code look as good as it can, as easily deployable as it can be, with good documentation, takes a lot of effort – but it is very satisfying. Open Source FTW!!!1

 

The “once and for all” SHOW SLAVE STATUS log files & positions explained

True, GTID is upon us whether via MySQL 5.6 or Tungsten Replicator (and wasn’t it in Google Patches since 2009?).

But some of us are still using standard replication with MySQL 5.5, and the “what’s with all these binary log files and positions?” question keeps erupting. The output of SHOW SLAVE STATUS confuses people new to it. It confuses me time and again.

So here’s a semi-visual guide to interpreting SHOW SLAVE STATUS.

About binary logs and relay logs

A master writes binary logs. These are typically and conventionally called mysql-bin.##### or mysqld-bin.##### (replace ##### with digits).

A slave connects to its master, and reads entries from the master’s binary logs. The slave writes those entries into its own relay logs. These are typically and conventionally called mysql-relay.##### or mysqld-relay.##### (replace ##### with digits).
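
For example, on the master you can list the binary logs like this (the file names and sizes below are, of course, only illustrative):

    mysql> SHOW BINARY LOGS;
    +------------------+-----------+
    | Log_name         | File_size |
    +------------------+-----------+
    | mysql-bin.000136 | 104857827 |
    | mysql-bin.000137 | 104858106 |
    | mysql-bin.000138 |  73400320 |
    +------------------+-----------+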

There is nothing at all that connects the name or number of a slave’s relay log with the master’s binary log. There is nothing at all that connects the position within the relay log with the position within the master’s binary log. Files are flushed/rotated; have different size configurations; are re-created. However, the slave does keep track of the current relay-log entry: it knows what the matching entry is in the master’s binary logs. This is an important piece of information.

While the slave fetches entries and writes them into the relay log (via the IO_THREAD), it also reads the relay log to replay those entries (via the SQL_THREAD).

And so at each point in time we are interested in the following “coordinates”:

  • What are we fetching from the master? Which file are we fetching and from which position?
  • Where are we writing this to? (This is implicitly the latest relay log file and its size)
  • What’s the position of the currently executing slave query, in relay-log coordinates? As the slave lags, these coordinates are behind (smaller than) the written-to position.
  • What’s the position of the currently executing slave query, in master binary-log coordinates? This information really tells us how far apart we are from the master. (See the annotated sample below.)
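
Tying these coordinates back to actual SHOW SLAVE STATUS fields, here is a trimmed, annotated sample (file names and positions are made up for illustration):

    mysql> SHOW SLAVE STATUS \G
    ...
              Master_Log_File: mysql-bin.000289
          Read_Master_Log_Pos: 104858214
               Relay_Log_File: mysqld-relay.000412
                Relay_Log_Pos: 7321
        Relay_Master_Log_File: mysql-bin.000287
          Exec_Master_Log_Pos: 53211002
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
    ...

    Master_Log_File / Read_Master_Log_Pos:       what the IO_THREAD is fetching, in master binary-log coordinates
    Relay_Log_File / Relay_Log_Pos:              what the SQL_THREAD is executing, in slave relay-log coordinates
    Relay_Master_Log_File / Exec_Master_Log_Pos: what the SQL_THREAD is executing, in master binary-log coordinates

The one coordinate not listed explicitly is where the IO_THREAD writes to: as noted above, that is implicitly the latest relay log file and its current size.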