Speaking at Percona Live: common_schema, MySQL DevOps

In less than a month I’ll be giving these two talks at Percona Live:

If you are still unfamiliar with common_schema, this will make for a good introduction. I’ll give you multiple reasons why you would want to use it, and how it would come to immediate use at your company. I do mean immediate: at previous common_schema presentations I received feedback emails from attendees within the same or next day, letting me know how common_schema solved a persistent problem of theirs or how it exposed a previously unknown status.

I’ll review some useful views & routines, and discuss the ease and power of QueryScript. common_schema is a Swiss Army knife of solutions, all from within your MySQL server.

I am using common_schema in production on a regular basis, and it has happened to be the hero of the day on multiple occasions. I’ll present a couple of such cases.

common_schema 2.2: DBA's framework for MySQL (April 2014)

This is a technical talk touching on some cultural issues.

At Outbrain, where I work, we have two blessings: a large group of engineers and a large dataset. We in the infrastructure team, together with the ops team, are responsible for the availability of the data. What we really like is technology that lets the owners of a problem recognize it and take care of it. We want ops guys to do ops, and engineers to do engineering. And we want them to be able to talk to each other and understand each other.

What tools can you use to increase visibility? To allow sharing of data between the teams? I’ll share some tools and techniques that allow us to automate deployments, detect a malfunctioning/abusive service, deploy schema changes across dozens of hosts, control data retention, monitor connections, and more.

We like open source. The tools discussed are mostly open source, or open sourced by Outbrain.

I’ll explain why these tools matter, and how they serve the purpose of removing friction between teams, allowing for quick analysis of problems and overall visibility on all things that happen.

MySQL DevOps at Outbrain

Do come by!

Percona Live – call for “Hall of Shame” talks

We’ve got some spare time on Percona Live during the lightning talks session, and are spontaneously calling for “Hall of Shame” submissions.

What is this about?

We just had a wonderful Reversim Summit a couple of weeks back, where we held the “Hall of Shame” session. We are used to hearing talks about success stories and great new technologies. Well, this session is your chance to come up and say: “I messed up, and I’m proud of it!”

You will have 3-4 minutes to tell us about how you once accidentally dropped your database; corrupted your data; brought your company’s service down. The greater the damage, the greater the appeal! But we’re looking for the funny edge – not for a tragedy. There are no slides. Just a “Hall of Shame” screen behind you.

The response we got at Reversim Summit? It was amazing. The audience was literally in tears; there were such hilarious stories that we could hardly keep up. People were spontaneously offering their stories and the organizers had to hold them back.

And yet, you will be telling about your own mess-up – so please make sure you feel OK about this. For what it’s worth, I will contribute my own shameful, shameful story.

So, this is new & experimental for the Percona Live conference, and we don’t have many slots. If no one submits – that’s OK. If too many submit, we’ll have to cut most out. As conferences go, we may end up with a last-moment open timeslot, so if you’re spontaneous, that could be your chance.

Ready to submit?

Please send an email to mysql.hallofshame@gmail.com with a brief description of what you want to share. I’ll be reviewing these submissions and will either approve, reject, or put you on a waiting list. I assume this will go First Come, First Served. The deadline for submissions is Friday, Mar 14th.

mycheckpoint, discontinued

Time to admit to myself: mycheckpoint has to be discontinued.

I started mycheckpoint back in 2009 as a free & open source lightweight monitoring tool for MySQL. Over the years it evolved and became an actual (lightweight) monitoring solution, used by many. It has a unique and original design, which, alas, is also its bane.

mycheckpoint uses the relational model & SQL to store and query monitored metrics. This makes for quite a sophisticated service, which can make practically anything visible to the user. The raw data is just numbers, but with some SQL-fu one can generate charts out of it (interactive ones as well), human readable reports and full blown email messages. It is still the only common solution I’m aware of that keeps track of variable changes and provides a clear “what changed, when, from value & to value” report. I caught many deployment bugs by just observing this. It’s a single file that provides a full blown HTTP service, alerting, mail notifications, multi-database monitoring, custom monitoring queries, query execution time monitoring, OS metrics, …
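
To illustrate the relational approach (using a hypothetical, simplified layout with one row per sample per variable; not mycheckpoint’s actual schema), a self-join over consecutive samples yields exactly that what/when/from/to report:

-- Hypothetical layout, for illustration only: one row per (sample, variable).
-- Joining consecutive samples reports what changed, when, and from/to which value.
SELECT
    cur.variable_name,
    cur.sample_time AS changed_at,
    prev.variable_value AS from_value,
    cur.variable_value AS to_value
  FROM
    status_samples AS cur
    JOIN status_samples AS prev
      ON prev.sample_id = cur.sample_id - 1
      AND prev.variable_name = cur.variable_name
  WHERE
    cur.variable_value != prev.variable_value
  ORDER BY cur.sample_time;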

While developing mycheckpoint I learned a lot about MySQL status & configuration, complex SQL queries, Python, Linux, packaging and more. I got a lot of feedback from users, as I still do (thank you!). I didn’t always manage to fix all bugs or answer all questions.

The design of mycheckpoint does not meet today’s reality. Heck, today there are more counters & variables than possible table columns. The schema-per-monitored-instance design makes for simplicity, but does not fare well with dozens or hundreds of servers to monitor. There is no cross-instance aggregation or visualization of data. The per-10-minute aggregation is too coarse. There isn’t a test suite.

Some of the above issues can be fixed, and if you like, the source code is still freely available. I’ll even migrate the entire SVN repository to GitHub at some stage. But I believe the current state might only be good for small scale deployments; not something you would consider scaling up with.

For me, there’s nothing more motivating in code development than knowing the code will go public. Making the code look the best it can, be as easily deployable as it can, with good documentation, takes a lot of effort – but it is very satisfying. Open Source FTW!!!1


The “once and for all” SHOW SLAVE STATUS log files & positions explained

True, GTID is upon us whether via MySQL 5.6 or Tungsten Replicator (and wasn’t it in Google Patches since 2009?).

But some of us are still using standard replication with MySQL 5.5, and the “what’s with all these binary log files and positions?” question keeps erupting. The output of SHOW SLAVE STATUS confuses people new to it. It confuses me time and again.

So here’s a semi-visual guide to interpreting SHOW SLAVE STATUS.

About binary logs and relay logs

A master writes binary logs. These are typically and conventionally called mysql-bin.##### or mysqld-bin.##### (replace ##### with digits).

A slave connects to its master, and reads entries from the master’s binary logs. The slave writes those entries into its own relay logs. These are typically and conventionally called mysql-relay.##### or mysqld-relay.##### (replace ##### with digits).

There is nothing at all that connects the name or number of a slave’s relay log with the master’s binary log. There is nothing at all that connects the position within the relay log with the position within the master’s binary log. Files are flushed/rotated, have different size configurations, and are re-created. However, the slave does keep track of the current relay-log entry: it knows what the matching entry is in the master’s binary logs. This is an important piece of information.

While the slave fetches entries and writes them into the relay log (via the IO_THREAD), it also reads the relay log to replay those entries (via the SQL_THREAD).

And so at each point in time we are interested in the following “coordinates”:

  • What are we fetching from the master? Which file are we fetching and from which position?
  • Where are we writing this to? (This is implicitly the latest relay log file and its size)
  • What’s the position of the currently executing slave query, in relay-log coordinates? As the slave lags, these coordinates trail behind (are smaller than) the written-to position.
  • What’s the position of the currently executing slave query, in master binary-log coordinates? This information really tells us how far behind the master we are. The annotated sample below maps these coordinates onto actual SHOW SLAVE STATUS fields.
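
The field names below are the actual SHOW SLAVE STATUS fields; the sample values are made up:

mysql> SHOW SLAVE STATUS \G
...
          Master_Log_File: mysql-bin.000123    -- master binary log we are currently fetching from
      Read_Master_Log_Pos: 401020              -- how far into that file we have fetched
           Relay_Log_File: mysql-relay.000456  -- relay log the SQL_THREAD is currently executing
            Relay_Log_Pos: 11024               -- execution position, in relay-log coordinates
    Relay_Master_Log_File: mysql-bin.000121    -- master binary log of the currently executing query
      Exec_Master_Log_Pos: 773590              -- execution position, in master binary-log coordinates
...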

MySQL Community Awards 2014: Call for Nominations!

The 2014 MySQL Community Awards event will take place, as usual, in Santa Clara, in April 2014, during the Percona Live MySQL Conference & Expo (currently scheduled for Thursday, April 3rd, 2014).

The MySQL Community Awards is a community based initiative. The idea is to publicly recognize contributors to the MySQL ecosystem. The entire process of discussing, voting and awarding is controlled by an independent group of community members, typically composed of past winners or their representatives, as well as known contributors.

It is a self-appointed, self-declared, self-making-up-the-rules-as-it-goes committee. It is also very aware of the importance of the community: a no-nonsense, non-political, tradition-adhering, self-criticizing committee.

The Call for Nominations is open. We are seeking the community’s assistance in nominating candidates in the following categories:

MySQL Community Awards: Community Contributor of the year 2014

This is a personal award; a winner would be a person who has made a contribution to the MySQL ecosystem. This could be via development, advocating, blogging, speaking, supporting, etc. All things go.

MySQL Community Awards: Application of the year 2014

An application, project, product etc. which supports the MySQL ecosystem by either contributing code, complementing its behaviour, supporting its use, etc. This could range from a one-man open source project to a large scale social service.

MySQL Community Awards: Corporate Contributor of the year 2014

A company which has made a contribution to the MySQL ecosystem. This might be a corporation which released major open source code; one that advocates for MySQL; one that helps out community members by… anything.


Why delegating code to MySQL Stored Routines is poor engineering practice

I happen to use stored routines with MySQL. In fact, my open source project common_schema heavily utilizes them. DBA-wise, I think they provide a lot of power (alas, the ANSI:SQL 2003 syntax feels more like COBOL than a sane programming language, which is why I use QueryScript instead).

However I wish to discuss the use of stored routines as integral part of your application code, which I discourage.

The common discussion on whether or not to use stored routines typically revolves around data transfer (with stored routines you transfer less data, since it is processed on the server side), security (with stored routines you can obfuscate/hide internal datasets, and provide a limited and expected API) and performance (with MySQL this is not what you would expect: routines are interpreted and their queries re-evaluated, as opposed to other RDBMS you may be used to).

But I wish to discuss the use of stored routines from an engineering standpoint. The first couple of points I raise are cultural/behavioural.

2nd class citizens

Your stored routines are not likely to integrate well with your IDE. While your Java/Scala/PHP/Ruby/whatnot code comfortably lies within your home directory, the stored routines live in their own space: a database container. They’re not as visible to you as your standard code. Your IDE is unaware of their existence and is unlikely to have the necessary plugin/state of mind to be able to view these.

This leads to difficulty in maintaining the code. People typically resort to using some SQL-oriented GUI tool such as MySQL Workbench, Sequel Pro or other, commercial tools. But these tools, while making it easy to edit your routine code, do not integrate (well?) with your source control. I can’t say I’ve used all GUI tools; but how many of them have Git/SVN/Mercurial connectors? How many of them keep local history changes once you edit a routine? I’m happy to get introduced to such a tool.

Even with such integration, you’re split between two IDEs. And if you’re the command line enthusiast, well, you can’t just svn ci -m “fixed my stored procedure bug”. Your code is simply not in your trunk directory.

It can be done. You could maintain the entire routine code from within your source tree, and hats off to all those who do. Most will not. See later on about deployments for more on this.
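
For what it’s worth, the source-tree approach can be as simple as this sketch (schema, file and routine names are illustrative, not a prescribed layout): keep each routine in a versioned .sql file, and deploy it through the mysql client.

-- routines/get_recent_orders.sql: lives in the source tree, under version control.
-- Deployed with, e.g.: mysql my_app_schema < routines/get_recent_orders.sql
DROP PROCEDURE IF EXISTS get_recent_orders;
DELIMITER $$
CREATE PROCEDURE get_recent_orders(IN days_back INT)
BEGIN
  SELECT id, customer_id, created_at
    FROM orders
    WHERE created_at >= NOW() - INTERVAL days_back DAY;
END $$
DELIMITER ;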

Seconds_behind_master vs. Absolute slave lag

I am unable to bring myself to trust the Seconds_behind_master value in SHOW SLAVE STATUS. Even with MySQL 5.5’s CHANGE MASTER TO … MASTER_HEARTBEAT_PERIOD (a good thing, which applies when no traffic flows from master to slave) it’s easy and common to find fluctuations in the Seconds_behind_master value.

And, when sampled by your favourite monitoring tool, this often leads to many false negatives.

At Outbrain we use HAProxy as a proxy to our slaves, on multiple clusters. More about that in a future post. What’s important here is that our decision whether a slave enters or leaves a certain pool (i.e. gets UP or DOWN status in HAProxy) is based on replication lag. Taking slaves out when they are actually replicating well is bad, since it reduces the number of serving instances. Putting slaves in the pool when they are in fact lagging too much is bad, as they serve stale, irrelevant data.

To top it all, even when correct, the Seconds_behind_master value is practically irrelevant on 2nd level slaves. In a Master -> Slave1 -> Slave2 setup, what does it mean that Slave2 has Seconds_behind_master = 0? Nothing much to the application: Slave1 might be lagging an hour behind the master, or may not be replicating at all. Slave2 might have an hour’s data missing even though it says its own replication is fine.

None of the above is news, and yet many fall into this pitfall. The solution is quite old as well, and it is also very simple: run your own heartbeat mechanism, at your favourite time resolution, and measure slave lag against a timestamp you yourself updated on the master.
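
A minimal sketch of such a heartbeat (schema, table and event names are illustrative; this assumes the event scheduler is enabled on the master, and that master and slave clocks are in sync):

-- On the master: a single-row table, refreshed at your chosen resolution
CREATE TABLE meta.heartbeat (
  id INT UNSIGNED NOT NULL PRIMARY KEY,
  ts DATETIME NOT NULL
) ENGINE=InnoDB;

CREATE EVENT meta.heartbeat_update
  ON SCHEDULE EVERY 1 SECOND
  DO REPLACE INTO meta.heartbeat (id, ts) VALUES (1, NOW());

-- On any slave, at any replication depth: absolute lag in seconds,
-- regardless of how many intermediate masters sit in between
SELECT TIMESTAMPDIFF(SECOND, ts, NOW()) AS absolute_slave_lag
  FROM meta.heartbeat
  WHERE id = 1;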

Maatkit/percona-toolkit did this a long time ago with mk-heartbeat/pt-heartbeat. We’re doing it in a very similar manner. The benefit is obvious. Consider the following two graphs; the first shows Seconds_behind_master, the second shows our own Absolute_slave_lag measurement.

Percona Live 2014 schedule released; BoF and Lightning Talks Call for Papers continues

The complete tutorial & session schedule for Percona Live MySQL Conference & Expo 2014 has been released. This schedule offers both a sense of achievement and a sense of regret; for I believe the schedule is very good, and yet some good proposals had to be left out.

This is an inevitable result of a conference that is popular and receives far more proposals than can fit within the time frames. This conference offers 96 session slots and 16 3-hour tutorial slots. We got well over 300 proposals — I’m not even sure how to count them — and they just can’t all fit in. My sincere apologies to all those left out. A proposal of mine was just rejected yesterday from another conference; I can sympathize and empathize with all turned down.

As part of our interest in having a diversity of talks and speakers, we have promoted talks by less frequent speakers and newly presenting companies. We are happy to grow the community!

Although titled “Percona Live”, this conference’s program is managed by a diverse and independent committee. We had good discussions, and some very good thinking and advice was offered. I’m happy to acknowledge and thank the committee members:

  • Cédric Peintre, Dailymotion
  • Giuseppe Maxia, Continuent
  • Ivan Zoratti, SkySQL
  • Jay Janssen, Percona
  • Jeremy Cole, Google
  • Laine Campbell, PalominoDB (now Blackbird, congrats!)
  • Liz van Dijk, Percona
  • Roland Bouman, Pentaho
  • Tim Callaghan, Tokutek
  • Todd Farmer, Oracle
  • myself, Outbrain

Looking at the schedule I’m, as always, eager to attend many more sessions than I can; until I get more replicas of myself, it’s again down to choosing between multiple prominent talks at each time slot.

Thank you to all those who submitted a proposal! (It’s cool, just saying)

Birds of a Feather, Lightning Talks

Call for papers continues! You are encouraged to submit your proposals until the end of January. These proposals are reviewed by the committee, and eventually chosen and scheduled by Giuseppe Maxia. See also:

Bash script: report largest InnoDB files

The following script will report the largest InnoDB tables under the data directory: schema, table & length in bytes. The tables could be non-partitioned, in which case this is simply the size of the corresponding .ibd file, or they can be partitioned, in which case the reported size is the sum of all partition files. It is assumed tables reside in their own tablespace files, i.e. created with innodb_file_per_table=1.

(
    # assumes datadir is explicitly set in /etc/my.cnf, e.g. "datadir=/var/lib/mysql"
    mysql_datadir=$(grep datadir /etc/my.cnf | cut -d "=" -f 2)
    cd $mysql_datadir
    for frm_file in $(find . -name "*.frm")
    do
        tbl_file=${frm_file//.frm/.ibd}
        table_schema=$(echo $frm_file | cut -d "/" -f 2)
        table_name=$(echo $frm_file | cut -d "/" -f 3 | cut -d "." -f 1)
        if [ -f $tbl_file ]
        then
            # non-partitioned table: a single .ibd file
            file_size=$(du -cb $tbl_file 2> /dev/null | tail -n 1)
        else
            # attempt partitioned InnoDB table: glob and sum all partition files
            tbl_file_partitioned=${frm_file//.frm/#*.ibd}
            file_size=$(du -cb $tbl_file_partitioned 2> /dev/null | tail -n 1)
        fi
        # the last line of "du -c" reads "<bytes> total"; keep the number only
        file_size=${file_size//total/}
        # Replace the below with whatever action you want to take,
        # for example, push the values into graphite.
        echo $file_size $table_schema $table_name
    done
) | sort -k 1 -nr | head -n 20

We use this to push table statistics to our graphite service; we keep an eye on table growth (we actually do not limit to the top 20, but monitor them all). File size does not reflect the real table data size (the data can be smaller, due to tablespace fragmentation). It does give the correct information if you’re concerned about disk space. For table data we also monitor SHOW TABLE STATUS / INFORMATION_SCHEMA.TABLES, themselves being inaccurate. Gotta go by something.
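
For reference, the INFORMATION_SCHEMA route mentioned above might look like the following (the reported sizes are estimates, as noted):

-- Top 20 InnoDB tables by estimated data + index size
SELECT table_schema, table_name,
       (data_length + index_length) AS total_bytes
  FROM INFORMATION_SCHEMA.TABLES
  WHERE engine = 'InnoDB'
  ORDER BY total_bytes DESC
  LIMIT 20;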

Why a professional conference must have a committee, and what that committee does

What exactly is it that a conference committee does? This post comes as a response to a comment on A sneak peek at the Percona Live MySQL Conference & Expo 2014, reading:

Why the same committee each year? Community should vote on proposals and committee should just work schedule,etc.

I’ll pick up the glove and shed some light on the work of the committee. While this specific comment related to the Percona Live conference, I trust that my opinions expressed below apply just as well to any (technical?) professional conference; the points below can equally apply to conferences from Oracle MySQL Connect and O’Reilly Velocity to FOSDEM and PyCon.

I can sum up the entire answer with one word: “Discussion”. For a breakdown, please read through.

First, what’s not feasible with community-based voting, and what looks very wrong

So why not open up a voting system and let the community do the rating? I have always disliked the “send an SMS to this number to vote for X” approach. It is so unbalanced and unreliable: if I were to submit a proposal describing how my company invented/develops/uses X to do great things, I could expect my co-workers to vote for me. In fact, my company would possibly ask my co-workers to do so. I stand a better chance if I work at a large company; less so at a small company.

Anonymous votes tend to be touched by politics. I could vote for my company, against a competing product, for my friends, against people I dislike, and none would be the wiser. We can take away anonymity, which means my votes will be public, which means they are visible to all. In that case my ranking will be affected by what the people I rate would think of me, which means my rating would not be based on strictly professional/technical grounds.

But before we drop into this endless pit, let’s consider: will I, as a KMyPyVelocirails community member, really engage in reviewing over 300 submissions? How many members of my community would take so many hours of their time to do so? Let me clarify: this is a part-time job. It requires time, and it requires a mindset. I’m guessing here that you cannot count on everyone rating all talks. Some more prominent talks will be reviewed by more people; others may be barely reviewed, or not reviewed at all.

The idea of a purely community based rating is romantic and beautiful, but not feasible.

And then there’s the discussion. Let’s look at some of the things the committee is engaged in to clarify.

Duties, responsibility and actions of a conference committee

The following discussion cannot be an exhaustive description of a committee’s work, but it can give a good glimpse into its scope. We begin with the commitment the members take upon themselves: to invest their time and will in the committee’s duties. Once you join in, you are expected to work and deliver.