Impressions from MySQL conf 2011, part IV

This post concludes my impressions from some of the talks I’ve been to.

Thursday

I opened this day’s sessions with a smile.

Antony Curtis and Arjen Lentz authored the OQGraph storage engine a while back. I had every intention of trying it out, but never got round to it. Which is why I was happy to find this session at the conference. OQGraph is a specialized engine, and it comes to solve hierarchical or otherwise graph-related queries, such as: “who are the descendants of a given node?”, “find a route from a to b”, etc. MySQL does not support the recursive syntax Oracle does, and provides no out-of-the-box solution for this.

The syntax is just excellent: you just write something like SELECT * FROM my_graph WHERE from_node = 'empusa fasciata' AND to_node = 'homo sapiens' to find a route, or just WHERE from_node = 'Heathrow Central' to find all outgoing links. So this is plain old SQL; no new syntax involved.

I'm cutting corners here: it is also possible (and required) to specify an algorithm. Do you want Dijkstra? BFS? You specify it in the query. The result of a route query is a rowset, where each row is a step in the route, along with its sequence within the route. So you can do your ORDER BY, LIMIT, etc. I find that syntax-wise, OQGraph is very intuitive!
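
If memory serves, the algorithm is chosen in the query itself via a special column on the graph table (named latch, as I recall). A hedged sketch, reusing the illustrative column names from above:

    -- hedged sketch: the algorithm is picked per-query via the 'latch' column
    SELECT *
    FROM my_graph
    WHERE latch = 'dijkstras'               -- or e.g. 'breadth_first' for BFS
      AND from_node = 'empusa fasciata'     -- route origin
      AND to_node = 'homo sapiens'          -- route destination
    ORDER BY seq;                           -- seq orders the steps of the route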

Implementation-wise, there are two versions: 2 (stable) and 3 (testing), and they are very different. In version 2 you must create and populate an OQGraph table with (from, to, weight) values. The entire table is stored in memory, à la the MEMORY engine. In version 3 the OQGraph table is not directly populated. Instead, you use the CONNECTION table property to identify a real table where the data resides, along with the names of the relevant columns. So your data can reside within your normal InnoDB table, and your queries will follow your normal isolation-level rules.
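
By way of illustration, a sketch only; apart from CONNECTION, which was named in the talk, the column layout and options here are my assumption of the intended syntax:

    -- version 2 (as I understand it): a standalone OQGraph table you populate
    -- directly with (from, to, weight) edges; all data lives in memory
    CREATE TABLE my_graph (
      origid BIGINT UNSIGNED NOT NULL,   -- edge start node
      destid BIGINT UNSIGNED NOT NULL,   -- edge end node
      weight DOUBLE                      -- edge weight
    ) ENGINE=OQGRAPH;
    INSERT INTO my_graph (origid, destid, weight) VALUES (1, 2, 1.0), (2, 3, 2.5);

    -- version 3 (a sketch of the design described above): no direct population;
    -- the CONNECTION property points at a real table holding the edge data
    CREATE TABLE my_graph_v3 (
      origid BIGINT UNSIGNED NULL,
      destid BIGINT UNSIGNED NULL,
      weight DOUBLE NULL
    ) ENGINE=OQGRAPH CONNECTION='edges';  -- 'edges' is a normal InnoDB table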

Graph search is implemented via the Boost graph library. This means the graph must be loaded in memory. Tests show that a million or a few million edges are as much as OQGraph can take, so right now it may not fit your requirements if you need larger datasets.

Antony seemed to have enjoyed every minute of writing OQGraph, was very aware of current limitations, and offered great ideas for improvement. The version 3 design is a huge improvement over version 2, and I can’t wait to see how it evolves!

Next came an interesting view into Facebook’s attempt to locate and define data drift within MySQL replication. Apparently, the Facebook guys realized there were some inconsistencies between masters and slaves. They embarked on a mission to find out what exactly was inconsistent, and why.

Daniel Peek described their method of detecting changes. In a manner similar to Maatkit’s mk-table-checksum, they use hash codes over chunks of rows. To be more specific: in order to detect the differences (if any) between master and slaves on a given table, they iterate over the table in chunks, such that the first row of any chunk overlaps with the last row of the previous chunk. This is easily done when there’s a unique key, less so when there isn’t. They copy rows to a utility table a chunk at a time (e.g. 1,000 rows at a time) and compute a checksum. Their script then compares the checksums on master and slaves. If they match, they conclude (with very high probability) that the tested chunk is identical. Otherwise they take action to detect the exact row that changed.
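
A minimal sketch of the idea, in the spirit of mk-table-checksum; the table and column names are mine, not Facebook’s:

    -- run the same statement on master and slave, then compare the results;
    -- NULL handling and chunking without a unique key are glossed over here
    SELECT COUNT(*)                                       AS row_count,
           BIT_XOR(CRC32(CONCAT_WS('#', id, col1, col2))) AS chunk_checksum
    FROM my_table
    WHERE id BETWEEN @chunk_start AND @chunk_end;   -- one chunk of ~1,000 rows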

When the two chunks do not have the same number of rows, there is at the very least a missing or extra row on either master or slave. Otherwise, there is a change in column data. Apparently, nearly all data drift is a change in column data, not missing or extra rows.

To make a long story short, they realized most of their data drift related to TIMESTAMPs. After realizing some timezone settings were incorrect, they were still left with 0.00056% (if my memory serves me right) worth of data drift. How often do they run this test? Once, was the reply. What is the meaning of 0.00056%? We don’t know, was the answer.

And so we are left with unanswered questions: Why does this happen? How can we detect such changes in reasonable time? What should we do once we realize there is a drift? Whom shall we trust? Still, we have at least been shown a way to detect such drifts.

The conference had to end with a bang. For the last three years (hopefully with some sleep), Justin Swanhart has been working on what seems to me like an utterly (and positively) crazy project: Flexviews, a materialized views solution for MySQL.

MySQL’s native views are not materialized: there is no data in them, and when you access a view, you directly query the tables beneath. Justin’s code allows you to create a materialized view (implemented as an actual MySQL table), which can utilize almost any type of SELECT you would want to use, as long as it only selects from tables (including other materialized views).

How does this work? You don’t directly create your materialized view. Instead, you get a set of stored routines with which to create it. Each performs a single step in the creation of the view (e.g. JOIN to this table, return this column, etc.). You don’t have to worry too much about this, since Justin provides a PHP script which translates an SQL query into the required invocation sequence of said routines.
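
From my recollection of the Flexviews API (routine names and signatures may differ between versions), building a simple aggregating view looks roughly like this:

    -- a hedged sketch from recollection, not verified against current Flexviews
    CALL flexviews.create('app', 'sales_by_region', 'INCREMENTAL');
    SET @mv := flexviews.get_id('app', 'sales_by_region');
    CALL flexviews.add_table(@mv, 'app', 'sales', 'sales', NULL);   -- FROM app.sales
    CALL flexviews.add_expr(@mv, 'GROUP', 'sales.region', 'region');        -- GROUP BY
    CALL flexviews.add_expr(@mv, 'SUM', 'sales.amount', 'total_amount');    -- SUM(...)
    CALL flexviews.enable(@mv);   -- creates and populates the backing table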

OK, so the view is created. What then? Flexviews reads the binary logs (which must be in ROW format), and decides whether a logged action manipulates tables used by any materialized view. If so, it logs that action into a special log table.
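
The ROW format requirement is a server-level setting:

    -- prerequisite: the server must write row-based binary logs
    SET GLOBAL binlog_format = 'ROW';   -- or binlog_format=ROW in my.cnf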

What have we got so far? An SQL parser which generates a sequence of stored routines; a sequence of stored routines which creates a table, while storing its structure; a binlog reader, capable of recognizing relevant events and associating them with the relevant materialized views; and, most importantly, some brains which can figure out just how a given event affects the data. Think about it: say our query did some aggregation, with COUNT(*) or SUM(column). Being able to update the aggregated result based on the fact that a few rows have been added/deleted/updated sounds to me like very hard work.
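
To illustrate the idea with my own sketch (this is not Flexviews’ actual log schema): signed deltas taken from a change log are enough to keep COUNT and SUM aggregates current, without recomputing the view:

    -- illustrative only: 'sales_changelog' and its columns are hypothetical.
    -- Each logged row change carries an action; an UPDATE would appear as a
    -- delete plus an insert. Aggregates are adjusted by signed deltas.
    UPDATE sales_by_region mv
    JOIN (
      SELECT region,
             SUM(IF(action = 'insert', 1, -1))           AS cnt_delta,
             SUM(IF(action = 'insert', amount, -amount)) AS sum_delta
      FROM sales_changelog
      GROUP BY region
    ) d ON d.region = mv.region
    SET mv.row_count    = mv.row_count + d.cnt_delta,
        mv.total_amount = mv.total_amount + d.sum_delta;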

Flexviews’ materialized views do not get updated immediately, but rather on demand. It’s all in the logs; it’s just a matter of when to apply them. We got a very nice and simple demonstration of this.
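
Again from recollection, and hedged accordingly, a refresh is a single call:

    -- compute pending deltas from the logs and apply them to the view
    CALL flexviews.refresh(@mv, 'BOTH', NULL);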

I don’t mean to be too superlative, but this was a very impressive session; I, like other people around me, had an occasional “wow” slip out during and at the close of this talk.

Throughout the week

  • Planet MySQL

During the conference, Planet MySQL was swamped with various announcements. Few were technical announcements, like “X.X has been released”; most were purely marketing announcements. This is to be expected; everyone wants to make a living. I think it is quite all right that for 4 days a year, the planet is mostly marketing stuff. I personally mostly ignore the planet during these days.
