Three wishes for a new year

September 23, 2014

Another new year by the Jewish calendar. What do I wish for the coming year?

  1. World peace
  2. Good health to all
  3. Multi-core execution for queries

After having evaluated a few columnar databases, and having seen how a single query gets 24 cores busy, I can't look at MySQL the same way again. The fact that a single query consumes only a single core doesn't seem right in the year 2014. Shard-query is a cool application-level attempt to solve this; I would like to see such functionality implemented inside the server (or inside the storage engine, where possible).
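
The idea behind such application-level parallelism, roughly speaking, is to split a large query into disjoint ranges, run the parts concurrently on separate connections (each keeping its own core busy), and merge the partial results in the application. A conceptual sketch only, not Shard-query's actual implementation; the orders table is hypothetical:

-- each statement runs concurrently, on its own connection (hence its own core):
SELECT SUM(amount) FROM orders WHERE id BETWEEN 1 AND 1000000;
SELECT SUM(amount) FROM orders WHERE id BETWEEN 1000001 AND 2000000;
SELECT SUM(amount) FROM orders WHERE id BETWEEN 2000001 AND 3000000;
-- the application then adds up the three partial sums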

None of my wishes from previous years [2010], [2011], [2012], [2013] came true (and things have mostly gotten worse). I'm still willing to settle for two out of three.

zookeepercli: lightweight, powerful, controlled command line client for ZooKeeper

September 17, 2014

I'm happy to announce the availability of zookeepercli: a lightweight, simple, fast and controlled command line client for ZooKeeper.

zookeepercli allows for:

  • Basic CRUD-like operations: create, set, delete, exists, get, ls (aka children).
  • Extended operations: lsr (recursive ls), creater (recursive create)
  • Well formatted and controlled output: supporting either txt or json format
  • Single, no-dependencies binary file, based on a native Go ZooKeeper library by github.com/samuel/go-zookeeper (LICENSE)

I was dissatisfied with existing command line access to ZooKeeper: uncontrolled and noisy output, as well as a large footprint, were among the reasons. zookeepercli overcomes the above and provides often-required capabilities.

Usage samples:


$ zookeepercli --servers srv-1,srv-2,srv-3 -c create /demo_only "path placeholder"
$ zookeepercli --servers srv-1,srv-2,srv-3 -c create /demo_only/key1 "value1"
$ zookeepercli --servers srv-1,srv-2,srv-3 -c create /demo_only/key2 "value2"
$ zookeepercli --servers srv-1,srv-2,srv-3 -c create /demo_only/key3 "value3"

$ zookeepercli --servers srv-1,srv-2,srv-3 -c ls /demo_only
key3
key2
key1

# Same as above, JSON format output:
$ zookeepercli --servers srv-1,srv-2,srv-3 --format=json -c ls /demo_only
["key3","key2","key1"]

$ zookeepercli --servers srv-1,srv-2,srv-3 -c delete /demo_only/key1
$ zookeepercli --servers srv-1,srv-2,srv-3 -c delete /demo_only/key2
$ zookeepercli --servers srv-1,srv-2,srv-3 -c delete /demo_only/key3
$ zookeepercli --servers srv-1,srv-2,srv-3 -c delete /demo_only

# Create a path recursively (auto-generate parent directories if not exist):
$ zookeepercli --servers=srv-1,srv-2,srv-3 -c creater "/demo_only/child/key1" "val1"
$ zookeepercli --servers=srv-1,srv-2,srv-3 -c creater "/demo_only/child/key2" "val2"

$ zookeepercli --servers=srv-1,srv-2,srv-3 -c get "/demo_only/child/key1"
val1

# This path was auto-generated due to recursive create:
$ zookeepercli --servers=srv-1,srv-2,srv-3 -c get "/demo_only" 
zookeepercli auto-generated

# Recursively ls a path and all its sub-children:
$ zookeepercli --servers=srv-1,srv-2,srv-3 -c lsr "/demo_only" 
child
child/key1
child/key2 
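
# The set and exists commands follow the same pattern
# (a sketch; the output shown here is illustrative):
$ zookeepercli --servers=srv-1,srv-2,srv-3 -c set "/demo_only/child/key1" "new_val1"
$ zookeepercli --servers=srv-1,srv-2,srv-3 -c get "/demo_only/child/key1"
new_val1
$ zookeepercli --servers=srv-1,srv-2,srv-3 -c exists "/demo_only/child/key1"
true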

zookeepercli is released as open source by Outbrain under the Apache 2.0 license.

Orchestrator 1.0.5: refactoring masters, multi-master replication

August 6, 2014

Outbrain's orchestrator Version 1.0.5 is released.

Quick links: Orchestrator Manual, FAQ, Downloads

Orchestrator now supports refactoring of masters in master-master topologies: it allows promoting a slave as co-master and detaching instances from a co-master topology, effectively allowing for the replacement of an active master.

Like this:

Drag a master:

[screenshot: dragging a master in orchestrator's web interface]

Onto one of its slaves:

Continue Reading »

Orchestrator 1.0.4 released

July 29, 2014

Outbrain's orchestrator Version 1.0.4 is released.

Quick links: Orchestrator Manual, FAQ, Downloads

What's new?

Co-masters

orchestrator now does a much better job of visualizing master-master replication:

[screenshot: co-masters visualization]

Continue Reading »

Some MySQL security tips

July 28, 2014

This is a brief list of security tips for MySQL. It is by no means complete.

  • Follow the sudo example. Don't let all your DBAs and ops have the password for the root account. Have each and every one of them use their own personal super-duper account, with their own personal and private password. This makes things so much easier when someone leaves the company: no need to change passwords, just remove the employee's account.
  • Block root. Either remove it completely or forbid it from logging in. Yes, there's a hack in MySQL that blocks a valid account from logging in. One way of making this happen is via common_schema's sql_accounts. Here's how to block the root account using common_schema:
mysql> CALL common_schema.eval("SELECT sql_block_account FROM sql_accounts WHERE USER = 'root'");
  • Make lots of small users. Give nagios its own user. Give collectd its own user. Give orchestrator its own user. Give innotop its own user. Give whatever its own user. Yes, it's more users to create, but you get to keep each user as limited in privileges as possible, and you don't get to wonder why your heartbeat script has SUPER, LOCK and SHUTDOWN privileges. (A sample of such a limited account appears right after this list.)
  • Verify: set @@old_passwords=0; before setting a new password. Make sure your configuration does not specify old_passwords = 1. There is no reason to use "old passwords". In fact, a 5.6 client will refuse to connect with an "old password".
  • Give no access to mysql.*. No one should be tampering directly with the mysql system tables.
  • Run oak-security-audit or, if you have common_schema installed (you mean you don't?), just CALL security_audit(); I can (almost) guarantee you'd be surprised and thankful for the security breakdown. Users without passwords, users sharing the same password, users with unreasonable privileges, and more... You'll see them all.
  • If you have web interfaces to your database or some of its aspects (e.g. Anemometer, Propagator, Orchestrator, monitoring, ...), protect them via an LDAP group or similar. Not everyone who has access to your network needs to see your database. Neither does every employee.
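
As a sketch of the "lots of small users" tip above: a monitoring user typically needs no more than PROCESS and REPLICATION CLIENT. Host, user name and password below are placeholders; adjust the privileges to whatever your tool actually requires:

mysql> CREATE USER 'nagios'@'localhost' IDENTIFIED BY 'choose_a_strong_password';
mysql> GRANT PROCESS, REPLICATION CLIENT ON *.* TO 'nagios'@'localhost';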

Introducing Orchestrator: manage and visualize your MySQL replication topologies and get home for dinner

June 9, 2014

I'm happy to announce the availability of Outbrain's Orchestrator: MySQL replication management & visualization tool.

[screenshot: a simple topology visualized by orchestrator]

  • Orchestrator reads your replication topologies (give it one server - be it master or slave - in each topology, and it will reveal the rest).
  • It keeps a state of this topology.
  • It can continuously poll your servers to get an up to date topology map.
  • It visualizes the topology in a clear and slick D3 tree.
  • It allows you to modify your topology: move slaves around. You can use the command line interface, the JSON API, or the web interface.

Quick links: Orchestrator Manual, FAQ, Downloads

Nothing like nice screenshots

To move slaves around the topology (repoint a slave to a different master) through orchestrator's web interface, we use Drag and Drop.

Continue Reading »

Using deep nested replication topologies

June 2, 2014

If you're running more than a few slaves in a replication topology, you might choose to use deeply nested replication: slaves replicating from other slaves at 2, 3 or even 4 levels. There are pros and cons to such topologies, discussed below.

A simple, small deep nested topology is depicted below (it is also a real production topology of ours):

[image: topology-sample-02, a deep nested replication topology]

Two slaves, srv-4 and srv-8, act as local masters to yet other slaves (a configuration sketch appears after the list below). Why would we want this complexity?

Pros

  • Reduce load on the master: too many slaves replicating from a single master means the master becomes loaded with serving the binary logs. Typically, when all slaves are up to date, this isn't a big deal, since they all get served roughly the same entries and caching works great. If not all are in sync, the master needs to look up different log entries and pays with more disk I/O.
  • Reduce network load. We serve from three different data centres. Our master is in one DC, and our slaves are spread through all three. Inter-DC network is naturally slower; it is also more expensive, hence more easily saturated. Reducing cross-DC traffic is a goal across all our systems, including MySQL. srv-4, for example, could depict a slave that is a local master in its own DC, serving srv-5, srv-6 and srv-7 in that same DC, hence using cross-DC network for one slave instead of four. A somewhat oversimplified example, but true.
  • Failover. MHA does a good job of synchronizing the slaves of a same master by figuring out the missing binary log entries for each slave. It should do well within a single region, but I do not know that it would do the same cross-region (I'm assuming the binlog entries copy should work, but I haven't tried it cross-region). In case of a disaster such as an entire DC going down (we actually had such a case a couple of weeks ago; power went out for the entire DC), we have a designated master into which we can fail over in each other DC, and which has enough slaves (from each remaining DC) to keep serving. That is, we're willing to skip the fancy syncing and just point to a newly promoted master, with the benefit that the entire replication topology under it is intact.
  • Testing & upgrades. For example, I might want to upgrade to 5.6. Upgrading a slave from 5.5 to 5.6 is a good start; we look at replication and see that nothing breaks. But how will our production master behave with 5.6? Put some more slaves under your newly upgraded 5.6 server and get a clearer picture. At some stage you might just promote this entire subtree as the new topology.

Here's another topology; DC info is not depicted in this image, but you can guess which designated masters we have:

Continue Reading »

From Percona Server to MySQL and back to Percona Server: beware of crash safe replication info

May 27, 2014

We're migrating some of our "vanilla" MySQL 5.5 servers to Percona Server 5.5. One of the major incentives is the crash-safe replication feature, allowing slaves to die (power failure) and resume replication without losing position in relay logs.

Whether or not we will migrate all our servers depends on further benchmarking; so far we've noticed unexpected results, but these are still premature to publish.

However, the fact that we are using both MySQL & Percona Server has led us into a peculiar situation which I'd like to share. We reseed our servers via LVM snapshots: if we need a new machine, or have a corrupted slave, we capture an image of a running slave and duplicate it, a process which takes the better part of a day. This duplicates not only the data, of course, but also the relay logs, the relay-log.info file and the master.info file, implying the position within the topology.

With crash safe replication this also means the transactional relay log position. Recap: crash safe replication writes, per transaction, the relay log status into the ibdata1 file, so the relay log info in ibdata1 is in perfect alignment with your committed transactions. Upon server startup, Percona Server reads the info from ibdata1 and overwrites the relay-log.info file (it completely disregards whatever was in that file prior to startup).
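
This overwrite is easy to observe on a freshly seeded server: whatever relay-log.info contained before startup is simply discarded. A sketch (path, file names and positions are illustrative; the 5.5 format of the file is: relay log file, relay log position, master log file, master log position):

$ cat /var/lib/mysql/relay-log.info     # as copied from the LVM snapshot
./mysql-relay-bin.000118
54217
mysql-bin.002011
54051
$ /etc/init.d/mysql start
$ cat /var/lib/mysql/relay-log.info     # rewritten from ibdata1 upon startup
./mysql-relay-bin.000118
48290
mysql-bin.002011
48114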

Can you guess what could go wrong here? Here's the scenario we had; the same problem can unfold in different scenarios.

Take a look at the following topology:

[image: a simple replication topology]

(This image is an actual online visualization of a replication topology; for the purposes of this blog it's a sandbox topology on my laptop. Please stand by for a very cool open source release announcement shortly.)

Continue Reading »

When you cannot replicate from master M to slave S

April 28, 2014

While working on some replication topology automation, I've compiled some rules that will prevent you from replicating from a MySQL server M to a slave S:

  • M does not have binary logs (log_bin) enabled
  • M is itself a slave and does not have log_slave_updates enabled
  • M has a higher major version than S (e.g. M is 5.6 and S is 5.5, but do note log_bin_use_v1_row_events)
  • Both servers have the same server_id
  • S has log_bin & log_slave_updates, uses STATEMENT binlog_format, and M uses ROW or MIXED binlog_format
  • S has log_bin & log_slave_updates, uses MIXED binlog_format, and M uses ROW binlog_format
  • S and M are using different replication filters (some rules could work; good luck if you're that adventurous)

[EDIT: the above is configuration-wise]
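
Most of the above can be checked quickly via system variables on both M and S; a sketch:

mysql> SELECT @@server_id, @@version, @@log_bin, @@log_slave_updates, @@binlog_format;

Replication filters are visible via SHOW MASTER STATUS (the binlog-do-db/binlog-ignore-db settings on M) and SHOW SLAVE STATUS (the Replicate_* columns on S).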

Did I miss anything? Please comment below.

The mystery of MySQL 5.6 excessive buffer pool flushing

April 20, 2014

I'm experimenting with upgrading to MySQL 5.6 and am experiencing an unexplained increase in disk I/O utilization. After discussing this with several people, I'm publishing this in the hope that someone can shed light on it.

We have a few dozen servers in a normal replication topology. On this particular topology we've already determined that STATEMENT based replication is faster than ROW based replication, and so we use SBR. We have two different workloads on our slaves, applied by two different HAProxy groups, in three different data centres. Hardware-wise, servers in the two groups use either Virident SSD cards or normal SAS spindle disks.

Our servers are I/O bound. A common query used by both workloads looks up data that does not necessarily have a hotspot, and is very large in volume. DML is low: we only have a few hundred statements per second executed on the master (and propagated through replication).
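
To quantify the flushing, one can sample the relevant InnoDB status counters at intervals and compare 5.5 and 5.6 servers under the same workload; a sketch (host name and interval are placeholders):

$ mysqladmin --host=my-slave extended-status --relative --sleep=10 \
    | grep -E 'Innodb_buffer_pool_pages_flushed|Innodb_data_written'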

We have upgraded 6 servers across all data centres to 5.6, on both SSD and spindle disks, and are experiencing the following phenomena:

Continue Reading »

 