Announcing orchestrator-agent

orchestrator-agent is a sidekick, complementary project to orchestrator. It implements a daemon service on one's MySQL hosts which communicates with, and accepts commands from, orchestrator. It was originally built to provide an automated solution for provisioning new or corrupted slaves.

It was built by Outbrain, with Outbrain's specific use case in mind. While we release it as open source, only a small part of its functionality will appeal to the public (this is why it's not strictly part of the orchestrator project, which is a general purpose, wide-audience solution). Nevertheless, it is a simple implementation of a daemon, one that can be easily extended by the community. The project is open for pull-requests!

A quick breakdown of orchestrator-agent is as follows:

  • Executes as a daemon on linux hosts
  • Interacts with the operating system and invokes OS commands (via bash)
  • Does not directly interact with a MySQL server running on that host (does not connect via mysql credentials)
  • Expects a single MySQL service on host
  • Can control the MySQL service (e.g. stop, start)
  • Is familiar with the LVM layer on the host
  • Can take LVM snapshots, mount snapshots, remove snapshots
  • Is familiar with the MySQL data directory, disk usage, file system
  • Can send snapshot data from a mounted snapshot on a running MySQL host
  • Can prepare data directory and receive snapshot data from another host
  • Recognizes local/remote datacenters
  • Controlled by orchestrator, two orchestrator-agents implement an automated and audited solution for seeding a new/corrupted MySQL host based on a running server.

Continue reading » “Announcing orchestrator-agent”

Get per-object grants via common_schema

Did you know common_schema supports a complete breakdown of all accounts on your database server? It can provide you with the GRANT statements required to set up an account, the REVOKE statements to undo the former, and this can be broken down on a per-object & per-object-type basis. Consider the sql_grants view:

Find who has privileges on a table called t:

select * from common_schema.sql_grants where priv_level_name='table' and object_name='t'\G

           GRANTEE: 'power1'@'localhost'
              user: power1
              host: localhost
        priv_level: `mytst`.`t`
   priv_level_name: table
     object_schema: mytst
       object_name: t
current_privileges: SELECT
      IS_GRANTABLE: NO
         sql_grant: GRANT SELECT ON `mytst`.`t` TO 'power1'@'localhost'
        sql_revoke: REVOKE SELECT ON `mytst`.`t` FROM 'power1'@'localhost'
     sql_drop_user: DROP USER 'power1'@'localhost'

or, who has privileges on the test schema:

select * from common_schema.sql_grants where priv_level_name='schema' and object_name='test' \G

           GRANTEE: 'power1'@'localhost'
              user: power1
              host: localhost
        priv_level: `test`.*
   priv_level_name: schema
     object_schema: NULL
       object_name: test
current_privileges: ALTER, ALTER ROUTINE, CREATE, CREATE ROUTINE, CREATE TEMPORARY TABLES, CREATE VIEW, DELETE, DROP, EVENT, EXECUTE, INDEX, INSERT, LOCK TABLES, REFERENCES, SELECT, SHOW VIEW, TRIGGER, UPDATE
      IS_GRANTABLE: NO
         sql_grant: GRANT ALTER, ALTER ROUTINE, CREATE, CREATE ROUTINE, CREATE TEMPORARY TABLES, CREATE VIEW, DELETE, DROP, EVENT, EXECUTE, INDEX, INSERT, LOCK TABLES, REFERENCES, SELECT, SHOW VIEW, TRIGGER, UPDATE ON `test`.* TO 'power1'@'localhost'
        sql_revoke: REVOKE ALTER, ALTER ROUTINE, CREATE, CREATE ROUTINE, CREATE TEMPORARY TABLES, CREATE VIEW, DELETE, DROP, EVENT, EXECUTE, INDEX, INSERT, LOCK TABLES, REFERENCES, SELECT, SHOW VIEW, TRIGGER, UPDATE ON `test`.* FROM 'power1'@'localhost'
     sql_drop_user: DROP USER 'power1'@'localhost'

           GRANTEE: 'test'@'localhost'
              user: test
              host: localhost
        priv_level: `test`.*
   priv_level_name: schema
     object_schema: NULL
       object_name: test
current_privileges: ALTER, ALTER ROUTINE, CREATE, CREATE ROUTINE, CREATE TEMPORARY TABLES, CREATE VIEW, DELETE, DROP, EVENT, EXECUTE, INDEX, INSERT, LOCK TABLES, REFERENCES, SELECT, SHOW VIEW, TRIGGER, UPDATE
      IS_GRANTABLE: NO
         sql_grant: GRANT ALTER, ALTER ROUTINE, CREATE, CREATE ROUTINE, CREATE TEMPORARY TABLES, CREATE VIEW, DELETE, DROP, EVENT, EXECUTE, INDEX, INSERT, LOCK TABLES, REFERENCES, SELECT, SHOW VIEW, TRIGGER, UPDATE ON `test`.* TO 'test'@'localhost'
        sql_revoke: REVOKE ALTER, ALTER ROUTINE, CREATE, CREATE ROUTINE, CREATE TEMPORARY TABLES, CREATE VIEW, DELETE, DROP, EVENT, EXECUTE, INDEX, INSERT, LOCK TABLES, REFERENCES, SELECT, SHOW VIEW, TRIGGER, UPDATE ON `test`.* FROM 'test'@'localhost'
     sql_drop_user: DROP USER 'test'@'localhost'

In the same manner, you can easily SELECT for all grants that are defined per-table, per-schema, … priv_level_name is any one of 'user', 'schema', 'table', 'column', 'routine'.

The above is a view, which aggregates data from all relevant INFORMATION_SCHEMA tables, normalizing and de-normalizing as necessary.
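
For example, something like the following lists what the sample account above may do at each privilege level, using only columns already shown in the output:

select priv_level_name, priv_level, current_privileges from common_schema.sql_grants where user='power1' and host='localhost'\G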

You might also want to look at sql_show_grants, which provides a no-breakdown GRANT for all accounts. It's like the SHOW ALL GRANTS statement which doesn't exist, plus it's SELECTable.
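
A minimal sketch, assuming sql_show_grants exposes per-account user and sql_grants columns in the spirit of the view above:

select sql_grants from common_schema.sql_show_grants where user='power1'\G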

Also, thanks to eval(), the above makes for executable code. Consider: Continue reading » “Get per-object grants via common_schema”

Three wishes for a new year

Another new year by the Jewish calendar. What do I wish for the coming year?

  1. World peace
  2. Good health to all
  3. Multi-core execution for queries

After having evaluated a few columnar databases, and having seen how a single query gets 24 cores busy, I can't look at MySQL the same way again. The fact that a single query only ever consumes a single core doesn't seem right in the year 2014. Shard-query is a cool application-level attempt to solve the above; I would like to see stuff like this implemented inside the server (or inside the storage engine, where possible).

None of my wishes from previous years [2010], [2011], [2012], [2013] came true (and things have mostly gotten worse). I'm still willing to settle for two out of three.

Orchestrator 1.0.5: refactoring masters, multi-master replication

Outbrain’s orchestrator Version 1.0.5 is released.

Quick links: Orchestrator Manual, FAQ, Downloads

Orchestrator now supports refactoring of masters via master-master topologies. It allows promoting slaves to co-masters and detaching instances from a co-master topology, effectively making it possible to replace an active master.

Like this

Drag a master:

[screenshot: dragging a master]

Onto one of its slaves: Continue reading » “Orchestrator 1.0.5: refactoring masters, multi-master replication”

Some MySQL security tips

This is a brief list of security tips for MySQL. It is by no means complete.

  • Follow the sudo example. Don't let all your DBAs and Ops have the password for the root account. Have each and every one of them have their own personal super-duper account, with their own personal and private password. This makes it so easy when someone leaves the company: no need to change passwords, just remove the employee's account.
  • Block root. Either remove it completely or forbid it from logging in. Yes, there's a hack in MySQL to have a valid account blocked from logging in. One way of making this happen is via common_schema's sql_accounts. Here's how to block the root account using common_schema:
mysql> CALL common_schema.eval("SELECT sql_block_account FROM sql_accounts WHERE USER = 'root'");
  • Make lots of small users. Give nagios its own user. Give collectd its own user. Give orchestrator its own user. Give innotop its own user. Give whatever its own user. Yes, it's more users to create, but you get to keep each user as limited in privileges as possible, and you won't find yourself wondering why your heartbeat script has the SUPER, LOCK TABLES and SHUTDOWN privileges (see the sketch after this list).
  • Verify set @@old_passwords=0; before setting a new password. Make sure your configuration does not specify old_passwords = 1. There is no reason to use "old passwords"; in fact, a 5.6 client will refuse to connect with an "old password".
  • Give no access to mysql.*. No one should be tampering directly with the mysql system tables.
  • Run oak-security-audit or, if you have common_schema installed (you mean you don't?), just CALL security_audit(); I can (almost) guarantee you'd be surprised and thankful for the security breakdown. Users without passwords, users sharing the same password, users with unreasonable privileges, and more… You'll see them all.
  • If you have web interfaces to your database or some of its aspects (e.g. Anemometer, Propagator, Orchestrator, monitoring, …), protect them via an LDAP group or similar. Not everyone who has access to your network needs to see your database. Neither does every employee.
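
As a hedged illustration of the "lots of small users" and "old passwords" tips above (the account names, host pattern and schema are hypothetical; grant only what your tools actually need):

-- a monitoring user that can only observe status & replication
CREATE USER 'nagios'@'10.0.0.%' IDENTIFIED BY 'redacted';
GRANT PROCESS, REPLICATION CLIENT ON *.* TO 'nagios'@'10.0.0.%';

-- a heartbeat user confined to a single schema
CREATE USER 'heartbeat'@'10.0.0.%' IDENTIFIED BY 'redacted';
GRANT SELECT, INSERT, UPDATE, DELETE ON `meta`.* TO 'heartbeat'@'10.0.0.%';

-- make sure the new password is not hashed in the old format
SET @@old_passwords = 0;
SET PASSWORD FOR 'nagios'@'10.0.0.%' = PASSWORD('redacted');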

Introducing Orchestrator: manage and visualize your MySQL replication topologies and get home for dinner

I'm happy to announce the availability of Outbrain's Orchestrator: a MySQL replication management & visualization tool.

[screenshot: orchestrator, a simple topology]

  • Orchestrator reads your replication topologies (give it one server – be it master or slave – in each topology, and it will reveal the rest).
  • It keeps a state of this topology.
  • It can continuously poll your servers to get an up to date topology map.
  • It visualizes the topology in a clear and slick D3 tree.
  • It allows you to modify your topology; move slaves around. You can use the command line variation, the JSON API, or you can use the web interface.

Quick links: Orchestrator Manual, FAQ, Downloads

Nothing like nice screenshots

To move slaves around the topology (repoint a slave to a different master) through orchestrator‘s web interface, we use Drag and Drop, Continue reading » “Introducing Orchestrator: manage and visualize your MySQL replication topologies and get home for dinner”

Using deep nested replication topologies

If you’re running more than a few slaves in a replication topology, you might choose to use deeply nested replication: slaves replicating from other slaves at 2, 3 or even 4 levels. There are pros and cons to such topologies, discussed below.

A simple, small deep nested topology is depicted below (it is also a real production topology of ours):

[diagram: a sample deep nested replication topology]

Two slaves, srv-4 and srv-8, act as local masters to yet other slaves. Why would we want this complexity?

Pros

  • Reduce load on the master: too many slaves replicating from a single master means the master becomes loaded with serving the binary logs. Typically, when all slaves are up to date, this isn't a big deal, since they all get served roughly the same entries and caching works great. If not all are in sync, the master needs to look up different log entries and pays with more disk I/O.
  • Reduce network load. We serve from three different data centres. Our master is in one DC, and our slaves are spread through all three. Inter-DC network is naturally slower; it is also more expensive, hence more easily saturated. Reducing cross-DC network traffic is done across all our systems, including MySQL. srv-4, for example, could depict a slave that is a local master in its own DC, serving srv-5, srv-6 and srv-7 in that same DC, hence using cross-DC network for one slave instead of four. A somewhat over-simplified example, but true (see the sketch after this list).
  • Failover. MHA does a good job at synchronizing slaves of the same master by figuring out the missing binary log entries for each slave. It should do well within a single region, but I do not know that it would do the same cross-region (I'm assuming the binlog entry copy should work, but I haven't tried it cross-region). In case of a disaster such as an entire DC going down (we actually had such a case a couple of weeks ago; power went out for the entire DC), we have a designated master in each other DC into which we can fail over, and which has enough slaves (from each remaining DC) to keep serving. That is, we're willing to skip the fancy syncing and just point to a newly promoted master, with the benefit that the entire replication topology under it stays intact.
  • Testing & upgrades. For example, I might want to upgrade to 5.6. Upgrading a slave from 5.5 to 5.6 is a good start; we look at replication and see that nothing gets broken. But how will our production master behave with 5.6? Put some more slaves under your newly upgraded 5.6 server and get a clearer picture. At some stage you might just promote this entire subtree as the new topology.
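
As a hedged sketch of the cross-DC point above: repointing a slave onto a local intermediate master such as srv-4 boils down to a CHANGE MASTER TO statement. The replication credentials and binlog coordinates below are hypothetical placeholders; srv-4 must have log_bin and log_slave_updates enabled, and finding the equivalent position on srv-4 is the part that tooling needs to figure out for you.

STOP SLAVE;
CHANGE MASTER TO
  MASTER_HOST = 'srv-4',                 -- the local, same-DC intermediate master
  MASTER_PORT = 3306,
  MASTER_USER = 'repl',                  -- hypothetical replication account
  MASTER_PASSWORD = 'redacted',
  MASTER_LOG_FILE = 'mysql-bin.000123',  -- equivalent coordinates on srv-4 (hypothetical)
  MASTER_LOG_POS = 4;
START SLAVE;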

Here’s another topology; DC info is not depicted in this image, but you can guess what designated masters we have: Continue reading » “Using deep nested replication topologies”

From Percona Server to MySQL and back to Percona Server: beware of crash safe replication info

We're migrating some of our "vanilla" MySQL 5.5 servers to Percona Server 5.5. One of the major incentives is the crash-safe replication feature, which allows slaves to die (power failure) and resume replication without losing their position in the relay logs.

Whether or not we will migrate all our servers depends on further benchmarking; so far we've noticed some unexpected results, but it is still too early to publish them.

However, the fact that we are using both MySQL and Percona Server has led us into a peculiar situation which I'd like to share. We reseed our servers via LVM snapshots. If we need a new machine, or have a corrupted slave, we capture an image of a running slave and duplicate it, a process which takes the better part of a day. This duplicates not only the data, of course, but also the relay logs, the relay-log.info file and the master.info file, implying the position within the topology.

With crash-safe replication this also means the transactional relay log position. Recap: crash-safe replication writes, per transaction, the relay log status into the ibdata1 file. So the relay log info in ibdata1 is in perfect alignment with your committed transactions. Upon server startup, Percona Server reads the info from ibdata1 and overwrites the relay-log.info file (it completely disregards whatever was in that file prior to startup).
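
As an illustration, one can compare the position the slave actually resumed from (as restored from ibdata1) with whatever the copied files suggested:

SHOW SLAVE STATUS\G
-- fields of interest: Relay_Master_Log_File and Exec_Master_Log_Pos (position within the master's binary logs),
-- Relay_Log_File and Relay_Log_Pos (position within the local relay logs)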

Can you guess what could go wrong here? Here's the scenario we had; the same problem can unfold in different scenarios.

Take a look at the following topology:

[screenshot: a simple replication topology]

(this image is an actual online visualization of a replication topology; for the purposes of this blog post it's a sandbox topology on my laptop. Please stand by for a very cool open source release announcement shortly)

Continue reading » “From Percona Server to MySQL and back to Percona Server: beware of crash safe replication info”

When you cannot replicate from master M to slave S

Working on some replication topology automation, I've compiled some rules that will prevent you from replicating from a MySQL server M to a slave S (a sketch for checking these via SQL appears below):

  • M does not have binary logs (log_bin) enabled
  • M is itself a slave and does not have log_slave_updates enabled
  • M has a higher major version than S (e.g. M is 5.6 and S is 5.5, but do note log_bin_use_v1_row_events)
  • Both servers have the same server_id
  • S has log_bin & log_slave_updates, uses STATEMENT binlog_format, and M uses ROW or MIXED binlog_format
  • S has log_bin & log_slave_updates, uses MIXED binlog_format, and M uses ROW binlog_format
  • S and M are using different replication filters (some rules could work; good luck if you’re that adventurous)

[EDIT: the above is configuration-wise]
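
A minimal, hedged sketch for checking the configuration-related conditions above: run the following on both M and S and compare the results:

select @@server_id, @@version, @@log_bin, @@log_slave_updates, @@binlog_format\G
-- replication filters, if any, show up in SHOW MASTER STATUS (Binlog_Do_DB, Binlog_Ignore_DB)
-- and in SHOW SLAVE STATUS (the various Replicate_* fields)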

Did I miss anything? Please comment below.