orchestrator 1.1.18: new features, support for orchestrator-agent

Outbrain’s orchestrator 1.1.18 is released:

  • Support for orchestrator-agent (see announcement): agent pages, support for agent actions, initiation of seeds (provisioning of new/corrupted servers), auditing of seeds.
  • Clusters dashboard
  • Support for long query auditing
  • SSL
  • Proxy authentication (e.g. apache2 serving as reverse-proxy with LDAP)
  • User control
  • Better slave moving rules.

Quick links: Orchestrator Manual, FAQ, Downloads

orchestrator now allows for seeding/provisioning of servers via orchestrator-agent. It communicates with agents executing on the MySQL hosts and coordinates the transfer of data between them. orchestrator now supports invocation and auditing of seeding operations, and protects you from breaking your seeds. orchestrator-agent is a solution to Outbrain’s specific use case, and may not appeal to the greater crowd. Nonetheless it is extensible and is released as open source. Continue reading » “orchestrator 1.1.18: new features, support for orchestrator-agent”

Announcing orchestrator-agent

orchestrator-agent is a sidekick, complementary project to orchestrator. It implements a daemon service on one’s MySQL hosts which communicates with and accepts commands from orchestrator, and was built with the original purpose of providing an automated solution for provisioning new or corrupted slaves.

It was built by Outbrain, with Outbrain’s specific use case in mind. While we release it as open source, only a small part of its functionality will appeal to the public (this is why it’s not strictly part of the orchestrator project, which is a general purpose, wide-audience solution). Nevertheless, it is a simple daemon implementation, one that can be easily extended by the community. The project is open for pull requests!

A quick breakdown of orchestrator-agent is as follows:

  • Executes as a daemon on Linux hosts
  • Interacts with the OS and invokes commands (via bash)
  • Does not directly interact with a MySQL server running on that host (does not connect via mysql credentials)
  • Expects a single MySQL service on the host
  • Can control the MySQL service (e.g. stop, start)
  • Is familiar with the LVM layer on the host
  • Can take LVM snapshots, mount snapshots, remove snapshots
  • Is familiar with the MySQL data directory, disk usage, file system
  • Can send snapshot data from a mounted snapshot on a running MySQL host
  • Can prepare the data directory and receive snapshot data from another host
  • Recognizes local/remote datacenters
  • Is controlled by orchestrator; two orchestrator-agents together implement an automated and audited solution for seeding a new/corrupted MySQL host based on a running server.

Continue reading » “Announcing orchestrator-agent”

Get per-object grants via common_schema

Did you know common_schema supports a complete breakdown of all accounts on your database server? It can provide you with the GRANT statements required to set up an account, the REVOKE statements to undo the former, and this can be broken down on a per-object & per-object-type basis. Consider the sql_grants view:

Find who has privileges on a table called t:

select * from common_schema.sql_grants where priv_level_name='table' and object_name='t'\G

           GRANTEE: 'power1'@'localhost'
              user: power1
              host: localhost
        priv_level: `mytst`.`t`
   priv_level_name: table
     object_schema: mytst
       object_name: t
current_privileges: SELECT
      IS_GRANTABLE: NO
         sql_grant: GRANT SELECT ON `mytst`.`t` TO 'power1'@'localhost'
        sql_revoke: REVOKE SELECT ON `mytst`.`t` FROM 'power1'@'localhost'
     sql_drop_user: DROP USER 'power1'@'localhost'

or, who has privileges on the test schema:

select * from common_schema.sql_grants where priv_level_name='schema' and object_name='test' \G

           GRANTEE: 'power1'@'localhost'
              user: power1
              host: localhost
        priv_level: `test`.*
   priv_level_name: schema
     object_schema: NULL
       object_name: test
current_privileges: ALTER, ALTER ROUTINE, CREATE, CREATE ROUTINE, CREATE TEMPORARY TABLES, CREATE VIEW, DELETE, DROP, EVENT, EXECUTE, INDEX, INSERT, LOCK TABLES, REFERENCES, SELECT, SHOW VIEW, TRIGGER, UPDATE
      IS_GRANTABLE: NO
         sql_grant: GRANT ALTER, ALTER ROUTINE, CREATE, CREATE ROUTINE, CREATE TEMPORARY TABLES, CREATE VIEW, DELETE, DROP, EVENT, EXECUTE, INDEX, INSERT, LOCK TABLES, REFERENCES, SELECT, SHOW VIEW, TRIGGER, UPDATE ON `test`.* TO 'power1'@'localhost'
        sql_revoke: REVOKE ALTER, ALTER ROUTINE, CREATE, CREATE ROUTINE, CREATE TEMPORARY TABLES, CREATE VIEW, DELETE, DROP, EVENT, EXECUTE, INDEX, INSERT, LOCK TABLES, REFERENCES, SELECT, SHOW VIEW, TRIGGER, UPDATE ON `test`.* FROM 'power1'@'localhost'
     sql_drop_user: DROP USER 'power1'@'localhost'

           GRANTEE: 'test'@'localhost'
              user: test
              host: localhost
        priv_level: `test`.*
   priv_level_name: schema
     object_schema: NULL
       object_name: test
current_privileges: ALTER, ALTER ROUTINE, CREATE, CREATE ROUTINE, CREATE TEMPORARY TABLES, CREATE VIEW, DELETE, DROP, EVENT, EXECUTE, INDEX, INSERT, LOCK TABLES, REFERENCES, SELECT, SHOW VIEW, TRIGGER, UPDATE
      IS_GRANTABLE: NO
         sql_grant: GRANT ALTER, ALTER ROUTINE, CREATE, CREATE ROUTINE, CREATE TEMPORARY TABLES, CREATE VIEW, DELETE, DROP, EVENT, EXECUTE, INDEX, INSERT, LOCK TABLES, REFERENCES, SELECT, SHOW VIEW, TRIGGER, UPDATE ON `test`.* TO 'test'@'localhost'
        sql_revoke: REVOKE ALTER, ALTER ROUTINE, CREATE, CREATE ROUTINE, CREATE TEMPORARY TABLES, CREATE VIEW, DELETE, DROP, EVENT, EXECUTE, INDEX, INSERT, LOCK TABLES, REFERENCES, SELECT, SHOW VIEW, TRIGGER, UPDATE ON `test`.* FROM 'test'@'localhost'
     sql_drop_user: DROP USER 'test'@'localhost'

In the same manner, you can easily SELECT all grants that are defined per table, per schema, and so on. priv_level_name is any one of ‘user’, ‘schema’, ‘table’, ‘column’, ‘routine’.

The above is a view, which aggregates data from all relevant INFORMATION_SCHEMA tables, normalizing and de-normalizing as necessary.
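For example, to list all schema-level grants across all accounts (a minimal illustration, using only columns shown in the output above):

select GRANTEE, priv_level, current_privileges
  from common_schema.sql_grants
  where priv_level_name = 'schema';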

You might also want to look at sql_show_grants, which provides a no-breakdown GRANT statement per account. It’s like the SHOW ALL GRANTS statement that doesn’t exist in MySQL, plus it’s SELECTable.
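A minimal example (I’m assuming the view exposes user and sql_grants columns, following the same conventions as sql_grants above):

select sql_grants from common_schema.sql_show_grants where user = 'power1';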

Also, thanks to eval(), the above makes for executable code; consider the sketch below. Continue reading » “Get per-object grants via common_schema”
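A minimal sketch of the idea (my own example, using eval() and the sql_revoke column from the output above), revoking whatever privileges 'power1'@'localhost' holds on table t:

call common_schema.eval("select sql_revoke from common_schema.sql_grants where user = 'power1' and host = 'localhost' and priv_level_name = 'table' and object_name = 't'");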

Three wishes for a new year

Another new year by Jewish calendar. What do I wish for the following year?

  1. World peace
  2. Good health to all
  3. Multi-core execution for queries

After having evaluated a few columnar databases, and having seen how a single query gets 24 cores busy, I can’t look at MySQL the same way again. The fact that a single query consumes only a single core doesn’t seem right in the year 2014. Shard-query is a cool application-level attempt to solve the above (the idea is sketched below); I would like to see stuff like this implemented inside the server (or inside the storage engine where possible).
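To illustrate the idea (a rough sketch of my own, not Shard-query’s actual implementation; the orders table and its partitions are hypothetical): instead of one connection scanning the entire table on a single core, each partition gets its own connection, the per-partition queries run in parallel, and the application combines the partial results.

-- Single connection, single core:
select count(*), sum(amount) from orders;

-- The same work split by partition (MySQL 5.6 explicit partition selection),
-- each statement issued on its own connection, in parallel:
select count(*), sum(amount) from orders partition (p0);
select count(*), sum(amount) from orders partition (p1);
select count(*), sum(amount) from orders partition (p2);
-- The application then sums up the per-partition results.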

None of my wishes in previous years [2010], [2011], [2012], [2013] came true (and things have mostly gotten worse). I’m still willing to settle for two out of three.

zookeepercli: lightweight, powerful, controlled command line client for ZooKeeper

I’m happy to announce the availability of zookeepercli: a lightweight, simple, fast and controlled command line client for ZooKeeper.

zookeepercli allows for:

  • Basic CRUD-like operations: create, set, delete, exists, get, ls (aka children).
  • Extended operations: lsr (ls recursive), creater (create recursively)
  • Well formatted and controlled output: supporting either txt or json format
  • Single, no-dependencies binary file, based on a native Go ZooKeeper library by github.com/samuel/go-zookeeper (LICENSE)

I was dissatisfied with existing command line access to ZooKeeper. Uncontrolled and noisy output as well as a large footprint were among the reasons. zookeepercli overcomes the above and provides often-required powers.

Usage samples:


$ zookeepercli --servers srv-1,srv-2,srv-3 -c create /demo_only "path placeholder"
$ zookeepercli --servers srv-1,srv-2,srv-3 -c create /demo_only/key1 "value1"
$ zookeepercli --servers srv-1,srv-2,srv-3 -c create /demo_only/key2 "value2"
$ zookeepercli --servers srv-1,srv-2,srv-3 -c create /demo_only/key3 "value3"

$ zookeepercli --servers srv-1,srv-2,srv-3 -c ls /demo_only
key3
key2
key1

# Same as above, JSON format output:
$ zookeepercli --servers srv-1,srv-2,srv-3 --format=json -c ls /demo_only
["key3","key2","key1"]

$ zookeepercli --servers srv-1,srv-2,srv-3 -c delete /demo_only/key1
$ zookeepercli --servers srv-1,srv-2,srv-3 -c delete /demo_only/key2
$ zookeepercli --servers srv-1,srv-2,srv-3 -c delete /demo_only/key3
$ zookeepercli --servers srv-1,srv-2,srv-3 -c delete /demo_only

# Create a path recursively (auto-generate parent directories if not exist):
$ zookeepercli --servers=srv-1,srv-2,srv-3 -c creater "/demo_only/child/key1" "val1"
$ zookeepercli --servers=srv-1,srv-2,srv-3 -c creater "/demo_only/child/key2" "val2"

$ zookeepercli --servers=srv-1,srv-2,srv-3 -c get "/demo_only/child/key1"
val1

# This path was auto generated due to recursive create:
$ zookeepercli --servers=srv-1,srv-2,srv-3 -c get "/demo_only" 
zookeepercli auto-generated

# ls recursively a path and all sub children:
$ zookeepercli --servers=srv-1,srv-2,srv-3 -c lsr "/demo_only" 
child
child/key1
child/key2 

zookeepercli is released as open source by Outbrain under the Apache 2.0 license.

Orchestrator 1.0.5: refactoring masters, multi-master replication

Outbrain’s orchestrator Version 1.0.5 is released.

Quick links: Orchestrator Manual, FAQ, Downloads

Orchestrator now supports refactoring of masters via master-master topologies. It now allows promoting slaves as co-masters and detaching instances from a co-master topology, effectively allowing for replacing an active master.

Like this

Drag a master:

[screenshot: dragging a master in the orchestrator web interface]

Onto one of its slaves: Continue reading » “Orchestrator 1.0.5: refactoring masters, multi-master replication”

Some MySQL security tips

This is a brief list of security tips for MySQL. It is by no means complete.

  • Follow the sudo example. Don’t let all your DBAs and Ops have the password for the root account. Have each and every one of them have their own personal super-duper account, with their own personal and private password. This makes it so easy when someone leaves the company: no need to change passwords, just remove the employee’s account.
  • Block root. Either remove it completely or forbid it from logging in. Yes, there’s a hack in MySQL to have a valid account blocked from logging in. One way of making this happen is via common_schema’s sql_accounts. Here’s how to block the root account using common_schema:
mysql> CALL common_schema.eval("SELECT sql_block_account FROM sql_accounts WHERE USER = 'root'");
  • Make lots of small users. Give nagios its own user. Give collectd its own user. Give orchestrator its own user. Give innotop its own user. Give whatever its own user. Yes, it’s more users to create, but you get to have each user as limited in privileges as possible, and you don’t have to wonder why your heartbeat script has SUPER, LOCK and SHUTDOWN privileges (see the sketch after this list).
  • Verify set @@old_passwords=0; before setting a new password. Make sure your configuration does not specify old_passwords = 1. There is no reason to use “old passwords”. In fact, a 5.6 client will refuse to connect with an “old password”.
  • Give no access to mysql.*. No one should be tampering directly with the mysql system tables.
  • Run oak-security-audit or, if you have common_schema installed (you mean you don’t?), just CALL security_audit(); I can (almost) guarantee you’d be surprised and thankful for the security breakdown. Users without passwords, users sharing the same passwords, users with unreasonable privileges, and more… you’ll see them all.
  • If you have web interfaces to your database or some of its aspects (e.g. Anemometer, Propagator, Orchestrator, monitoring, …), protect them via an LDAP group or similar. Not everyone who has access to your network needs to see your database. Neither does every employee.
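As a sketch of the “lots of small users” and password tips above (the user name, password and privilege choices are only examples; grant whatever your monitoring actually needs, and no more):

-- Make sure passwords will not be hashed in the old, weak format:
SET @@old_passwords = 0;

-- A dedicated monitoring user, with only the privileges it needs:
CREATE USER 'nagios'@'localhost' IDENTIFIED BY 'choose-a-strong-password';
GRANT PROCESS, REPLICATION CLIENT ON *.* TO 'nagios'@'localhost';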

Introducing Orchestrator: manage and visualize your MySQL replication topologies and get home for dinner

I’m happy to announce the availability of Outbrain‘s Orchestrator: MySQL replication management & visualization tool.

[screenshot: orchestrator visualizing a simple replication topology]

  • Orchestrator reads your replication topologies (give it one server – be it master or slave – in each topology, and it will reveal the rest).
  • It keeps a state of this topology.
  • It can continuously poll your servers to get an up to date topology map.
  • It visualizes the topology in a clear and slick D3 tree.
  • It allows you to modify your topology: move slaves around. You can use the command line variation, the JSON API, or the web interface.

Quick links: Orchestrator Manual, FAQ, Downloads

Nothing like nice screenshots

To move slaves around the topology (repoint a slave to a different master) through orchestrator‘s web interface, we use Drag and Drop, Continue reading » “Introducing Orchestrator: manage and visualize your MySQL replication topologies and get home for dinner”

Using deep nested replication topologies

If you’re running more than a few slaves in a replication topology, you might choose to use deeply nested replication: slaves replicating from other slaves at 2, 3 or even 4 levels. There are pros and cons to such topologies, discussed below.

A simple, small deep nested topology is depicted below (it is also a real production topology of ours):

[diagram: a sample deep nested replication topology]

Two slaves, srv-4 and srv-8, act as local masters to yet other slaves (a sketch of how such an intermediate master is wired up follows the list below). Why would we want to have this complexity?

Pros

  • Reduce load on master: too many slaves replicating from a single master means the master becomes loaded with serving the binary logs. Typically, when all slaves are up to date, this isn’t a big deal, since they all get served roughly the same entries and caching works great. If not all are in sync, the master needs to look up different log entries and pays with more disk I/O.
  • Reduce network load. We serve from three different data centres. Our master is in one DC, and our slaves are spread through all three. Inter-DC network is naturally slower; it is also more expensive, hence more easily saturated. Reducing cross-DC network traffic is done across all our systems, including MySQL. srv-4, for example, could depict a slave that is a local master in its own DC, serving srv-5, srv-6 and srv-7, all in the same DC, hence only using the cross-DC network for one slave instead of four. A slightly over-simplified example, but true.
  • Failover. MHA does a good job at synchronizing slaves of the same master by figuring out the missing binary log entries for each slave. It should do well within a single region, but I do not know that it would do the same cross-region (I’m assuming the binlog entries copy should work, but I haven’t tried it cross-region). In case of a disaster such as an entire DC going down (we actually had such a case a couple of weeks ago; power went out for the entire DC), we have a designated master in each other DC into which we can fail over, and which has enough slaves (from each remaining DC) to keep serving. That is, we’re willing to skip the fancy syncing and just point to a newly promoted master, with the benefit that the entire replication topology under it stays intact.
  • Testing & upgrades. For example, I might want to upgrade to 5.6. Upgrading a slave from 5.5 to 5.6 is a good start; we look at replication and see that nothing gets broken. But how will our production master behave with 5.6? Put some more slaves under your newly upgraded 5.6 server and get a clearer picture. At some stage you might just promote this entire subtree as the new topology.
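As a sketch of what wiring up such an intermediate master involves (hostnames follow the diagram above; binlog coordinates are placeholders): srv-4 must write a binary log and log the events it replicates from its own master, and its slaves are then repointed at it.

-- On srv-4, the intermediate master, my.cnf must include
-- (both settings require a server restart on 5.5/5.6):
--   log-bin
--   log-slave-updates

-- On srv-5, srv-6, srv-7: repoint replication to srv-4.
-- Finding the matching coordinates on srv-4 is the tricky part;
-- this is exactly the kind of move a tool like orchestrator automates.
STOP SLAVE;
CHANGE MASTER TO
  MASTER_HOST = 'srv-4',
  MASTER_LOG_FILE = 'mysql-bin.000123',  -- placeholder coordinates
  MASTER_LOG_POS = 4;
START SLAVE;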

Here’s another topology; DC info is not depicted in this image, but you can guess what designated masters we have: Continue reading » “Using deep nested replication topologies”