orchestrator 3.0.3: auto-provisioning raft nodes, native Consul support and more

orchestrator 3.0.3 is released! There’s been a lot going on since 3.0.2:

orchestrator/raft: auto-provisioning nodes via lightweight snapshots

In an orchestrator/raft setup, we have n hosts forming a raft cluster. In a 3-node setup, for example, one node can go down, and still the remaining two will form a consensus, keeping the service operational. What happens when the failed node returns?

With 3.0.3 the failed node can go down for as long as it wants. Once it comes back, it attempts to rejoin the raft cluster. A node keeps its own snapshots and its raft log outside the relational backend DB. If it has recent-enough data, it just needs to catch up with the raft replication log, which it acquires from one of the active nodes.

If its data is very stale, it will request a snapshot from an active node, which it will import, and will just resume from that point.

If its data is gone, that’s not a problem. It gets a snapshot from an active node, imports it, and keeps running from that point.

If it’s a newly provisioned box, that’s not a problem. It gets a snapshot from an active node, … etc.
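For reference, a minimal `orchestrator/raft` configuration sketch for a 3-node cluster (host names and paths are placeholders; `RaftDataDir` is where a node persists its snapshots and raft log, per the raft documentation):

```json
{
  "RaftEnabled": true,
  "RaftDataDir": "/var/lib/orchestrator",
  "RaftBind": "<ip.or.fqdn.of.this.host>",
  "DefaultRaftPort": 10008,
  "RaftNodes": ["orc-node-0", "orc-node-1", "orc-node-2"]
}
```

A returning or freshly provisioned node running with this configuration joins the cluster and catches up as described above, with no manual seeding.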

  • SQLite backed setups can just bootstrap new nodes. No need to dump+load or import any data.
    • Side effect: you may actually use `:memory:`, where SQLite does not persist any data to disk. Remember that the raft snapshots and replication log will cover you. The cheat is that the raft replication log itself is managed and persisted by an independent SQLite database. A configuration sketch follows this list.
  • MySQL backed setups will still need to make sure orchestrator has the privileges to deploy itself.
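As a sketch of the `:memory:` setup mentioned above (key names per the configuration docs; `:memory:` is SQLite’s standard in-memory DSN):

```json
{
  "BackendDB": "sqlite",
  "SQLite3DataFile": ":memory:",
  "RaftEnabled": true,
  "RaftDataDir": "/var/lib/orchestrator"
}
```

The relational state lives only in memory; durability comes from the raft snapshots and log under `RaftDataDir`.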

More info in the docs.

This plays very nicely into the hands of kubernetes, which is on orchestrator’s roadmap.

Key Value, native Consul support (Zk TODO)

orchestrator now ships with built-in key-value (KV) store support, Consul in particular.
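Pointing orchestrator at a Consul agent is a configuration matter. A minimal sketch (the address is a placeholder; `KVClusterMasterPrefix` is shown with its default value):

```json
{
  "ConsulAddress": "127.0.0.1:8500",
  "KVClusterMasterPrefix": "mysql/master"
}
```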

At this time the purpose of orchestrator KV is to support master discovery. orchestrator will write the identity of each cluster’s master to the KV store. Users can then rely on that information to apply changes to their infrastructure.

For example, the user will rely on Consul KV entries, written by orchestrator, to generate proxy config files via consul-template, such that traffic is directed via the proxy onto the correct master.
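As an illustration only (the file names, proxy snippet and reload command are hypothetical, and the key assumes the `mysql/master` prefix shown earlier), such a consul-template setup might look like:

```
# mysql-main.ctmpl -- hypothetical HAProxy fragment rendered by consul-template
listen mysql-main
    bind 0.0.0.0:3307
    # orchestrator keeps this KV entry pointed at the current master
    server master {{ key "mysql/master/main_cluster" }}
```

```bash
# re-render the proxy config and reload the proxy whenever the master changes
consul-template -template "mysql-main.ctmpl:/etc/haproxy/mysql-main.cfg:systemctl reload haproxy"
```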

orchestrator supports:

  • Manually writing identity of cluster’s master to KV store
    • e.g. `orchestrator-client -c submit-masters-to-kv-stores -alias mycluster`
  • Automatically updating the master’s identity upon failover

Key-value pairs are in the form of `<cluster-alias>` → `<master>`. For example:

  • Key is `main_cluster`
  • Value is `my-db-0123.my.company.com:3306`
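Assuming the `mysql/master` key prefix from the sketch above, the full Consul key would be `mysql/master/main_cluster`, and the entry can be read back with the standard Consul CLI:

```bash
$ consul kv get mysql/master/main_cluster
my-db-0123.my.company.com:3306
```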

Web UI improvements

Using the web UI, you can now:

  • Promote a new master

    [screenshot: graceful takeover via UI]

    Dragging onto the left part of the master’s box implies promoting a new server. Dragging onto the right side of a master’s box means relocating a server below the master.

  • “reverse” replication (take local master)

    [screenshot: take master via UI]

    Dragging onto the left part of a server’s local master implies taking over the master. Dragging onto the right part of a server’s local master implies relocating a server below that local master.

  • Work in quiet mode: click the `mute` icon on the left sidebar to avoid being prompted when relocating replicas. You’ll still be prompted for risky operations such as master promotion.

Other noteworthy changes

  • Raft advertise addresses: a contribution by Sami Ahlroos allows orchestrator/raft to work over NAT, and on `kubernetes` in particular (a configuration sketch follows this list).
  • Sparser histories: we wish to keep the `orchestrator` backend database lightweight, especially (though not only) in `orchestrator/raft` setups. orchestrator will now keep less history than it used to.
    • Detection/recovery history is kept for 7 days
    • General audit entries are encouraged to go to a log file instead of the `audit` table (see the sketch below).
  • Builds now use go1.9, which will soon become a requirement for developers wishing to build `orchestrator` on their own.
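A combined sketch of the two configuration knobs mentioned in this list (all values are placeholders): `RaftAdvertise` is the externally reachable address other raft nodes should use when the `RaftBind` address sits behind NAT, and `AuditLogFile` routes audit entries to a file rather than to the backend `audit` table:

```json
{
  "RaftBind": "10.0.0.5",
  "RaftAdvertise": "orc-node-0.example.com",
  "AuditLogFile": "/var/log/orchestrator-audit.log"
}
```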

Roadmap

We’re looking to provision orchestrator on kubernetes, and will publish as much of that work as possible.

There are many incoming feature requests from the community, and we’ll try to address them where it makes sense and as time allows. We greatly appreciate all input from the community!

Download

orchestrator is free and open source, released under the Apache 2 license.

Source & binary releases are available from the GitHub repository.

Packages are also available via packagecloud.
