orchestrator 3.0.3 is released! There’s been a lot going on since 3.0.2:
orchestrator/raft: auto-provisioning nodes via lightweight snapshots
In an orchestrator/raft setup, we have n hosts forming a raft cluster. In a 3-node setup, for example, one node can go down, and still the remaining two will form a consensus, keeping the service operational. What happens when the failed node returns?
With 3.0.3, the failed node can go down for as long as it wants. Once it comes back, it attempts to join the raft cluster. A node keeps its own snapshots and its raft log outside the relational backend DB. If it has recent-enough data, it just needs to catch up with the raft replication log, which it acquires from one of the active nodes.
If its data is very stale, it will request a snapshot from an active node, which it will import, and will just resume from that point.
If its data is gone, that’s not a problem. It gets a snapshot from an active node, imports it, and keeps running from that point.
If it’s a newly provisioned box, that’s not a problem. It gets a snapshot from an active node, … etc.
- SQLite-backed setups can just bootstrap new nodes. No need to dump+load or import any data.
- Side effect: you may actually use `:memory:`, where SQLite does not persist any data to disk. Remember that the raft snapshots and replication log will cover you. The cheat is that the raft replication log itself is managed and persisted by an independent SQLite database.
- MySQL-backed setups will still need to make sure orchestrator has the privileges to deploy itself.
More info in the docs.
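For a concrete picture, here is a minimal three-node configuration sketch with a SQLite backend. The hostnames are made up, and while the keys shown (`RaftEnabled`, `RaftDataDir`, `RaftBind`, `RaftNodes`, `BackendDB`, `SQLite3DataFile`) are orchestrator configuration parameters, treat this as an illustration and refer to the docs for the full, authoritative settings:

```json
{
  "RaftEnabled": true,
  "RaftDataDir": "/var/lib/orchestrator",
  "RaftBind": "node1.my.company.com",
  "RaftNodes": [
    "node1.my.company.com",
    "node2.my.company.com",
    "node3.my.company.com"
  ],
  "BackendDB": "sqlite",
  "SQLite3DataFile": "/var/lib/orchestrator/orchestrator.sqlite3"
}
```

Switching `SQLite3DataFile` to `:memory:` gives the diskless variant described above; the node can then be rebuilt from the raft snapshots and replication log alone.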
This plays very nicely into the hands of kubernetes, which is on orchestrator's roadmap.
Key Value, native Consul support (Zk TODO)
orchestrator now supports Key-Value stores built-in, and Consul in particular.
At this time the purpose of orchestrator KV is to support master discovery. orchestrator will write the identity of the master of each cluster to the KV store. The user will use that information to apply changes to their infrastructure.
For example, the user will rely on Consul KV entries, written by orchestrator, to generate proxy config files via consul-template, such that traffic is directed via the proxy onto the correct master.
orchestrator supports:
- Manually writing identity of cluster’s master to KV store
- e.g. `orchestrator-client -c submit-masters-to-kv-stores -alias mycluster`
- Automatically updating master’s identity upon failover
Key-value pairs are in the form of <cluster-alias> – <master>. For example:
- Key is `main_cluster`
- Value is `my-db-0123.my.company.com:3306`
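To make the consul-template flow concrete, here is a sketch of a proxy config template that consumes that key. The HAProxy snippet and file names are illustrative assumptions; the only orchestrator-provided piece is the `main_cluster` key and its value from the example above:

```
# haproxy.cfg.ctmpl -- an assumed consul-template template file
listen mysql-main-cluster
    mode tcp
    bind *:3306
    # the KV value is the current master, e.g. my-db-0123.my.company.com:3306
    server master {{ key "main_cluster" }}
```

Rendering it with something like `consul-template -template "haproxy.cfg.ctmpl:haproxy.cfg:systemctl reload haproxy"` would regenerate the proxy config and reload the proxy whenever orchestrator updates the key on failover.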
Web UI improvements
Using the web UI, you can now: