From https://petabridge.com/cluster/lesson3 - For instance, what happens to our Akka.NET cluster when we abruptly kill one of our Docker containers? Or if we try rolling out an update while the cluster is still running? Are these the desired behaviors of our cluster, or unplanned behaviors?
Akka.NET Best Practices for Continuous Deployment w/ Akka.Cluster
1. Copyright & Sharing
• These slides are intended for your personal use only and are the copyrighted material of Petabridge, LLC.
• You agree not to sell, redistribute, share, or monetize this content in any way without the express written permission of Petabridge, LLC.
4. Reachability vs. Membership
• Nodes can still be “up” members of the cluster while being “unreachable” inside it.
• Akka.Cluster assumes, by default, that unreachable nodes are only temporarily unavailable.
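Reachability is judged by the cluster's phi-accrual failure detector, which is tunable via HOCON. A minimal sketch, assuming Akka.NET's standard `akka.cluster.failure-detector` settings; the values shown are illustrative, not recommendations:

```hocon
akka {
  cluster {
    failure-detector {
      # Higher threshold = fewer false positives, but slower detection
      threshold = 8.0
      # Tolerated heartbeat delay (e.g. to survive GC pauses)
      acceptable-heartbeat-pause = 3s
      heartbeat-interval = 1s
    }
  }
}
```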
6. Split Brain Resolvers
• Built-in, configurable algorithms that can automatically DOWN unreachable nodes.
• Smart enough not to make your cluster brittle.
• Run concurrently on both the “winning” and “losing” sides of a network partition.
• Guarantee a single, unified cluster at the end of execution.
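In Akka.NET v1.4+, the built-in split brain resolver is enabled by swapping in its downing provider. A minimal HOCON sketch, assuming the keep-majority strategy; timing values are illustrative:

```hocon
akka {
  cluster {
    downing-provider-class = "Akka.Cluster.SBR.SplitBrainResolverProvider, Akka.Cluster"
    split-brain-resolver {
      # keep-majority: the side with the most nodes survives the partition
      active-strategy = keep-majority
      # how long membership must be stable before the resolver acts
      stable-after = 20s
    }
  }
}
```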
18. Why Google Protocol Buffers?
• Serialization doesn’t require reflection (speed)
• Compresses extremely well (size)
• Wire schema is decoupled from how objects are represented inside your application (separation of concerns)
• Naturally version-tolerant and backwards/forwards compatible (stable)
• Platform-independent.
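The version tolerance comes from Protobuf’s numbered-field wire format: readers skip unknown field numbers and fall back to defaults for missing ones. A hypothetical message-evolution sketch (the `OrderPlaced` message and its fields are invented for illustration):

```protobuf
syntax = "proto3";

// v1 of the message
message OrderPlaced {
  string order_id = 1;
  int64 quantity  = 2;
}

// v2 adds a field: v1 readers simply skip field 3 on the wire,
// while v2 readers see the default ("") when decoding old payloads.
message OrderPlacedV2 {
  string order_id = 1;
  int64 quantity  = 2;
  string currency = 3;
}
```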
Keep in mind that:
When the oldest node gets partitioned from the others, it downs itself and the next-oldest node picks up its role. This is possible thanks to the down-if-alone setting.
If `down-if-alone` is set to off, the whole cluster will depend on the availability of this single node. `down-if-alone` is on by default.
There is a risk that if a partition splits the cluster into two unequal parts, e.g. 2 nodes including the oldest versus the 20 remaining ones, the majority of the cluster will be downed.
Since the oldest node is determined from the latest known state of the cluster, there is a small risk that during a partition both sides will consider themselves to have the oldest member. While this is a very rare situation, you may still end up with two independent clusters after the split.
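The behavior described above corresponds to the keep-oldest strategy. A hedged HOCON sketch, assuming the key names used by Akka.NET’s split brain resolver:

```hocon
akka.cluster.split-brain-resolver {
  active-strategy = keep-oldest
  keep-oldest {
    # if the oldest node ends up alone on its side of the partition,
    # it downs itself rather than downing the rest of the cluster
    down-if-alone = on
  }
}
```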