
30 posts tagged with "availability"


Old-Clients

3 min read
Christopher Kujawa
Chaos Engineer @ Zeebe

It has been a while since the last post; I'm happy to be back.

In today's chaos day we want to verify the hypothesis from zeebe-chaos#34 that old clients can't disrupt a running cluster.

It might happen that after upgrading your Zeebe cluster to the newest shiny version, you forget to update some of your workers, starters, etc. This should normally not be an issue, since Zeebe has been backwards compatible on the client side since 1.x. But what happens when older clients are used? Old clients should not have a negative effect on a running cluster.

TL;DR; Older clients (0.26) have no negative impact on a running cluster (1.2), and 1.x clients still work with the latest version.
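The setup boils down to pointing an old client binary at the upgraded cluster and checking that processing simply continues. A minimal sketch, assuming a downloaded 0.26 zbctl binary, a port-forwarded gateway, and an already deployed process with id `benchmark` and job type `benchmark-task` (all of these names are assumptions, not the exact setup of the experiment):

```bash
# Sketch (assumptions: an old zbctl 0.26 binary is available as ./zbctl-0.26 and
# a process "benchmark" with a job type "benchmark-task" is already deployed).

# Check that the old client can still talk to the upgraded gateway.
./zbctl-0.26 status --insecure

# Start instances with the old client ...
./zbctl-0.26 create instance benchmark --insecure

# ... and run an old worker, which simply completes jobs by echoing their variables.
./zbctl-0.26 create worker benchmark-task --handler cat --insecure
```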

Slow Network

6 min read
Christopher Kujawa
Chaos Engineer @ Zeebe

On a previous Chaos Day we played around with ToxiProxy, which allows injecting failures on the network level, for example dropping packets or adding latency.

Last week @Deepthi mentioned to me that we can do similar things with tc, which is a built-in Linux command. Today I wanted to experiment with latency between leader and followers using tc.
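For illustration, this is roughly the kind of netem rule such an experiment applies; the pod name, the network interface, and the availability of tc inside the container are assumptions here:

```bash
# Sketch (assumptions: the leader runs in pod zeebe-0, the container has the
# iproute2/tc tooling installed, and eth0 is the relevant interface).

# Add 100ms of delay to all outgoing traffic of the leader.
kubectl exec zeebe-0 -- tc qdisc add dev eth0 root netem delay 100ms

# Roll the change back after the experiment.
kubectl exec zeebe-0 -- tc qdisc del dev eth0 root netem
```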

TL;DR; The experiment failed: by adding 100 ms of network delay to the Leader, we broke the complete processing throughput. 💥

Corrupted Snapshot Experiment Investigation

8 min read
Christopher Kujawa
Chaos Engineer @ Zeebe

A while ago we wrote an experiment which should verify that followers are not able to become leader if they have a corrupted snapshot. You can find that specific experiment here. This experiment was executed regularly against Production-M and Production-S Camunda Cloud cluster plans. With the latest changes in the upcoming 1.0 release, we changed some behavior regarding how snapshot corruption is detected on followers.

NEW If a follower is restarted and has a corrupted snapshot, it detects this on bootstrap, refuses to start the related services, and crashes. This means the pod ends up in a crash loop until this is fixed manually.

OLD The follower only detects the corrupted snapshot when it becomes leader and opens the database. On a restart of a follower, the corruption is not detected.

The behavior change caused our automated chaos experiments to fail, since we corrupt the snapshot on followers and restart the followers in a later experiment. For this reason we had to disable the execution of the snapshot corruption experiment; see the related issue zeebe-io/zeebe-cluster-testbench#303.
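To make the mechanics a bit more concrete, here is a rough sketch of how a follower snapshot can be corrupted for such an experiment; the pod name and the snapshot path are assumptions on my side, not the exact commands of our automation:

```bash
# Sketch (assumptions: the follower for partition 1 runs in pod zeebe-1, the
# snapshot files live under /usr/local/zeebe/data/raft-partition/partitions/1/snapshots/<id>/,
# and appending garbage to any snapshot file breaks its checksum validation).

kubectl exec zeebe-1 -- sh -c \
  'for f in /usr/local/zeebe/data/raft-partition/partitions/1/snapshots/*/*; do
     echo corrupted >> "$f"
   done'
```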

On this chaos day we wanted to investigate whether we can improve the experiment and bring it back. For reference, I also opened an issue to discuss the current corruption detection approach: zeebe#6907.

Fault-tolerant processing of process instances

6 min read
Christopher Kujawa
Chaos Engineer @ Zeebe

Today I wanted to add another chaos experiment to increase our collection of automated chaos experiments. This time we will deploy a process model (with a timer start event), restart a node, and complete the process instance via zbctl.

TL;DR;

I was able to create the Chaos Toolkit experiment. It shows that we are able to restore our state after failover, which means we can trigger timer start events to create process instances even if the process models were deployed before the failover. Plus, we are able to complete these instances.
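A rough manual version of the experiment steps could look like the following sketch; the file name, job type, and pod name are assumptions:

```bash
# Sketch (assumptions: timer-start.bpmn contains a process with a timer start
# event and a service task of type "chaos-task"; zeebe-1 is the node we restart).

# Deploy the model before the failover ...
zbctl deploy timer-start.bpmn --insecure

# ... restart a node and wait until it is back.
kubectl delete pod zeebe-1
kubectl wait --for=condition=Ready pod/zeebe-1 --timeout=300s

# Once the timer fires and creates an instance, complete it by working off its job.
JOB_KEY=$(zbctl activate jobs chaos-task --insecure | jq '.jobs[0].key')
zbctl complete job "$JOB_KEY" --insecure
```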

Deployment Distribution

11 min read
Christopher Kujawa
Chaos Engineer @ Zeebe

On this chaos day we wanted to experiment a bit with deployments and their distribution.

We ran a chaos experiment where we deployed multiple workflows while two leaders were disconnected, and verified whether the deployments are distributed later. The chaos experiment was successful and showed a bit how fault-tolerant deployment distribution is. 💪
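As a rough illustration of the verification idea (the file names and process id are assumptions): deploy several models, then later create instances, which the gateway routes round-robin across partitions, so they only all succeed once every partition has received the deployment.

```bash
# Sketch (assumptions: deployment-*.bpmn files exist locally and each contains
# a process with the bpmnProcessId "deployment-test").

# Deploy several workflows while two leaders are disconnected ...
for model in deployment-*.bpmn; do
  zbctl deploy "$model" --insecure
done

# ... and later verify the distribution by creating instances on all partitions.
for i in $(seq 1 10); do
  zbctl create instance deployment-test --insecure
done
```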

Network partitions

8 min read
Christopher Kujawa
Chaos Engineer @ Zeebe

As you can see, I migrated the old chaos day summaries to GitHub Pages for better readability. I always wanted to play around with GitHub Pages and Jekyll, so this was a good opportunity. I hope you like it. 😄

On the last Chaos Day, we experimented with disconnecting a Leader and one Follower. We expected no big disturbance, since we still have quorum and can process records. Today I want to experiment with bigger network partitions.

  • In the first chaos experiment, I had a cluster of five nodes and split it into two groups; processing continued as expected, since we still had quorum. 💪
  • In the second chaos experiment, I split the cluster again into two groups, but this time we moved one follower from the bigger group to the smaller group after a snapshot was taken and compaction was done. The smaller group needed to catch up with the newer state before new processing could start again, but everything worked fine. A sketch of the disconnect mechanics used for these experiments follows this list.
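The disconnect sketch mentioned above, assuming pods named zeebe-0 to zeebe-4 and iproute2 being available and usable inside the containers (both assumptions):

```bash
# Sketch (assumptions: the brokers run in pods zeebe-0..zeebe-4 and the
# containers are privileged enough to change their routing table).

# Look up the pod IP of a node in the other group ...
TARGET_IP=$(kubectl get pod zeebe-3 -o jsonpath='{.status.podIP}')

# ... and drop all traffic towards it on one member of this group.
kubectl exec zeebe-0 -- ip route add unreachable "$TARGET_IP"

# Heal the partition again after the experiment.
kubectl exec zeebe-0 -- ip route del unreachable "$TARGET_IP"
```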

Disconnect Leader and one Follower

8 min read
Christopher Kujawa
Chaos Engineer @ Zeebe

Happy new year everyone 🎉

This time I wanted to verify the following hypothesis: disconnecting the Leader and one Follower should not disrupt the cluster (#45). In order to do that, we need to extract the Leader and Follower nodes for a partition from the topology. Luckily, in December we got an external contribution which allows us to print zbctl status as JSON. This gives us more possibilities, since we can extract values from it much more easily.
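For illustration, extracting the nodes could look roughly like this; the `-o json` flag and the exact field names of the topology output are assumptions on my side:

```bash
# Sketch (assumptions: the gateway is port-forwarded to localhost:26500, the JSON
# flag is `-o json`, and the topology fields are named brokers/partitions/
# partitionId/role with the role rendered as "LEADER"/"FOLLOWER").

PARTITION=1

# Node id of the leader of the given partition ...
zbctl status --insecure -o json \
  | jq --argjson partition "$PARTITION" \
       '.brokers[] | select(.partitions[] | .partitionId == $partition and .role == "LEADER") | .nodeId'

# ... and the node ids of its followers.
zbctl status --insecure -o json \
  | jq --argjson partition "$PARTITION" \
       '.brokers[] | select(.partitions[] | .partitionId == $partition and .role == "FOLLOWER") | .nodeId'
```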

TL;DR The experiment was successful 👍

Message Correlation after Failover

4 min read
Christopher Kujawa
Chaos Engineer @ Zeebe

Today I wanted to finally implement an experiment which I had postponed for a long time, see #24. The problem was that previously we were not able to determine on which partition a message was published, so we could not assert that it was published on the correct partition. With #4794 this is now possible, which was, by the way, a community contribution. 🎉
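A rough sketch of how the partition can now be derived, assuming the publish response exposes the record key as `key` and that Zeebe keys encode the partition id in the bits above bit 51 (both assumptions on my side):

```bash
# Sketch (assumptions: `zbctl publish message` prints the response as JSON with a
# "key" field, and the partition id sits in the upper bits of the key).

MSG_KEY=$(zbctl publish message "order-placed" --correlationKey "order-123" --insecure \
  | jq '.key')

# Partition on which the message was published.
echo $(( MSG_KEY >> 51 ))
```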

Many Job Timeouts

4 min read
Christopher Kujawa
Chaos Engineer @ Zeebe

In the last game day (on Friday, 06.11.2020) I wanted to test whether we can break a partition if many messages time out at the same time. What I did was send many, many messages with decreasing TTLs, all targeting a specific point in time, such that they would all time out at the same moment. I expected that the processor would then try to time them all out at once and break because the batch is too big. Fortunately this didn't happen; the processor was able to handle it.
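A sketch of that message flood, with the target window, message name, and message count being made-up values:

```bash
# Sketch (assumptions: the cluster is reachable via a port-forwarded gateway and
# a five-minute window is enough to publish all messages).

# All messages should expire at the same wall-clock time, five minutes from now.
TARGET=$(( $(date +%s) + 300 ))

for i in $(seq 1 1000); do
  # Recompute the TTL on every iteration so it keeps shrinking towards TARGET.
  TTL=$(( TARGET - $(date +%s) ))
  zbctl publish message "expire-together" \
    --correlationKey "key-$i" --ttl "${TTL}s" --insecure
done
```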

I wanted to verify the same with job timeouts.