
· 8 min read
Christopher Kujawa

In today's chaos day, we wanted to experiment with the gateway and resiliency of workers.

In recent weeks, we have seen some issues in our benchmarks when gateways were restarted, see zeebe#11975.

We did a similar experiment in the past; today we want to focus on self-managed (benchmarks with our Helm charts). Ideally, we can automate this soon as well.

Today Nicolas joined me on the chaos day 🎉

TL;DR; We were able to show that the workers (clients) can reconnect after a gateway is shut down. Furthermore, we discovered a potential performance issue at lower load, which impacts process execution latency (zeebe#12311).
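
For context, a worker in these benchmarks is essentially a long-running job poller. A minimal sketch with the Zeebe Java client could look like the following; the gateway address and the job type are assumptions, not the actual benchmark code:

```java
import io.camunda.zeebe.client.ZeebeClient;

public final class ReconnectingWorker {
  public static void main(final String[] args) throws InterruptedException {
    // Assumption: a gateway is reachable on localhost:26500 without TLS.
    try (final ZeebeClient client =
        ZeebeClient.newClientBuilder()
            .gatewayAddress("localhost:26500")
            .usePlaintext()
            .build()) {

      // The worker polls the gateway for jobs and completes them. If the
      // gateway is shut down, the client's gRPC channel reconnects once a
      // gateway is reachable again and polling resumes on its own.
      client
          .newWorker()
          .jobType("benchmark-task") // hypothetical job type
          .handler((jobClient, job) ->
              jobClient.newCompleteCommand(job.getKey()).send().join())
          .open();

      // Keep the worker alive while the gateway is restarted.
      Thread.sleep(Long.MAX_VALUE);
    }
  }
}
```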

· 5 min read
Christopher Kujawa

Long time no see. Happy to do my first chaos day this year. In the last week, we have implemented interesting features, which I would like to experiment with. Batch processing was one of them.

TL;DR; Chaos experiment failed. 💥 Batch processing doesn't seem to respect the configured limit, which causes issues with processing and influences the health of the system. We found a bug 💪

· 10 min read
Christopher Kujawa

In the last weeks, we made several changes in our core components: we introduced some new abstractions and changed how we communicate between partitions.

Due to these changes, we thought it would make sense to run some more chaos experiments in that area, since our benchmarks also recently surfaced some interesting edge cases.

Today we experimented with Message Correlation and what happens when a network partition disturbs the correlation process.

TL;DR; The experiment was partially successful (after a retry): we were able to publish messages during a network partition, and they were correlated after the partition healed. We still need to verify whether we can also publish messages before a network partition and create the related instances during the partition.
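
As an illustration of the "publish during the partition" step, publishing a message with the Zeebe Java client could look like this sketch; the message name, correlation key, and TTL are assumptions:

```java
import io.camunda.zeebe.client.ZeebeClient;
import java.time.Duration;

public final class PublishDuringPartition {
  public static void main(final String[] args) {
    // Assumption: a gateway is reachable on localhost:26500 without TLS.
    try (final ZeebeClient client =
        ZeebeClient.newClientBuilder()
            .gatewayAddress("localhost:26500")
            .usePlaintext()
            .build()) {

      // Publish with a TTL long enough to outlive the network partition, so
      // the message can still be correlated after the partition heals.
      client
          .newPublishMessageCommand()
          .messageName("order-placed")   // hypothetical message name
          .correlationKey("order-4711")  // hypothetical correlation key
          .timeToLive(Duration.ofMinutes(15))
          .send()
          .join();
    }
  }
}
```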

· 10 min read
Christopher Kujawa

We recently encountered a severe bug, zeebe#9877, and I was wondering why we hadn't spotted it earlier, since we have chaos experiments for it. I realized two things:

  1. The experiments only cover part of it (BPMN resources only). The production code has changed and a new feature has been added (DMN), but the experiments/tests haven't been adjusted.
  2. More importantly, we disabled the automated execution of the deployment distribution experiment because it was flaky due to a missing standalone gateway in Camunda Cloud SaaS (zeebe-io/zeebe-chaos#61). This is no longer the case; see Standalone Gateway in CCSaaS.

On this chaos day, I want to bring the automation of this chaos experiment back to life. If I still have time, I want to enhance the experiment.

TL;DR; The experiment still worked, and our deployment distribution is still resilient against network partitions. It also works with DMN resources. I can enable the experiment again, and we can close zeebe-io/zeebe-chaos#61. Unfortunately, we were not able to reproduce zeebe#9877, but we did some good preparation work for it.
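
For illustration, deploying a BPMN process together with a DMN decision, as the adjusted experiment does, could look like this sketch with a recent Zeebe Java client; the resource names and gateway address are assumptions:

```java
import io.camunda.zeebe.client.ZeebeClient;

public final class DeployProcessAndDecision {
  public static void main(final String[] args) {
    // Assumption: a gateway is reachable on localhost:26500 without TLS.
    try (final ZeebeClient client =
        ZeebeClient.newClientBuilder()
            .gatewayAddress("localhost:26500")
            .usePlaintext()
            .build()) {

      // Deploy a process and a decision in one command; Zeebe then has to
      // distribute the deployment to all partitions, which is exactly what
      // the experiment disturbs with a network partition.
      client
          .newDeployResourceCommand()
          .addResourceFromClasspath("process.bpmn")  // hypothetical resources
          .addResourceFromClasspath("decision.dmn")
          .send()
          .join();
    }
  }
}
```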

· 4 min read
Christopher Kujawa

We recently introduced the Zeebe Standalone Gateway in CCSaaS. Today I wanted to do a first simple chaos experiment with the gateway, where we just restart one gateway.

Ideally, in the future we could enable some gateway chaos experiments again, which we currently only support for Helm.

TL;DR; Our Camunda Cloud clusters can handle gateway restarts without issues.

· 4 min read
Christopher Kujawa

Today we wanted to experiment with the snapshot interval and verify that a high snapshot frequency will not impact our availability (#21).

TL;DR; The chaos experiment succeeded 💪 We were able to prove our hypothesis.

· 6 min read
Christopher Kujawa

New Year 🎉 New Chaos 🐒

This time I wanted to experiment with "big" variables. Zeebe supports a maxMessageSize of 4 MB, which is quite big. In general, it should be clear that using big variables will cause performance issues, but today I also want to find out whether the system can handle big variables (~1 MB) at all.

TL;DR; Our chaos experiment failed! Zeebe and Camunda Cloud are not able to handle big variables (~1 MB) without issues, per default.
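
To make the setup concrete, the following sketch creates a process instance with a roughly 1 MB string variable via the Zeebe Java client; the process id and gateway address are assumptions:

```java
import io.camunda.zeebe.client.ZeebeClient;
import java.util.Map;

public final class BigVariableExperiment {
  public static void main(final String[] args) {
    // Roughly 1 MB of payload; the broker's maxMessageSize (4 MB by default)
    // caps how large a single record may become.
    final String bigValue = "x".repeat(1024 * 1024);

    // Assumption: a gateway is reachable on localhost:26500 without TLS.
    try (final ZeebeClient client =
        ZeebeClient.newClientBuilder()
            .gatewayAddress("localhost:26500")
            .usePlaintext()
            .build()) {

      client
          .newCreateInstanceCommand()
          .bpmnProcessId("benchmark") // hypothetical process id
          .latestVersion()
          .variables(Map.of("bigVariable", bigValue))
          .send()
          .join();
    }
  }
}
```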

· 3 min read
Christopher Kujawa

In this chaos day, we experimented with the worker count, since we recently saw that deploying more workers might negatively affect performance (throughput). This is related to #7955 and #8244.

We wanted to prove that, even with more workers deployed, the throughput of process instance execution is not negatively impacted.

TL;DR; We were not able to prove our hypothesis. Scaling of workers can have a negative impact on performance. Check out the third chaos experiment.

· 6 min read
Christopher Kujawa

Due to some incidents and critical bugs we observed in the last weeks, I wanted to spend some time understanding the issues better and experimenting with how we could detect them. One of the issues we observed was that keys were generated more than once, so they were no longer unique (#8129). I will describe this property in more depth in the next section.

TL;DR; We were able to design an experiment which helps us detect duplicated keys in the log. Further work should be done to automate such an experiment and run it against newer versions.
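
The core idea of the check is simple; the following hypothetical sketch (not the actual experiment tooling) flags keys that appear more than once in a collected stream of generated keys, for example keys read from a partition's log:

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public final class DuplicateKeyCheck {

  /** Returns every key that occurs more than once in the given list. */
  static Set<Long> findDuplicates(final List<Long> generatedKeys) {
    final Set<Long> seen = new HashSet<>();
    final Set<Long> duplicates = new HashSet<>();
    for (final long key : generatedKeys) {
      // Set#add returns false if the key was already present.
      if (!seen.add(key)) {
        duplicates.add(key);
      }
    }
    return duplicates;
  }

  public static void main(final String[] args) {
    // Hypothetical sample: the second key was handed out twice.
    final List<Long> keys =
        List.of(2251799813685248L, 2251799813685249L, 2251799813685249L);
    System.out.println("Duplicated keys: " + findDuplicates(keys));
  }
}
```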

· 4 min read
Christopher Kujawa

In this chaos day, we wanted to prove the hypothesis that the throughput should not significantly change even if we have a bigger state; see zeebe-chaos#64.

This came up due to observations from the last chaos day. We already had a bigger investigation here: zeebe#7955.

TL;DR; We were not able to prove the hypothesis. A bigger state, with more than 100k process instances, seems to have a big impact on the processing throughput.