Not produce duplicate Keys

· 6 min read
Christopher Kujawa
Chaos Engineer @ Zeebe

Due to some incidents and critical bugs we observed in the last weeks, I wanted to spend some time understanding the issues better and experimenting with how we could detect them. One of the issues we observed was that keys were generated more than once, so they were no longer unique (#8129). I will describe this property in more depth in the next section.

TL;DR; We were able to design an experiment which helps us detect duplicated keys in the log. Further work should be done to automate such experiments and run them against newer versions.
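The idea of the detection can be sketched with a small shell pipeline. This is a hypothetical sketch: the `"<position> <key>"` dump format and the file name `records.txt` are assumptions for illustration; the real experiment inspects the actual Zeebe log.

```shell
# Hypothetical dump of log records as "<position> <key>" pairs
# (format and file name are assumptions, for illustration only).
printf '1 2251799813685249\n2 2251799813685250\n3 2251799813685249\n' > records.txt

# Extract the key column and print only keys that occur more than once.
# Any output here means key uniqueness is violated.
awk '{print $2}' records.txt | sort | uniq -d
```

With the sample data above, the duplicated key `2251799813685249` is printed; an empty output would mean all keys in the dump are unique.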

Throughput on big state

· 4 min read
Christopher Kujawa
Chaos Engineer @ Zeebe

In this chaos day we wanted to prove the hypothesis that the throughput should not change significantly even with a bigger state, see zeebe-chaos#64.

This came up due to observations from the last chaos day. We already had a bigger investigation here: zeebe#7955.

TL;DR; We were not able to prove the hypothesis. Bigger state, more than 100k process instances, seems to have a significant impact on the processing throughput.

Recovery (Fail Over) time

· 5 min read
Christopher Kujawa
Chaos Engineer @ Zeebe

In the last quarter we worked on a new "feature" called "building state on followers". In short, it means that the followers apply the events to build their state, which makes regular snapshot replication unnecessary and allows faster Follower-to-Leader role transitions. In this chaos day I wanted to experiment a bit with this property; we already did some benchmarks here. Today, I want to see how it behaves with larger state (bigger snapshots), since these needed to be copied in previous versions of Zeebe, and the broker had to replay more than with the newest version.

If you want to know more about building state on followers, check out the ZEP.

TL;DR; In our experiment we had almost no downtime: with version 1.2, the new leader was able to pick up new work (accept new commands) very quickly.

Old-Clients

· 3 min read
Christopher Kujawa
Chaos Engineer @ Zeebe

It has been a while since the last post; I'm happy to be back.

In today's chaos day we want to verify the hypothesis from zeebe-chaos#34 that old clients can't disrupt a running cluster.

It might happen that after upgrading your Zeebe to the newest shiny version, you forget to update some of your workers or starters etc. This should normally not be an issue, since Zeebe is backwards compatible client-wise since 1.x. But what happens when older clients are used? Old clients should not have a negative effect on a running cluster.

TL;DR; Older clients (0.26) have no negative impact on a running cluster (1.2), and clients after 1.x are still working with the latest version.

Slow Network

· 6 min read
Christopher Kujawa
Chaos Engineer @ Zeebe

On a previous Chaos Day we played around with ToxiProxy, which allows injecting failures at the network level, for example dropping packets or causing latency.

Last week @Deepthi mentioned to me that we can do similar things with tc, which is a built-in Linux command. Today I wanted to experiment with latency between leader and followers using tc.
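The tc-based delay injection can be sketched as follows. The interface name `eth0` is an assumption about the broker pod, and the commands require NET_ADMIN privileges; the exact invocation in the experiment may differ.

```shell
# Add 100ms of egress delay on the (assumed) interface eth0 — requires NET_ADMIN.
tc qdisc add dev eth0 root netem delay 100ms

# Inspect the active queueing disciplines to verify the netem rule is in place.
tc qdisc show dev eth0

# Remove the delay again to roll the experiment back.
tc qdisc del dev eth0 root
```

Because netem attaches to a network interface rather than a single connection, this delays all egress traffic of the pod, not only Raft replication between leader and followers.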

TL;DR; The experiment failed: adding 100ms of network delay to the Leader broke the processing throughput completely. 💥

Time travel Experiment

· 9 min read
Christopher Kujawa
Chaos Engineer @ Zeebe

Recently we ran a Game day where a lot of messages with a high TTL were stored in the state. This was based on an earlier incident we had seen in production. One suggested approach to resolve that incident was to advance the time such that all messages are removed from the state. This, and the fact that summer and winter time shifts can cause nasty bugs in other systems, made us want to find out how our system handles time shifts. Phil joined me as participant and observer. There is a related issue which covers this topic as well: zeebe-chaos#3.

TL;DR; Zeebe is able to handle time shifts back and forth, without observable issues. Operate seems to dislike it.

Corrupted Snapshot Experiment Investigation

· 8 min read
Christopher Kujawa
Chaos Engineer @ Zeebe

A while ago we wrote an experiment which verifies that followers with a corrupted snapshot are not able to become leader. You can find that specific experiment here. This experiment was executed regularly against Production-M and Production-S Camunda Cloud cluster plans. With the latest changes in the upcoming 1.0 release, we changed some behavior regarding the detection of snapshot corruption on followers.

NEW If a follower is restarted and has a corrupted snapshot, it detects this on bootstrap, refuses to start the related services, and crashes. This means the pod ends up in a crash loop until the issue is fixed manually.

OLD A follower only detects a corrupted snapshot on becoming leader, when opening the database. A restart of the follower does not detect it.

This behavior change caused our automated chaos experiments to fail, since we corrupt the snapshot on followers and restart followers in a later experiment. For this reason we had to disable the execution of the snapshot corruption experiment, see the related issue zeebe-io/zeebe-cluster-testbench#303.

In this chaos day we wanted to investigate whether we can improve the experiment and bring it back. For reference, I also opened an issue to discuss the current corruption detection approach: zeebe#6907.

BPMN meets Chaos Engineering

· 8 min read
Christopher Kujawa
Chaos Engineer @ Zeebe

On the first of April (2021) we ran our Spring Hackday at Camunda. This is an event where the developers at Camunda come together to work on projects they like or on new ideas/approaches they want to try out. This time we (Philipp and I) wanted to orchestrate our Chaos Experiments with BPMN. If you already know how we automated our chaos experiments before, you can skip the next section and jump directly to the Hackday Project section.

In order to understand this blog post, make sure that you have a basic understanding of Zeebe, Camunda Cloud, and Chaos Engineering. Read the following resources to get a better understanding.

Set file immutable

· 7 min read
Christopher Kujawa
Chaos Engineer @ Zeebe

This chaos day was a bit different. Actually, I wanted to experiment again with Camunda Cloud and verify that our high-load chaos experiments now work with the newest cluster plans, see zeebe-cluster-testbench#135. Unfortunately, I found out that our chaos test cluster was broken in a way that prevented us from creating new clusters. Luckily this was fixed at the end of the day, thanks to @immi :)

Because of these circumstances I thought about different things to experiment with, and I remembered that in the last chaos day we worked with patching running deployments in order to add more capabilities. This allowed us to create IP routes and experiment with the Zeebe deployment distribution. During this I read the Linux capabilities list and found out that we can mark files as immutable, which might be interesting for a chaos experiment.

In this chaos day I planned to find out how marking a file immutable affects our brokers, and I formed the hypothesis: if a leader encounters an unrecoverable write error, it will step down and another leader should take over. I put this in our hypothesis backlog (zeebe-chaos#52).

In order to really run this kind of experiment, I first needed to find out whether marking a file immutable causes any problems and, if not, how I can cause write errors that affect the broker. Unfortunately, it turned out that immutable files do not cause issues on already opened file channels, but I found some other bugs/issues, which you can read about below.
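Marking a file immutable can be sketched as below. The segment file path is hypothetical (the real path depends on the deployment), and the commands require root or CAP_LINUX_IMMUTABLE inside the broker container.

```shell
# Hypothetical path to a broker data file — an assumption for illustration.
FILE=/usr/local/zeebe/data/raft-partition/partitions/1/segment.log

# Mark the file immutable; even root cannot write to or delete it afterwards.
chattr +i "$FILE"

# lsattr lists the file attributes; an 'i' flag confirms immutability.
lsattr "$FILE"

# Revert after the experiment so the broker can write to the file again.
chattr -i "$FILE"
```

This explains the observation above: `chattr +i` only blocks *new* opens and metadata changes, so a file channel the broker already holds open can keep writing unaffected.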

In the next chaos days I will search for a way to cause write errors proactively, so we can verify that our system can handle such issues.