
Building Confidence at Scale: How Camunda Ensures Platform Reliability Through Continuous Testing

· 8 min read
Christopher Kujawa
Principal Software Engineer @ Camunda

As businesses increasingly rely on process automation for their critical operations, the question of reliability becomes paramount. How can you trust that your automation platform will perform consistently under pressure, recover gracefully from failures, and maintain performance over time?

At Camunda, we've been asking ourselves these same questions for years, and today I want to share how our reliability testing practices have evolved to ensure our platform meets the demanding requirements of enterprise-scale deployments. I will also outline our plans to further invest in this crucial area.

From Humble Beginnings to Comprehensive Testing

Our reliability testing journey began in early 2019 with what we then called "benchmarks" – simple load tests to validate basic performance of our Zeebe engine.

Over time, we recognized that running such benchmarks alone wasn't enough. We needed to ensure that Zeebe could handle real-world conditions, including failures and long-term operation. This realization led us to significantly expand our testing approach.

We introduced endurance tests that run for weeks, simulating sustained load to uncover memory leaks and performance degradation. These tests helped us validate that Zeebe could maintain its performance characteristics over extended periods of time. Investing in these endurance tests paid off, as we identified and resolved several critical issues that only manifested under prolonged load. Additionally, they allow us to build up experience of what a healthy system looks like and what to look at when investigating faulty systems. With this, we were able to create Grafana dashboards that we can use directly to monitor our production systems and that we provide to our customers.

We embraced chaos engineering principles, developing a suite of chaos experiments to simulate failures in a controlled manner. We created zbchaos, an open-source fault injection tool tailored for Camunda, allowing us to automate and scale our chaos experiments. Automated chaos experiments now run daily against all supported versions of Camunda, covering a wide range of failure scenarios.

Additionally, we run semi-regular manual "chaos days" where we design and execute new chaos experiments, documenting our findings in our chaos engineering blog.

What started as a straightforward performance validation tool has evolved into a comprehensive framework that combines load testing, chaos engineering, and end-to-end testing. This evolution wasn't just about adding more tests. It reflected our growing understanding that reliability isn't a single metric but a multifaceted quality that emerges from systematic validation across different dimensions: performance under load, behavior during failures, and consistency over time.

We combine all of the above under the umbrella of what we now call "reliability testing." We define reliability testing as a type of software testing and practice that validates system performance and reliability. It can thus be done over time and with injected failure scenarios (chaos injection).

If you are interested in more of the evolution of our reliability testing, I have given several CamundaCon talks and written blog posts over the years that you might find interesting.

Why Reliability Testing Matters

We prepare customers for enterprise-scale operations. For this, we need to be confident that we are building a product that is fault-tolerant, reliable, and performant even under turbulent conditions.

For our customers running mission-critical processes, reliability testing provides several crucial benefits:

  • Proactive Issue Detection: We identify problems before they impact production environments. Memory leaks, performance degradation, and distributed system failures that only manifest under specific conditions are caught early in our testing cycles.
  • Confidence in Long-Term Operation: Our endurance tests validate that Camunda can run fault-free over extended periods, ensuring your automated processes won't degrade over time.
  • Graceful Failure Handling: Through chaos engineering, we verify that the platform handles failures elegantly, maintaining data consistency and recovering automatically when possible.
  • Performance Assurance: Continuous load testing ensures that Camunda meets performance expectations (e.g., number of Process Instances / second), even as new features are added and the codebase evolves.

Our Current Testing Arsenal

Today, our reliability testing encompasses two main pillars: load tests and chaos engineering.

Variations of Load Tests

We run different variants of load tests continuously:

  • Release Endurance Tests: Every supported version undergoes continuous endurance testing with artificial workloads, updated with each patch release
  • Weekly Endurance Tests: Based on our main branch, these tests run for four weeks to detect newly introduced instabilities or performance regressions
  • Daily Stress Tests: Shorter tests that validate the latest changes in our main branch under high load conditions

Our workloads range from artificial load (simple process definitions with minimal logic) to typical, realistic, and complex processes that mimic real-world usage patterns; a minimal sketch of the artificial end of that spectrum follows the examples below.

Examples of such processes are:

(Figure: typical process)

(Figure: complex process)
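
To illustrate what the artificial end of that spectrum looks like, the following sketch uses the Zeebe Java client and its BPMN model builder to define and deploy a trivial one-task process. It is only a sketch: the process ID, job type, and gateway address are assumptions, not our actual benchmark definitions.

```java
import io.camunda.zeebe.client.ZeebeClient;
import io.camunda.zeebe.model.bpmn.Bpmn;
import io.camunda.zeebe.model.bpmn.BpmnModelInstance;

public class ArtificialWorkload {
  public static void main(String[] args) {
    // A deliberately simple "artificial" process: start -> one service task -> end.
    BpmnModelInstance process =
        Bpmn.createExecutableProcess("benchmark-artificial")
            .startEvent()
            .serviceTask("task", t -> t.zeebeJobType("benchmark-job"))
            .endEvent()
            .done();

    try (ZeebeClient client =
        ZeebeClient.newClientBuilder()
            .gatewayAddress("localhost:26500") // adjust to your cluster
            .usePlaintext()
            .build()) {
      // Deploy the definition once; a load generator then creates instances of it.
      client.newDeployResourceCommand()
          .addProcessModel(process, "benchmark-artificial.bpmn")
          .send()
          .join();
    }
  }
}
```

The realistic workloads follow the same mechanics but deploy far richer models (gateways, messages, timers, multi-instance activities) and drive several job types at once.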

Chaos Engineering

Since late 2019, we've embraced chaos engineering principles to build confidence in our system's resilience. Our approach includes:

  • Chaos Days: Regular events where we manually design and execute chaos experiments, documenting findings in our chaos engineering blog
  • Game Days: Regular events where we simulate an incident in our production SaaS environment to validate our incident response processes
  • Automated Chaos Experiments: Daily execution of 16 different chaos scenarios across all supported versions using our zbchaos tool. We drink our own champagne by using Camunda 8 to orchestrate our chaos experiments against Camunda; a minimal sketch of that idea follows this list.
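
To make the champagne point a bit more concrete: conceptually, each automated run is just a process instance whose service tasks drive the fault-injection tooling. The sketch below is hypothetical; the process ID, variables, and the job type behind it are invented for illustration and do not reflect the real zbchaos integration.

```java
import io.camunda.zeebe.client.ZeebeClient;
import java.util.Map;

public class ChaosExperimentStarter {
  public static void main(String[] args) {
    try (ZeebeClient client =
        ZeebeClient.newClientBuilder()
            .gatewayAddress("localhost:26500") // the chaos-orchestration cluster, not the target
            .usePlaintext()
            .build()) {
      // Start one experiment run; a worker behind the experiment's service tasks would
      // invoke the fault-injection tooling against the target cluster (illustrative names only).
      client.newCreateInstanceCommand()
          .bpmnProcessId("chaos-experiment")
          .latestVersion()
          .variables(Map.of(
              "targetNamespace", "camunda-load-test",
              "experiment", "restart-leader"))
          .send()
          .join();
    }
  }
}
```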

Investing in the Future

With the foundation we’ve established through years of focused reliability testing on the Zeebe engine and its distributed architecture, we’re now expanding that maturity across the entire Camunda product. Our goal is to develop an even more robust and trustworthy product overall. To achieve this, we are consolidating the reliability testing efforts that have historically existed across individual components into a centralized team. This unified approach enables us to scale our testing capabilities more efficiently, ensure consistent best practices, and share insights across teams, ultimately strengthening the reliability of every part of the product.

Some of our upcoming initiatives driven by this team include:

  • Holistic Coverage: We're extending our reliability testing to cover all components of the Camunda 8 platform via a central reliability testing framework.
  • Chaos Engineering: We're planning to introduce new chaos experiments that simulate more complex failure modes, including network partitions, data corruption, and cascading failures.
  • Performance Optimization: Beyond maintaining performance, we utilize our testing infrastructure to identify optimization opportunities and validate improvements.
  • Enhanced Observability: Building on our extensive Grafana dashboards, we continually improve our ability to detect and diagnose issues quickly.
  • Reliability Practices: We're formalizing reliability testing practices and guidelines that can be adopted across all engineering teams at Camunda.
  • Enablement: With these resources, we want to enable all of our more than 15 product teams at Camunda to understand, implement, and execute reliability testing principles in their work, allowing them to build more reliable software from the start and scaling our efforts.

Building Trust Through Transparency

Our commitment to reliability testing isn't just about internal quality assurance – it's about building trust with our customers and the broader community. That's why we:

  • Publish our testing methodologies and results openly
  • Share our learnings through blog posts and conference talks
  • Provide tools like zbchaos as open source for the community

Conclusion

Reliability testing at Camunda has evolved from simple benchmarks to a comprehensive practice that combines load testing, chaos engineering, and end-to-end validation. This evolution reflects our understanding that true reliability emerges from systematic testing across multiple dimensions.

For our customers, this means confidence that Camunda will perform reliably under their most demanding workloads. For engineers interested in joining our team, it represents an opportunity to work with cutting-edge testing practices at scale.

As we continue to invest in reliability testing, we remain committed to transparency and sharing our learnings with the community. After all, the reliability of process automation platforms isn't just a technical challenge – it's fundamental to the digital transformation of businesses worldwide.


Interested in learning more about our reliability testing practices? Check out our detailed documentation, explore our chaos engineering experiments, or reach out to discuss how Camunda's reliability testing ensures your critical processes run smoothly.

Stress testing Camunda

· 12 min read
Christopher Kujawa
Principal Software Engineer @ Camunda

In today's chaos experiment, we focused on stress-testing the Camunda 8 orchestration cluster under high-load conditions. We simulated a large number of concurrent process instances to evaluate the performance of processing and system reliability.

Due to our recent work in supporting load tests for different versions, we were able to compare how different Camunda versions handle stress.

TL;DR: Overall, we saw that all versions of the Camunda 8 orchestration cluster (with a focus on processing) are robust and can handle high loads effectively and reliably. In terms of throughput and latency, with similar resource allocation among the brokers, 8.7.x outperforms other versions. If we consider our streamlined architecture (which now contains more components in a single application) and align the resources for 8.8.x, it can achieve throughput levels similar to 8.7.x while maintaining significantly lower latency (by a factor of 2). An overview of the results can be found in the Results section below.

info

[Update: 28.11.2025]

After the initial analysis, we conducted further experiments with 8.8 to understand why the measured processing performance was lower compared to 8.7.x. The blog post (including TL;DR) has been updated with the new findings in the section Further Experiments below.

Testing retention of historical PIs in Camunda 8.8

· 4 min read
Rodrigo Lopes
Associate Software Engineer @ Zeebe

Summary:

With Camunda 8.8, a new unified Camunda Exporter is introduced that directly populates data records consumable by read APIs on the secondary storage. This significantly reduces the time until eventually consistent data becomes available on Get and Search APIs. It also removes unnecessary duplication across multiple indices due to the previous architecture.

This architectural change prompted us to re-run the retention tests to compare PI retention in historical indexes under the same conditions as Camunda 8.7.

The historical data refers to exported data from configured exporters, such as records of completed process instances, tasks, and events that are no longer part of the active (runtime) state but are retained for analysis, auditing, or reporting.

The goal of this experiment is to compare the number of PIs that we can retain in historical data between Camunda 8.7 and 8.8 running on the same hardware.

Resilience of dynamic scaling

· 3 min read
Deepthi Akkoorath
Senior Software Engineer @ Zeebe

With version 8.8, we introduced the ability to add new partitions to an existing Camunda cluster. This experiment aimed to evaluate the resilience of the scaling process under disruptive conditions.

Summary:

  • Several bugs were identified during testing.
  • After addressing these issues, scaling succeeded even when multiple nodes were restarted during the operation.

REST API: From ForkJoin to a Dedicated Thread Pool

· 7 min read
Berkay Can
Software Engineer @ Zeebe

During the latest REST API Performance load tests, we discovered that REST API requests suffered from significantly higher latency under CPU pressure, even when throughput numbers looked comparable. While adding more CPU cores alleviated the issue, this wasn’t a sustainable solution — it hinted at an inefficiency in how REST handled broker responses. See related section from the previous blog post.

This blog post is about how we diagnosed the issue, what we found, and the fix we introduced in PR #36517 to close the performance gap.
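
The underlying pattern is plain Java: asynchronous broker responses that are completed on the shared ForkJoinPool compete with everything else running there, so under CPU pressure the response handling gets delayed. Moving that work onto a dedicated, bounded executor isolates it. The sketch below only illustrates the general idea; it is not the actual gateway code from the PR.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ResponseHandling {
  // Dedicated pool for handling broker responses instead of ForkJoinPool.commonPool().
  private static final ExecutorService REST_RESPONSE_EXECUTOR =
      Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());

  static CompletableFuture<String> handleBrokerResponse(CompletableFuture<byte[]> brokerResponse) {
    // thenApplyAsync without an executor would run on the shared common pool;
    // passing our own executor keeps REST response mapping off that contended pool.
    return brokerResponse.thenApplyAsync(ResponseHandling::mapToRestBody, REST_RESPONSE_EXECUTOR);
  }

  private static String mapToRestBody(byte[] raw) {
    return new String(raw); // placeholder for the real response mapping
  }
}
```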

Resiliency against ELS unavailability

· 11 min read
Christopher Kujawa
Principal Software Engineer @ Camunda

Due to recent initiatives and architecture changes, we have coupled ourselves even more tightly to the secondary storage (often Elasticsearch, but it can also be OpenSearch or, in the future, an RDBMS).

We now have one single application that runs the Webapps, Gateway, Broker, Exporters, etc., together, including the new Camunda Exporter, which exports all necessary data to the secondary storage. On bootstrap, we need to create the expected schema so our components work as intended, allowing the Operate and Tasklist web apps to consume the data and the exporter to export correctly. Furthermore, we have a new query API (REST API) that allows searching for available data in the secondary storage.

We have seen in previous experiments and load tests that an unavailable ELS and improperly configured replicas can cause issues like the exporter not catching up or queries not succeeding. See the related GitHub issue.

In today's chaos day, we want to play around with the replicas setting of the indices, which can be configured in the Camunda Exporter (which is in charge of writing the data to the secondary storage).

TL;DR; Without index replicas set, the Camunda Exporter is directly impacted by ELS node restarts. The query API seems to handle this transparently, but the resulting data changes. Having replicas set causes some performance impact, as the ELS nodes might run into CPU throttling (they have much more to do). ELS slowing down has an impact on processing as well, due to our write throttling mechanics. This means we need to be careful with this setting: while it gives us better availability (the CamundaExporter can continue when ELS nodes restart), it might come at some cost.

Dynamic Scaling: probing linear scalability

· 7 min read
Carlo Sana
Senior Software Engineer @ Zeebe

Hypothesis

The objective of this chaos day is to estimate the scalability of Zeebe when brokers and partitions are scaled together: we expect to see the system scale linearly with the number of brokers/partitions in terms of throughput and backpressure, while maintaining predictable latency.

General Experiment setup

To test this, we ran a benchmark using the latest alpha version of Camunda, 8.8.0-alpha6, with the old ElasticsearchExporter disabled and the new CamundaExporter enabled. We also made sure Raft leadership was balanced before starting the test, meaning each broker is the leader of exactly one partition, and we turned on partition scaling by setting the following environment variable:

  • ZEEBE_BROKER_EXPERIMENTAL_FEATURES_ENABLEPARTITIONSCALING=true

Each broker also has an SSD-class volume with 32GB of disk space, limiting them to a few thousand IOPS. The processing load was 150 processes per second, with a large payload of 32KiB each. Each process instance has a single service task:

(Figure: one-task process)

The processing load is generated by our own benchmarking application.
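
As a rough illustration of what such a load generator does (this is not our actual benchmarking application), the sketch below creates process instances at a fixed rate with an inflated payload; the process ID, payload shape, and address are assumptions.

```java
import io.camunda.zeebe.client.ZeebeClient;
import java.util.Map;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class StarterSketch {
  public static void main(String[] args) {
    // ~32 KiB of payload per instance, mirroring the "large payload" in the experiment.
    String largePayload = "x".repeat(32 * 1024);

    ZeebeClient client =
        ZeebeClient.newClientBuilder()
            .gatewayAddress("localhost:26500")
            .usePlaintext()
            .build();

    // 150 process instances per second => one instance roughly every 6.7 ms.
    ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
    scheduler.scheduleAtFixedRate(
        () -> client.newCreateInstanceCommand()
            .bpmnProcessId("one-task") // assumed ID of the single-service-task process
            .latestVersion()
            .variables(Map.of("payload", largePayload))
            .send(), // fire-and-forget; a real load generator also tracks errors and latency
        0, 1_000_000 / 150, TimeUnit.MICROSECONDS);
  }
}
```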

Initial cluster configuration

To test this hypothesis, we will start with a standard configuration of the Camunda orchestration cluster:

  • 3 nodes
  • 3 partitions
  • CPU limit: 2
  • Memory limit: 2 GB

We will increase the load through a load generator in fixed increments until we start to see the nodes showing constant non-zero backpressure, which is a sign that the system has hit its throughput limits.

Target cluster configuration

Once that throughput limit is reached, we will scale brokers & partitions while the cluster is under load to the new target values:

  • 6 nodes
  • 6 partitions
  • CPU limit: 2
  • Memory limit: 2 GB

Experiment

We expect that during the scaling operation the backpressure/latencies might worsen, but only temporarily: once the scaling operation has completed, the additional load it generates is no longer present.

Then, we will execute the same procedure as above, until we hit 2x the critical throughput reached before.

Expectation

If the system scales linearly, we expect to see similar levels of performance metrics for similar values of the ratio of PIs (created/completed) per second to the number of partitions.

Steady state

The system is started with a throughput of 150 Process instances created per second. As this is a standard benchmark configuration, nothing unexpected happens:

  • The number of completed process instances matches the number created
  • The expected number of jobs is completed per unit of time

At this point, we have the following topology:

(Figure: initial topology)

First benchmark: 3 brokers and 3 partitions

Let's start increasing the load incrementally, by adding 30 Process instances/s for every step.

| Time  | Brokers | Partitions | Throughput           | CPU Usage          | Throttling (CPU) | Backpressure |
|-------|---------|------------|----------------------|--------------------|------------------|--------------|
| 09:30 | 3       | 3          | 150 PI/s, 150 jobs/s | 1.28 / 1.44 / 1.02 | 12% / 7% / 1%    | 0            |
| 09:49 | 3       | 3          | 180 PI/s, 180 jobs/s | 1.34 / 1.54 / 1.12 | 20% / 17% / 2%   | 0            |
| 10:00 | 3       | 3          | 210 PI/s, 210 jobs/s | 1.79 / 1.62 / 1.33 | 28% / 42% / 4%   | 0            |
| 10:12 | 3       | 3          | 240 PI/s, 240 jobs/s | 1.77 / 1.95 / 1.62 | 45% / 90% / 26%  | 0 / 0.5%     |

At 240 Process Instances spawned per second, the system starts to hit the limits:

(Figures: CPU usage @ 240 PI/s; CPU throttling @ 240 PI/s)

And the backpressure is not zero anymore:

(Figure: Backpressure @ 240 PI/s)

  • The CPU throttling reaches almost 90% on one node (this is probably caused by only one node being selected as gateway as previously noted)
  • Backpressure is now constantly above zero; even if it's just 0.5%, it's a sign that we are reaching the throughput limits.

Second part of the benchmark: scaling to 6 brokers and 6 partitions

With 240 process instances per second being spawned, we send the commands to scale the cluster.

We first scale the Zeebe StatefulSet to 6 brokers. As soon as the new brokers are running, even before they are healthy, we can send the command to include them in the cluster and to increase the number of partitions to 6.

This can be done following the guide in the official docs.
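
Mechanically, the operation boils down to scaling the Kubernetes StatefulSet and then calling the cluster management endpoint on the management port. The sketch below only approximates that flow; the exact endpoint path and request body should be taken from the official dynamic scaling guide, not from here.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ScaleClusterSketch {
  public static void main(String[] args) throws Exception {
    // Prerequisite (outside this sketch): scale the broker StatefulSet to 6 replicas,
    // e.g. `kubectl scale statefulset camunda-zeebe --replicas=6`.

    // Assumed shape of the cluster management call on the management port (9600);
    // verify the path and payload against the official documentation.
    String desiredTopology = """
        {"brokers": {"count": 6}, "partitions": {"count": 6}}
        """;

    HttpRequest request = HttpRequest.newBuilder()
        .uri(URI.create("http://camunda-zeebe-gateway:9600/actuator/cluster"))
        .header("Content-Type", "application/json")
        .method("PATCH", HttpRequest.BodyPublishers.ofString(desiredTopology))
        .build();

    HttpResponse<String> response =
        HttpClient.newHttpClient().send(request, HttpResponse.BodyHandlers.ofString());

    // The call returns a planned change; progress can be observed via GET /actuator/cluster.
    System.out.println(response.statusCode() + ": " + response.body());
  }
}
```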

Once the scaling has completed, as can be seen from the Cluster operation section in the dashboard, we see the newly created partitions participating in the workload.

We now have the following topology:

(Figure: six-partitions topology)

As before, let's increase the load incrementally, just as we did with the previous cluster configuration.

| Time  | Brokers | Partitions | Throughput | CPU Usage                     | Throttling (CPU)         | Backpressure              | Notes          |
|-------|---------|------------|------------|-------------------------------|--------------------------|---------------------------|----------------|
| 10:27 | 6       | 6          | 240 PI/s   | 0.92/1.26/0.74/0.94/0.93/0.93 | 2.8/6.0/0.3/2.8/3.4/3.18 | 0                         | After scale up |
| 11:05 | 6       | 6          | 300 PI/s   | 1.17/1.56/1.06/1.23/1.19/1.18 | 9%/29%/0.6%/9%/11%/10%   | 0                         | Stable         |
| 11:10 | 6       | 6          | 360 PI/s   | 1.39/1.76/1.26/1.43/1.37/1.42 | 19%/42%/2%/16%/21%/22%   | 0                         | Stable         |
| 11:10 | 6       | 6          | 420 PI/s   | 1.76/1.89/1.50/1.72/1.50/1.70 | 76%/84%/52%/71%/60%/65%  | 0 (spurts on 1 partition) | Pushing hard   |

However, at 11:32 one of the workers restarted, causing a spike in processing due to jobs being yielded back to the engine, fewer jobs being activated, and thus fewer being completed. This caused a job backlog to build up in the engine. Once the worker was back up, the backlog was drained, leading to a spike in job completion requests: around 820 req/s, as opposed to the expected 420 req/s.

Because of this extra load, the cluster started to consume even more CPU, resulting in heavy CPU throttling from the cloud provider.

(Figures: CPU usage @ 420 PI/s; CPU throttling @ 420 PI/s)
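
The size of such a completion spike is bounded by how the job workers are configured. In the Zeebe Java client, maxJobsActive caps how many jobs a single worker holds at once, and the job timeout controls when unhandled jobs are yielded back to the engine. The sketch below only illustrates these knobs; the job type and values are assumptions, not the configuration of our benchmark workers.

```java
import io.camunda.zeebe.client.ZeebeClient;
import java.time.Duration;

public class WorkerSketch {
  public static void main(String[] args) {
    ZeebeClient client =
        ZeebeClient.newClientBuilder()
            .gatewayAddress("localhost:26500")
            .usePlaintext()
            .build();

    // A worker that never holds more than 30 activated jobs at a time; jobs not completed
    // within the timeout are yielded back to the engine (which is what builds up the
    // backlog while a worker is down).
    client.newWorker()
        .jobType("benchmark-job") // assumed job type of the one-task process
        .handler((jobClient, job) ->
            jobClient.newCompleteCommand(job.getKey()).send())
        .maxJobsActive(30)
        .timeout(Duration.ofSeconds(30))
        .open();
  }
}
```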

On top of this, eventually a broker restarted (most likely as we run on spot VMs). In order to continue with our test, we scaled the load down to 60 PI/s to give the cluster the time to heal.

Once the cluster was healthy again, we raised the throughput back to 480 PI/s to verify the scalability with twice as much throughput as the initial configuration.

The cluster was able to sustain 480 process instances per second with levels of backpressure similar to the initial configuration:

(Figure: Backpressure @ 480 PI/s)

We can see below that CPU usage is high, and there is still some throttling, indicating we might be able to do more with a little bit of vertical scaling, or by scaling out and reducing the number of partitions per broker:

(Figures: CPU usage @ 480 PI/s; CPU throttling)

Conclusion

We were able to verify that the cluster can scale almost linearly with new brokers and partitions, as long as the other components, like the secondary storage, workers, connectors, etc., are able to sustain a similar load.

In particular, making sure that the secondary storage is able to keep up with the throughput turned out to be crucial for keeping the cluster stable and avoiding filling up the Zeebe disks, which would bring the cluster to a halt.

We encountered a similar issue when one worker restarted: initially it created a backlog of unhandled jobs, which turned into a massive increase in requests per second when the worker came back, as it started activating jobs faster than the cluster could complete them.

Finally, with this specific test, it would be interesting to explore the limits of vertical scalability, as we often saw CPU throttling being a major blocker for processing. This would make for an interesting future experiment.

Follow up REST API performance

· 26 min read
Christopher Kujawa
Principal Software Engineer @ Camunda

Investigating REST API performance

This post collates the experiments, findings, and lessons learned during the REST API performance investigation.

There wasn't one explicit root cause identified. As is often the case with such performance issues, it is a combination of several things.

Quintessence: the REST API is more CPU-intensive than gRPC. You can read more about this in the conclusion part. We have discovered ~10 issues we have to follow up on, of which at least 2-3 might have a significant impact on performance. Details can be found in the Discovered issues section.

Performance of REST API

· 8 min read
Christopher Kujawa
Principal Software Engineer @ Camunda

In today's Chaos day, we wanted to experiment with the new REST API (v2) as a replacement for our previously used gRPC API.

By default, our load tests use gRPC, but as we want to make the REST API the default and release it fully with 8.8, we want to make sure to test it accordingly with regard to reliability. A minimal client-side sketch of that switch follows below.
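
From the client side, the switch under test is essentially a builder flag: recent versions of the Zeebe Java client can prefer the REST (v2) API for commands while gRPC remains configured for features such as job streaming. A minimal sketch, assuming the builder methods of a recent client version (addresses are placeholders):

```java
import io.camunda.zeebe.client.ZeebeClient;
import java.net.URI;

public class RestClientSketch {
  public static void main(String[] args) {
    // Prefer the REST (v2) API for commands; gRPC stays configured for job streaming.
    try (ZeebeClient client =
        ZeebeClient.newClientBuilder()
            .grpcAddress(URI.create("http://localhost:26500"))
            .restAddress(URI.create("http://localhost:8080"))
            .usePlaintext()
            .preferRestOverGrpc(true)
            .build()) {
      client.newCreateInstanceCommand()
          .bpmnProcessId("benchmark") // assumed process ID
          .latestVersion()
          .send()
          .join();
    }
  }
}
```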

TL;DR; We observed a severe performance regression when using the REST API, even when job streaming is in use by the job workers (over gRPC). Our client also seems to have higher memory consumption, which caused some instabilities in our tests. With the new API, we lack certain observability, which makes it harder to dive into certain details. We should investigate this further and find potential bottlenecks and improvements.


How does Zeebe behave with NFS

· 15 min read
Christopher Kujawa
Principal Software Engineer @ Camunda

This week, we (Lena, Nicolas, Roman, and I) held a workshop where we looked into how Zeebe behaves with network file storage (NFS).

We ran several experiments with NFS and Zeebe, messing around with connectivity.

TL;DR; We were able to show that NFS can handle certain connectivity issues, just causing Zeebe to process more slowly. If we completely lose the connection to the NFS server, several issues can arise, like IOExceptions on flush (where RAFT goes into inactive mode) or SIGBUS errors on reading (e.g., during replay), causing the JVM to crash.