Release Process
This document is intended to grow into a knowledge base around the C8 monorepo release process and to help Camundi maintain the process. It primarily focuses on the release process of Zeebe and the C8 webapps after they were included through the monorepo merger (Camunda 8.6+).
Scope & Goal
The goal of the C8 monorepo release process is to produce artifacts for patch, alpha and minor version releases of Camunda 8 components like Zeebe and (on 8.6+) most C8 webapps for SaaS and Self-Managed usage in a timely fashion. This includes the ZPT (zeebe-process-test) project. Optimize is released separately for 8.6 to 8.8 (at least).
It also involves automated and manual QA activities on release candidate builds to ensure the final artifacts are bug-free, e.g. certain benchmarks for Zeebe and interactive tests for the C8 webapps.
Required inputs:
- version of camunda-cloud/identity to use for any given C8 monorepo release
- whether a release candidate build and manual QA is necessary for patch releases
Produced artifacts include Maven artifacts on Maven Central, Docker images on DockerHub, GitHub releases with release notes and announcements.
There must not be two concurrent releases with the same major and minor version (e.g. no 8.99.1 and 8.99.2 patch releases at the same time).
Caveat: Many places still refer to this process as “Zeebe release process” although with 8.6+ it is the monorepo release process also involving other components like C8 webapps.
Not in scope of this process are SNAPSHOT releases.
Release Types
- Minor (version format is 8.x.0):
  - includes new features, for production use
  - usually released every 6 months
  - supported with new patch releases for a certain time after the minor release
  - code freeze before the release, to allow QA on a release candidate build
  - source branch is `stable/${minor_version}`
- Alpha (version format is 8.x.0-alphaN):
  - includes new features to get early customer feedback, not for production use
  - usually released every month
  - code freeze before the release, to allow QA on a release candidate build
  - source branch is `main`
- Patch (version format is 8.x.y):
  - includes security updates and bug fixes, for production use
  - usually released every month (or on demand, e.g. if a critical CVE needs to be fixed sooner)
  - no code freeze and no release candidate build
  - source branch is `stable/${minor_version}`
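Since the three types differ only in version format, the type can be derived mechanically. A minimal sketch; the `release_type` helper is illustrative and not part of the release tooling:

```shell
# Classify a Camunda version string by the formats listed above.
# Illustrative helper, not part of the actual release tooling.
release_type() {
  case "$1" in
    8.*.0-alpha[0-9]*) echo "alpha" ;;   # 8.x.0-alphaN
    8.*.0)             echo "minor" ;;   # 8.x.0
    8.*.*)             echo "patch" ;;   # 8.x.y
    *)                 echo "unknown" ;;
  esac
}
```

For example, `release_type 8.8.0-alpha6` prints `alpha`, while `release_type 8.7.10` prints `patch`.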
See this self-updating release timeline page.
Release Dependencies
Caveat: While Optimize is part of the C8 monorepo, it has its own release process separate from the C8 monorepo release process (which covers only Operate/Tasklist/Zeebe).
Component releases that the C8 monorepo releases depend on:
Components that depend on the C8 monorepo releases:
- C8 SaaS
- Helm Chart
- C8 Optimize
Artifacts
- Maven Central
- DockerHub
- GitHub releases
- SBOM information on FOSSA
Implementation
The C8 monorepo release process is implemented using BPMN models orchestrated by Camunda 8 SaaS, GitHub Action workflows, and manual tasks (called "User tasks"). The GitHub Action workflows listen for webhook events delivered via the repository_dispatch mechanism; those events are generated by C8 via the GitHub API.
BPMN models
The BPMN models are developed, versioned and tested in this (internal) repository. BPMN models for noteworthy processes are:
- Camunda Patch Release: run to perform patch releases for versions 8.6+. It gathers the required inputs from the release manager and then starts the “Camunda Release” process.
- Camunda Release: implements a unified release process for all release types since version 8.6:
  - is started by the release manager with all the required inputs
  - for non-patch releases, waits for the code freeze time
  - performs the release process for a specific version, including artifact generation, uploads, and QA
Caveat: Some places still refer to those BPMN models as “Zeebe Release” although with 8.6+ the processes are responsible for multiple components including Zeebe and C8 webapps.
Resources:
- Free BPMN course on Camunda Academy
- (Internal) Recording on the release process implementation
Runtime
Those BPMN models are tested and deployed from main via GitHub Actions into an (internal) Camunda 8 SaaS cluster. Resources:
Secrets are configured as part of the C8 SaaS cluster; the available ones cover APIs such as GitHub, Slack, Opsgenie, and Testrail. Used C8 connectors:
- GitHub Webhook
- HTTP (for other APIs)
- Slack (mainly for notifications)
GitHub Action workflows
The centerpiece for the camunda/camunda monorepo is a reusable workflow on main and each stable/* branch. This workflow handles the full release pipeline: building, scanning, publishing artifacts and Docker images, and notifying teams via Slack.
🔧 How It's Triggered
Releases are triggered in two ways:
- For dry runs, by using `on: schedule` triggers defined in the workflow itself (no Slack noise here).
- For real releases, by calling the GitHub Actions REST API directly with a `workflow_dispatch` request:

```shell
curl -X POST https://api.github.com/repos/camunda/camunda/actions/workflows/camunda-platform-release.yml/dispatches \
  -H "Authorization: token $TOKEN" \
  -d '{"ref": "stable/8.7", "inputs": {"releaseVersion": "8.7.x", "nextDevelopmentVersion": "8.7.y-SNAPSHOT", ...}}'
```

📝 This replaces the older dispatch-release-* workflows, which have now been removed.
For the ZPT (zeebe-process-test) project there is automation to create the release branch and to build and upload the Maven artifacts; only merging the release branch is manual.
Benchmark Tests
🤔 What are the benchmarks
- There’s a GKE Kubernetes cluster to run the benchmarks:
  - Cluster name: `camunda-benchmark-prod`
  - Maintained by the infra team
- This Kubernetes cluster has a monitoring stack installed (Prometheus, Grafana)
- There's a dashboard to observe the status of the benchmarks
- Every created benchmark has a dedicated namespace in the Kubernetes cluster (e.g. `c8-release-8-7-x`)
- There's a benchmark for every currently supported (maintained) version + the previously released alpha version.
- A benchmark is installed/upgraded by this GHA workflow
  - Under the hood, it runs a `helm upgrade --install` command for benchmark creation/update
    - installs a special Helm chart containing the load test applications
    - installs the standard Camunda 8 Helm chart
🍃 Benchmark flow
- A benchmark is automatically created via a GitHub Action call after a release candidate (RC) is triggered (GHA workflow)
  - Benchmark creation applies only to alpha, minor, and major releases.
  - Patches don't require a new benchmark: re-use the same namespace and update the applicable benchmark with the newest patch version.
- If new commits are merged into the release branch (major, minor, alpha) during the release, the corresponding benchmark needs to be updated to run the code from the latest commit of the release branch.
- At the end of the release process:
  - For alpha/minor/major releases, the benchmark for this version is updated to its final image (i.e. no RC running in the benchmark)
    - E.g. if version 8.8.0 is released, the benchmark should be running 8.8.0, not 8.8.0-RC1
    - Additionally, for alpha releases, the previous alpha benchmark for the current version is removed
      - E.g. if 8.8.0-alpha6 is released, the benchmark for 8.8.0-alpha5 is removed and one for 8.8.0-alpha6 keeps running
  - For patch releases, the benchmark for the minor version is updated to the released version
    - E.g. if version 8.7.10 is released, the benchmark for 8.7 (`c8-release-8-7-x`) is updated to use this version as part of the 8.7.10 patch release.
📁 Example
| Release Version | Benchmark Namespace in Kubernetes | Patch Release | Alpha Release |
|---|---|---|---|
| 8.5.x | c8-release-8-5-x | • Update benchmark to use newly released image version via GHA workflow • Other benchmarks are untouched | (does not happen) |
| 8.6.x | c8-release-8-6-x | • Update benchmark to use newly released image version via GHA workflow • Other benchmarks are untouched | (does not happen) |
| 8.7.x | c8-release-8-7-x | • Update benchmark to use newly released image version via GHA workflow • Other benchmarks are untouched | (does not happen) |
| 8.8.0-alphaN | c8-release-8-8-0-alphaN | (does not happen) | • c8-release-8-8-0-alphaN and c8-release-8-8-0-alpha(N+1) benchmarks coexist during the release of 8.8.0-alpha(N+1) • after the 8.8.0-alpha(N+1) release → c8-release-8-8-0-alphaN is deleted • Other benchmarks are untouched |
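The namespace scheme from the table can be expressed as a small helper. A sketch assuming the naming convention above; `benchmark_namespace` is a hypothetical name, not part of the actual workflow:

```shell
# Hypothetical helper mirroring the namespace scheme from the table above:
# patch lines map to c8-release-<major>-<minor>-x, alpha versions keep the
# full version string with dots replaced by dashes.
benchmark_namespace() {
  case "$1" in
    *-alpha*) echo "c8-release-$1" | tr '.' '-' ;;          # e.g. 8.8.0-alpha6
    *)        echo "c8-release-${1%.*}-x" | tr '.' '-' ;;   # e.g. 8.7.10 -> 8.7
  esac
}
```

For example, `benchmark_namespace 8.7.10` prints `c8-release-8-7-x`.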
More details on how benchmark tests work can be found in our reliability testing documentation.
👨‍🔧 Release Manager Ownership for Benchmark Issues
Owner during releases: The zeebe-release-manager owns benchmark/load-test issues for the release (for as long as this role exists).
When a release benchmark fails, the release manager (RM) must triage and report it:
- Include: the failing CI job link, version/RC + release line, and the Grafana dashboard link.
- If it's a deploy/infra/Helm/cluster-health issue, ping Reliability/Testing (and reference medics).
- If it's an application-level regression, escalate to the owning product team (usually Core Features first); CC Reliability/Testing if infra may be involved.
- Make sure medics are aware, since they watch daily/weekly/release load tests until alerts are in place.
Camunda 8 Testing Clusters
Camunda 8 testing clusters provide isolated environments for validating release builds, executing targeted test flows, and verifying features and bug fixes across all core components. The cluster creation and management processes can be manual or automated depending on the test type and the stage of the monorepo release process.
📡 Cluster Usage Scenarios
- Component Coverage: Identity, Operate, Optimize, and Tasklist.
- Pre-release Validation: Isolate new release builds before broad deployment and confirm fixes or features behave as expected.
- Test Isolation: Support E2E, chaos, benchmark, reliability, and load test types.
🚂 Monorepo Release Process Mapping
Manual Clusters
- Clusters are provisioned manually for manual testing flows. They are used to enable Webapp testing across the four main apps (Identity, Operate, Optimize, and Tasklist).
- Manual creation is carried out by QA engineers via UI - soon to be automated by CAMUNDA-32765.
- The process follows qa_manual_create_cluster.md.
- For each Release Candidate, a new manual test cluster is created with the current RC version.
Automated Clusters
- Automated flows use GitHub workflows like: zeebe-qa-testbench.yaml and zeebe-testbench.yaml to orchestrate the creation of clusters for benchmark, reliability, E2E, and chaos tests.
- The clusters for automated tests are created for every Release Candidate, mirroring the manual flow but managed programmatically.
- Additional details about test setup can be found in our reliability testing documentation.
👥 Ownership and Terminology
- Manual Test Clusters and Manual Tests: Owned by QA engineers.
- Automated Test Clusters and Automated Tests: Development & Maintainability owned by the Zeebe team, Release monitoring owned by Monorepo Release Manager, orchestrated via GitHub Actions.
- Post-release QA: Historically managed by the Monorepo Release Manager.
- Terminology Note: The label “QA” is a catch-all for testing steps in the release process, not a strict indicator of team ownership. Consider replacing “QA” with more precise terms such as “Test Validation” or “Release Testing” for documentation clarity. (CAMUNDA-35223)
Best Practices
For BPMN User Tasks:
- Automatically assign the task on creation to the correct person for better UX and ownership
For BPMN activities using the REST connector:
- Configure retries with an appropriate interval for idempotent requests, to work around network problems/short service interruptions
  - e.g. use 3 retries with a 10-second interval on `GET` requests
- Handle HTTP errors of the remote API via escalation events
  - Also consider using retries for transient errors
  - e.g. send a notification via Slack in #top-monorepo-release
- Use authentication to avoid API rate limits of GitHub and DockerHub for increased reliability
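In the BPMN model the retry policy is configured declaratively on the connector. As a shell analogy of the same idea, with a hypothetical `with_retries` helper:

```shell
# Retry an idempotent command a fixed number of times with a delay between
# attempts, mirroring the "3 retries, 10 seconds" guidance above.
with_retries() {
  tries="$1"; delay="$2"; shift 2
  n=0
  until "$@"; do
    n=$((n + 1))
    [ "$n" -ge "$tries" ] && return 1
    sleep "$delay"
  done
}

# e.g. with_retries 3 10 curl -fsS https://api.github.com/repos/camunda/camunda
```

Only use this pattern for idempotent requests; retried non-idempotent calls can duplicate externally visible effects.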
Calling and integrating with GitHub Actions workflows:
- Use the C8 REST connector to call the GitHub REST API to dispatch workflows
- Use GHA `workflow_dispatch` inputs to pass custom data as parameters, like release branch names or versions
- Avoid release managers polling for GHA workflow completion by either:
  - creating a 2nd BPMN activity that queries completion in a loop and makes the process wait for GHA, or
  - sending a Slack notification from the GHA workflow in #top-monorepo-release to notify the release manager
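For the dispatch call itself, the request body can be assembled before handing it to curl. A minimal sketch; `dispatch_payload` and the example version values are hypothetical, while the endpoint and input names follow the curl example in this document:

```shell
# Hypothetical helper building the workflow_dispatch payload for the
# GitHub REST API; input names mirror the curl example in this document.
dispatch_payload() {
  printf '{"ref": "%s", "inputs": {"releaseVersion": "%s", "nextDevelopmentVersion": "%s"}}' \
    "$1" "$2" "$3"
}

# Usage with the real dispatch endpoint (TOKEN and version values are placeholders):
# curl -X POST https://api.github.com/repos/camunda/camunda/actions/workflows/camunda-platform-release.yml/dispatches \
#   -H "Authorization: token $TOKEN" -d "$(dispatch_payload stable/8.7 8.7.5 8.7.6-SNAPSHOT)"
```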
Change Management & Testing
Changes to the release process are done in these steps:
- Implementation:
  - Check out this (internal) repository locally
  - Use the Desktop Modeler locally to modify the BPMN models
  - Adjust/extend the Java/Kotlin unit tests covering the relevant BPMN models
  - Create a Pull Request against `main` including screenshots + a description to explain the change
- Testing:
  - If testable in isolation/with a dry run: demonstrate that the change works
  - If not testable with a dry run: wait for the next release
- Review:
- Get a peer review from a Monorepo DevOps team member or C8 engineer familiar with the process
- Rollout:
- Merge the Pull Request
- If necessary, migrate currently running process instances in Operate to new version of the process
Issue Tracking
All problems, bugs and feature requests regarding the C8 release process are tracked using GitHub Issues.
For visibility and prioritization there is the (internal) Monorepo Release project board that tracks high-level issues.
For CI-related issues in the release process, also see our CI & Automation documentation.
DRIs
This section is subject to change in #28528.
Release Manager
There is one DRI called the “monorepo release manager” who oversees all running and newly launched release process instances end-to-end and is responsible for successfully finishing the releases (8.6+) according to their timelines. This DRI rotates every month (Slack groups etc. are manually updated) at the start of the month. Handover notes are documented in tabs here.
Some tasks during the release process are also taken care of by other DRIs. The release manager can also rely on help from medics.
The release manager is currently selected manually for each release.
The acting subject for release-process tasks and checklist items is the Monorepo Release Manager currently on shift when the step becomes due. For minor releases that span multiple monthly rotations, responsibility is handed over as part of the regular shift change together with the current checklist state and any open follow-up items.
The checklist itself is maintained collectively. Every MRM is expected to update it when a gap, obsolete step, or missing safeguard is discovered during their shift so the next handover starts from the latest known process.
Caveat: Some places still refer to this as “Zeebe release manager” although with 8.6+ the release manager is responsible for multiple components including Zeebe and C8 webapps.
Others
QA Release Manager: can help with questions around steps the QA team performs.
Caveat: Not all steps with "QA" in the name mean that the QA team is actually involved, so check twice
Zeebe Release Manager: can help with Zeebe-specific questions and tasks.
Backporting Guidelines
We want the release process for all supported versions 8.6+ to be as similar as possible, to reduce maintenance effort, surprises and mental load. Improvements and fixes to the release process should always apply to all supported versions, if possible.
For CI-related changes, refer to our CI & Automation documentation and the backporting guidelines.
Minor Release Considerations
Use the following checklist as the operational source of truth for every minor release.
Ownership model:
- The checklist is owned collectively by the MRMs and evolves through updates from each shift.
- The acting subject for each checklist item is the MRM on shift when that item becomes relevant, not necessarily the MRM who started preparing the minor.
- Open checklist items and context should be explicitly handed over at monthly MRM rotation.
Minor Release Readiness Checklist
0. Dates, ownership, and high-level alignment
- Confirm official minor release, feature freeze, and code freeze dates from the C8 Release Train. See Feature Freeze vs Code Freeze for definitions and timing.
- Confirm the Monorepo Release Manager (MRM) is available for feature freeze week, code freeze / branch creation, and the final RC window.
- Check for major holidays during RC and final release weeks, then adjust expectations with release stakeholders.
1. Around feature freeze / last alpha (branch strategy)
- Send feature freeze communication before the last alpha using the feature-freeze template and explicitly state that only bug fixes and stabilization are expected after freeze.
- Create `stable/<minor>` from `main` before the last alpha according to the early-stable strategy (i.e. create the `stable/<minor>` branch before the last alpha and branch all subsequent alpha/RC/final release branches from `stable/<minor>` instead of `main`).
- [DEPRECATED for +8.10] Mirror the same strategy in zeebe-process-test: create `stable/<minor>` from `main` and align release-branch handling.
- Create the `backport stable/<minor>` label in the monorepo.
- [DEPRECATED for +8.10] Create the `backport stable/<minor>` label in ZPT.
- Announce stable branch creation and the backport procedure (label + `/backport`) in the relevant engineering channels.
2. Versioning and branch plumbing (after last alpha branch exists)
- On monorepo `main`, bump all `pom.xml` versions to `8.(x+1).0-SNAPSHOT` using:

```shell
./mvnw release:update-versions -DdevelopmentVersion=8.(x+1).0-SNAPSHOT
```

- [DEPRECATED for +8.10] On ZPT `main`, bump versions to the next minor snapshot line.
- Update upgrade and migration test configuration so the previous minor upgrades to the new minor line.
- Confirm artifact expectations:
  - `stable/<minor>` produces `<minor>.0-SNAPSHOT`
  - `main` produces `<next-minor>.0-SNAPSHOT` (or a generic `SNAPSHOT` where expected)
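The bump target for the release:update-versions call above can be derived from the current minor. A minimal sketch; the `next_dev_version` helper is illustrative, not part of the tooling:

```shell
# Derive the next development version for main after cutting stable/<minor>.
# Illustrative helper; assumes 8.x.y version strings.
next_dev_version() {
  minor="${1#8.}"          # strip the major prefix: 8.9.0 -> 9.0
  minor="${minor%%.*}"     # keep the minor number: 9
  echo "8.$((minor + 1)).0-SNAPSHOT"
}

# ./mvnw release:update-versions -DdevelopmentVersion="$(next_dev_version 8.9.0)"
```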
3. CI, protections, and release workflow wiring
- Add `stable/<minor>` to the `unified-ci-merges-stable-branches` protection/ruleset configuration in infra-core.
- Verify the release BPMN configuration uses `stable/<minor>` as the source branch for minor SHAs and merge-back behavior before running the minor.
- Verify the code freeze date and monorepo release start date are filled in correctly when starting a minor release BPMN instance.
- Verify CI / SLO dashboards include `stable/<minor>` and surface regressions for that branch.
- Ensure a scheduled release dry-run exists for `stable/<minor>` and is green before starting the minor.
4. Optimize, Docker images, and artifacts
- Verify Optimize is included for the current minor in the monorepo release (`includeOptimize=true`, 8.9+ strategy).
- Verify stable branches build and publish Optimize Docker images for `<minor>-SNAPSHOT` and release tags.
5. Backports, RCs, and merge-backs
- Enforce the bug-fix backport rule after the last alpha branch cut: fixes merged to `main` must be backported to `stable/<minor>` to ship in that minor.
- For critical fixes after the branch cut, backport to both `stable/<minor>` and the active alpha/minor release branch; trigger a new RC if needed.
- Track minor backports via labels/board to avoid missing required fixes.
- Simulate the release-branch merge-back to `stable/<minor>` early to detect predictable conflicts.
6. Documentation and communication hygiene
- Keep this file current for minor branch strategy, bug-fix backport rules, and feature freeze vs code freeze definitions.
7. Watch-outs and sanity checks
- Verify no outdated temporary branch overrides or conditions remain from previous minors before starting a new minor BPMN instance.
- Verify preview/smoke-test workflows target `stable/<minor>` with existing `<minor>-SNAPSHOT` images.
Minor Release References
- Release process documentation
- C8 Release Train
- Minor Release Feature Freeze
- Issue #46249
- Issue #40009
- Issue #37374
- Issue #38074
Possible Issues When Cutting The Stable Branch
Zeebe update tests failing
- Symptom: `IllegalStateException: Snapshot is not compatible with current version: SkippedMinorVersion[from=8.8.0, to=8.10.0]`
- Context: a recurring issue around the stable-branch cut and version-line switch. See Christian's note: "Hi folks, every 6 months the same question..."
- Known workaround: disable the affected tests with an explicit follow-up and re-enable them after the final minor release is published.
CPT integration tests failing
- Symptom: `Can't get Docker image: RemoteDockerImage(imageName=camunda/camunda:8.9-SNAPSHOT)`
- Context: this happened due to wrong test naming during the stable-branch transition. See Remco's note: `@monorepo-ci-medic Backports to stable/8.9...`
- Known workaround: there is no generic workaround beyond fixing the incorrect test name or image reference.
Feature Freeze vs Code Freeze (Minor Releases)
Terminology:
- Release date: (in scope of the Monorepo) The day on which final Monorepo artifacts should be tested and available for a release
- Code freeze date: The day on which the release branch will be forked from the base branch
- Press release date: (out of scope of the Monorepo) The day on which the whole C8 release train should be finished
For C8 monorepo minor releases, we enforce two distinct stages to ensure quality and predictable delivery:
🔒 Feature Freeze (Minor Releases)
- Purpose: Lock in the feature scope for the upcoming minor release
- Timing: Occurs with the last alpha before the minor release (e.g., for `8.9.0`, this would be `8.9.0-alpha5`), on the day of the code freeze of the last alpha
- What Changes:
- ✅ All cross-component features targeted for the minor must be fully implemented, documented, and working end-to-end
- ❌ No new features, scope extensions, or risky changes after this point
- ✅ Bug fixes, stabilization work, and E2E testing continue
- Notification Process: Send a calendar invite to engineering teams (Core Features, Orchestration, QA, DevOps/Release, and other relevant teams) with:
  - Subject: `Camunda repo (Zeebe/Operate/Tasklist/Identity/Optimize) Release Minor <version> - Feature Freeze`
  - Body:

    Hey all,

    This appointment marks the feature freeze for the camunda/camunda repository: <minor_version> (minor).

    <last_alpha_version> is the last alpha before the minor and defines the scope of what will ship in <minor_version>. Any new features or scope changes must be merged before this point to make it into the minor.

    After this date, we focus on bug fixing, stabilization, and end-to-end testing for <minor_version>. New features should target future alphas/minors instead.

    Overall release manager is <release_manager_name>

    Have a nice week!
🚫 Code Freeze (Minor Releases)
- Purpose: Minimize code changes to ensure release stability
- Timing: Three weeks before the press release (e.g., release branch creation and thus code freeze for `8.9.0` is on March 24th, since the press release is on April 14th)
- What Changes:
- ✅ Only critical changes allowed (release blockers, severe regressions, security issues)
- ❌ Non-critical changes deferred to future alphas or patch releases
- Coordination: Scheduled via dedicated calendar invite managed by respective teams
📅 Important References
- Release Dates: All upcoming alpha, minor, and feature freeze dates are maintained on the C8 Release Train page
- Detailed Policy: See Minor Release Feature Freeze for comprehensive guidelines
Troubleshooting
How to correlate Git commits with deployed process version in C8 Operate?
There is currently no way in the Operate UI to see metadata (e.g. deploy timestamp) for, or download the BPMN XML of, a specific process version.
Workaround: Open a process instance in the Operate UI, open the browser's network tab, and look at the `/xml` endpoint response after a page refresh.
I lack permissions in the (internal) Camunda 8 SaaS cluster?
Reach out via Slack to ask for the permissions.
How to retry a camunda-platform-release.yml job that failed mid-build?
The camunda-platform-release.yml GHA workflow uploads artifacts (Maven, Docker) to certain 3rd-party services. If it fails unexpectedly after having already uploaded some, but not all, artifacts, some cleanup is needed. Only after the cleanup can you retry safely:
Procedure:
- remove new info on GitHub (reset branch to before new Git commits, delete new Git tag)
- drop staged Maven artifacts from Maven Central
- do nothing re Artifactory (artifacts will be overwritten)
- do nothing re DockerHub (artifacts will be overwritten)
- retrigger the failing camunda-platform-release.yml job
I want/need to retry/skip certain parts of a BPMN process?
In C8 Operate, you can use process instance modification to change variables of the process or move to a different activity. If that activity is earlier in the process, it will be retried. If it is later, it will skip intermediate steps.
All of the above operations can be dangerous and lead to unexpected behavior or inconsistencies (outdated/lacking variables, lacking preconditions for later steps, duplicate effects that are visible externally). Proceed with caution and ask another engineer to review beforehand!
How to remove accidentally published artifacts from Artifactory?
As a first step, you need to find out how to best identify the artifacts to be deleted:
- Do all affected artifacts (and only those) have a specific version (e.g. 8.99.0-dryrun)?
- Do you have a log of GitHub Actions workflow run uploading the files to identify them?
If you know a specific version (and ideally have the log of a GHA workflow run), you can use this script. The script source code has detailed usage instructions. Since the C8 monorepo currently publishes Operate, Tasklist and Zeebe, a suitable deletion command, for example for version 8.99.0-dryrun, is:

```shell
REPOSITORY=zeebe-io ARTIFACTORY_PATH="io/camunda*" VERSION=8.99.0-dryrun python delete_artifacts.py
REPOSITORY=camunda-operate ARTIFACTORY_PATH="io/camunda*" VERSION=8.99.0-dryrun python delete_artifacts.py
REPOSITORY=camunda-zeebe-tasklist ARTIFACTORY_PATH="io/camunda*" VERSION=8.99.0-dryrun python delete_artifacts.py
```
If you have a log of a GHA workflow run called log.txt, you can use grep to identify the exact URLs of all files uploaded to Artifactory:

```shell
cat log.txt | grep "Uploaded to camunda-nexus" | grep -oP "https://artifacts.camunda.com/[^\s]+"
```

To identify the affected Artifactory repositories from log.txt, extend the previous command and "group" by repository name:

```shell
cat log.txt | grep "Uploaded to camunda-nexus" | grep -oP "https://artifacts.camunda.com/[^\s]+" | cut -d'/' -f5 | sort | uniq
```
I have another problem?
There is a document with known problems and workarounds (if available) that should be consulted first, alongside open issues from Issue Tracking. The user tasks of the release process have documentation attached that gives guidance.
If there are still questions, reach out via Slack.
Consider opening an incident for serious issues (see below).
Incident Process
If you discover serious issues during the Monorepo Release process (while working on any of its subtasks), you can start an incident (per the usual process, with the /inc command in Slack).
Please select incident type: C8 Monorepo Release incident.
Who can start the incident:
- Anyone participating in the current release process (Release Manager, QA Engineers, etc.)
- Anyone from the Orchestration cluster teams
FAQ
1. Should I request a patch release or a customer-specific (hotfix) Docker image?
🔧 Request a customer-specific Docker image (hotfix) when:
- The fix has not yet been merged into main or a stable branch
- You want to test the fix early with a specific customer or in an internal environment (e.g., Camunda SaaS dev)
- You are not yet sure if the fix is correct or complete
➡️ How to: follow these instructions.
Note: Hotfixes are not part of the official release process and should not be used as a substitute for patch releases. Hotfixes should be used by teams to validate a solution without impacting the general release flow.
📦 Request a general patch release when:
- The fix has been merged and verified
- It’s relevant to all users or addresses a broad issue, not just one customer
- You’re ready to make the change available in an official release
➡️ How to: use the Request Monorepo Patch release Slack workflow in this channel and fill the form.
2. What's the git commit mechanics of the maven-release-plugin?
The release process roughly looks like this:
- a release branch is created (forked from `main` or `stable/x.y`)
- on the release branch, as part of the release process, the maven-release-plugin creates 2 commits sequentially:
  - commit 1 (example): bumps the pom.xml files to the version we want to release (e.g. `8.8.0-SNAPSHOT` -> `8.8.0-alpha6`)
    - A git tag for the release (e.g. `8.8.0-alpha6`) is created from this commit
    - The release is built from this commit.
  - commit 2 (example): bumps the pom.xml files to the next development version (e.g. `8.8.0-alpha6` -> `8.8.0-SNAPSHOT`)

If one needs to retry a failed release (assuming there is no need to clear the already-released artifacts), one needs to:
- delete these two commits from the git history
- delete the GitHub release and git tag from GitHub
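These two cleanup steps can be rehearsed safely on a throwaway repository before touching the real release branch. Everything below is local; the branch, tag, and file names are illustrative, and a real retry targets the actual release branch and remote:

```shell
# Self-contained rehearsal of the cleanup on a throwaway repo.
set -e
work=$(mktemp -d)
git init -q --bare "$work/origin"
git clone -q "$work/origin" "$work/repo" 2>/dev/null
cd "$work/repo"
git config user.email ci@example.com && git config user.name ci
echo base > pom.txt && git add pom.txt && git commit -qm "base"
echo 8.8.0-alpha6 > pom.txt && git commit -qam "release 8.8.0-alpha6"  # commit 1
git tag 8.8.0-alpha6                                                   # release tag
echo 8.8.0-SNAPSHOT > pom.txt && git commit -qam "next dev version"    # commit 2
git push -q origin HEAD:main --tags

# Cleanup: drop the two version-bump commits and the release tag,
# locally and on the remote
git reset -q --hard HEAD~2
git push -q --force-with-lease origin HEAD:main
git push -q origin :refs/tags/8.8.0-alpha6
git tag -d 8.8.0-alpha6 > /dev/null
```

Deleting the GitHub release itself additionally requires the GitHub UI or API; the sketch only covers the git-side cleanup.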
3. How do Monorepo releases relate to the Big Release Train?
The Monorepo release produces core backend artifacts, while the Big Release Train bundles downstream services and UI components. The train can only depart once the Monorepo artifacts are confirmed as released.
While Monorepo (camunda/camunda) releases artifacts for:
The big release train releases:
- Identity Management Component (camunda-cloud/identity)
- Connectors (camunda/connectors)
- Web Modeler (camunda/web-modeler)
- Monorepo ⭐ ← can only be done once monorepo artifacts are released (information gathered by the `confirm-success-release-train` form)
- Optimize (camunda/camunda)
- Console (camunda/camunda-cloud-management-apps)