Deprecation Notice
Caution
This community-maintained project will be archived.
- For the fully supported Camunda Spring SDK, refer to the docs.
- The community-maintained Operate API client was moved to a dedicated repository.
In the future, Camunda will expand the fully supported Camunda Spring SDK to cover the Camunda 8 REST API for the entire orchestration cluster.
Getting Started
- Version compatibility
- Examples
- Quickstart
- Add Spring Boot Starter to your project
- Configuring Camunda 8 SaaS connection
- Connect to Zeebe
- Implement job worker
- Writing test cases
- Run Connectors
- Connect to Operate
Documentation
This project allows you to leverage Zeebe and Operate within your Spring or Spring Boot environment.
This client does not pin any dependency versions. If you are facing dependency version issues, please manage the dependency versions in your project. If this client is not compatible with a required dependency version, please open an issue.
| Spring Zeebe version | JDK | Camunda version | Bundled Spring Boot version | Compatible Spring Boot versions |
|---|---|---|---|---|
| >= 8.5.0 | >= 17 | 8.5.0 | 3.2.5 | >= 3.x.x |
| >= 8.4.0 | >= 17 | 8.4.0 | 3.2.0 | >= 2.7.x, 3.x.x |
| >= 8.3.4 | >= 17 | 8.3.4 | 3.2.0 | >= 2.7.x, 3.x.x |
| >= 8.3.0 | >= 17 | 8.3.1 | 2.7.7 | >= 2.7.x, 3.x.x |
| >= 8.3.0 | >= 8 | 8.3.1 | 2.7.7 | >= 2.7.x |
Full examples, including test cases, are available here: Twitter review example, process solution template. Further, you might want to look into the `example/` folder.
Create a new Spring Boot project (e.g. using Spring initializr), open a pre-existing one you already have, or fork our Camunda 8 Process Solution Template.
Add the following Maven repository and dependency to your Spring Boot Starter project:
```xml
<repositories>
  <repository>
    <releases>
      <enabled>true</enabled>
    </releases>
    <snapshots>
      <enabled>false</enabled>
    </snapshots>
    <id>identity</id>
    <name>Camunda Identity</name>
    <url>https://artifacts.camunda.com/artifactory/camunda-identity/</url>
  </repository>
</repositories>

<dependency>
  <groupId>io.camunda.spring</groupId>
  <artifactId>spring-boot-starter-camunda</artifactId>
  <version>8.5.2</version>
</dependency>
```
Although Spring Zeebe has a transitive dependency on the Zeebe Java client, you can also add a direct dependency if you need to pin a concrete version in your `pom.xml` (though this is rarely necessary):
```xml
<dependency>
  <groupId>io.camunda</groupId>
  <artifactId>zeebe-client-java</artifactId>
  <version>8.5.0</version>
</dependency>
```
Note that if you are using `@Variable`, the compiler flag `-parameters` is required for Spring Zeebe versions higher than 8.3.1.
If using Maven:
```xml
<build>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-compiler-plugin</artifactId>
      <configuration>
        <compilerArgs>
          <arg>-parameters</arg>
        </compilerArgs>
      </configuration>
    </plugin>
  </plugins>
</build>
```
If using Gradle:
```groovy
tasks.withType(JavaCompile) {
  options.compilerArgs << '-parameters'
}
```
If using IntelliJ, add `-parameters` under:
Settings > Build, Execution, Deployment > Compiler > Java Compiler > Additional command line parameters
The default properties for setting up all connection details are grouped into connection modes, each with meaningful defaults that will make your life easier. The mode is set via `camunda.client.mode` and can be `simple`, `oidc`, or `saas`. Further usage of each mode is explained below.

Zeebe is now also configured with a URL (`http://localhost:26500` instead of `localhost:26500` plus a plaintext connection flag).
Connections to Camunda SaaS can be configured by creating the following entries in your `src/main/resources/application.yaml`:
```yaml
camunda:
  client:
    mode: saas
    auth:
      client-id: <your client id>
      client-secret: <your client secret>
    cluster-id: <your cluster id>
    region: <your cluster region>
```
If you set up a local dev cluster, your applications will use a cookie to authenticate. As long as the port configuration is the default, there is nothing to configure other than the corresponding mode:
```yaml
camunda:
  client:
    mode: simple
```
If you have different endpoints for your applications, want to disable a client, or need to adjust the username or password, you can configure this:
```yaml
camunda:
  client:
    mode: simple
    auth:
      username: demo
      password: demo
    zeebe:
      enabled: true
      gateway-url: http://localhost:26500
      base-url: http://localhost:8080
      prefer-rest-over-grpc: false
    operate:
      enabled: true
      base-url: http://localhost:8081
    tasklist:
      enabled: true
      base-url: http://localhost:8082
```
If you set up a Self-Managed cluster with Identity, Keycloak is used as the default identity provider. As long as the port configuration (from docker-compose, or port-forwarding with the Helm charts) is the default, you only need to set the corresponding mode plus client credentials:
```yaml
camunda:
  client:
    mode: oidc
    auth:
      client-id: <your client id>
      client-secret: <your client secret>
```
If you have different endpoints for your applications or want to disable a client, you can configure this:
```yaml
camunda:
  client:
    mode: oidc
    tenant-ids:
      - <default>
    auth:
      oidc-type: keycloak
      issuer: http://localhost:18080/auth/realms/camunda-platform
    zeebe:
      enabled: true
      gateway-url: http://localhost:26500
      base-url: http://localhost:8080
      prefer-rest-over-grpc: false
      audience: zeebe-api
    operate:
      enabled: true
      base-url: http://localhost:8081
      audience: operate-api
    tasklist:
      enabled: true
      base-url: http://localhost:8082
      audience: tasklist-api
    optimize:
      enabled: true
      base-url: http://localhost:8083
      audience: optimize-api
    identity:
      enabled: true
      base-url: http://localhost:8084
      audience: identity-api
```
You can inject the `ZeebeClient` and work with it, e.g. to create new workflow instances:

```java
@Autowired
private ZeebeClient client;
```
Use the `@Deployment` annotation:

```java
@SpringBootApplication
@EnableZeebeClient
@Deployment(resources = "classpath:demoProcess.bpmn")
public class MySpringBootApplication {
  // ...
}
```
This annotation internally uses the Spring resource loader mechanism, which is quite powerful and can, for example, also deploy multiple files at once:
```java
@Deployment(resources = {"classpath:demoProcess.bpmn", "classpath:demoProcess2.bpmn"})
```
or define wildcard patterns:
```java
@Deployment(resources = "classpath*:/bpmn/**/*.bpmn")
```
To implement a job worker, annotate a method with `@JobWorker`:

```java
@JobWorker(type = "foo")
public void handleJobFoo(final ActivatedJob job) {
  // do whatever you need to do
}
```
See documentation below for a more in-depth discussion on parameters and configuration options of job workers.
You can start up an in-memory test engine and do assertions by adding this Maven dependency:
```xml
<dependency>
  <groupId>io.camunda</groupId>
  <artifactId>spring-zeebe-test</artifactId>
  <version>${spring-zeebe.version}</version>
  <scope>test</scope>
</dependency>
```
Note that the test engine requires Java version >= 21. If you cannot run on this Java version, you can use Testcontainers instead, which requires a local Docker installation on the developer machine. Use this dependency:
```xml
<!--
  Alternative dependency if you cannot run Java 21, leveraging Testcontainers instead.
  Make sure NOT to have spring-zeebe-test on the classpath in parallel!
-->
<dependency>
  <groupId>io.camunda</groupId>
  <artifactId>spring-zeebe-test-testcontainer</artifactId>
  <version>${spring-zeebe.version}</version>
  <scope>test</scope>
</dependency>
```
Using Maven profiles you can also switch the test dependencies based on the available Java version.
Then, start up the test engine in your test case by adding `@ZeebeSpringTest`:

```java
@SpringBootTest
@ZeebeSpringTest
public class TestMyProcess {
  // ...
}
```
An example test case is available here.
Please do not use `zeebeTestEngine.waitForBusyState(...)` to wait for a timer. This will not work, as it is also triggered by an incoming job activation.
The Spring Zeebe project previously included the runtime for Camunda 8 Connectors. It has been moved to the separate Connectors project. To run Connectors, you can now use the following dependency in your project:
```xml
<dependency>
  <groupId>io.camunda.connector</groupId>
  <artifactId>spring-boot-starter-camunda-connectors</artifactId>
  <version>${connectors.version}</version>
</dependency>
```
To configure the Connector Runtime, use the properties explained here: Camunda Connector Runtime
If you have previously used the pure Spring Zeebe project to run Connectors, you should migrate to the new dependency.
You can find the latest version of Connectors on this page. Consult the Connector SDK for details on Connectors in general.
You can inject the `CamundaOperateClient` and work with it, e.g. to get and search process instances:

```java
@Autowired
private CamundaOperateClient client;
```
You can configure the job type via the `@JobWorker` annotation:

```java
@JobWorker(type = "foo")
public void handleJobFoo() {
  // handles jobs of type 'foo'
}
```
If you don't specify the `type`, the method name is used as the default:

```java
@JobWorker
public void foo() {
  // handles jobs of type 'foo'
}
```
As a third possibility, you can set a default job type:
```yaml
camunda:
  client:
    zeebe:
      defaults:
        type: foo
```
This is used for all workers that do not set a task type via the annotation.
You can specify that you only want to fetch some variables (instead of all) when executing a job, which can decrease load and improve performance:
```java
@JobWorker(type = "foo", fetchVariables = {"variable1", "variable2"})
public void handleJobFoo(final JobClient client, final ActivatedJob job) {
  String variable1 = (String) job.getVariablesAsMap().get("variable1");
  System.out.println(variable1);
  // ...
}
```
The `@Variable` annotation provides a shortcut to make variable retrieval simpler, including the type cast:

```java
@JobWorker(type = "foo")
public void handleJobFoo(final JobClient client, final ActivatedJob job, @Variable String variable1) {
  System.out.println(variable1);
  // ...
}
```
You can also map the process variables into your own class (comparable to `getVariablesAsType()` in the Java Client API) by using the `@VariablesAsType` annotation. In the example below, `MyProcessVariables` refers to your own class:

```java
@JobWorker(type = "foo")
public ProcessVariables handleFoo(@VariablesAsType MyProcessVariables variables) {
  // do whatever you need to do
  variables.getMyAttributeX();
  variables.setMyAttributeY(42);
  // return the variables object if something has changed, so the changes are submitted to Zeebe
  return variables;
}
```
You can access variables of a process via the ActivatedJob object, which is passed into the method if it is a parameter:
```java
@JobWorker(type = "foo")
public void handleJobFoo(final ActivatedJob job) {
  String variable1 = (String) job.getVariablesAsMap().get("variable1");
  System.out.println(variable1);
  // ...
}
```
With `@Variable`, `@VariablesAsType`, or `fetchVariables`, you limit which variables are loaded from the workflow engine. You can also override this and force all variables to be loaded anyway:

```java
@JobWorker(type = "foo", fetchAllVariables = true)
public void handleJobFoo(@Variable String variable1) {
}
```
Implicit `fetchVariables` (with `@Variable` or `@VariablesAsType`) is disabled as soon as you inject the `ActivatedJob` yourself.
By default, the `autoComplete` attribute is set to `true` for any job worker. Note that this default auto-completion behavior was introduced with 8.1 and was different before; see #239 for details.
In this case, the Spring integration will take care of job completion for you:
```java
@JobWorker(type = "foo")
public void handleJobFoo(final ActivatedJob job) {
  // do whatever you need to do
  // no need to call client.newCompleteCommand()...
}
```
Which is the same as:
```java
@JobWorker(type = "foo", autoComplete = true)
public void handleJobFoo(final ActivatedJob job) {
  // ...
}
```
Note that the code within the handler method needs to be executed synchronously, as completion is triggered right after the method has finished.
When using `autoComplete`, you can:

- Return a `Map`, `String`, `InputStream`, or `Object`, which will then be added to the process variables
- Throw a `ZeebeBpmnError`, which results in a BPMN error being sent to Zeebe
- Throw any other `Exception`, which leads to a failure handed over to Zeebe
```java
@JobWorker(type = "foo")
public Map<String, Object> handleJobFoo(final ActivatedJob job) {
  // some work
  if (successful) {
    // some data is returned to be stored as process variables
    return variablesMap;
  } else {
    // problem shall be indicated to the process:
    throw new ZeebeBpmnError("DOESNT_WORK", "This does not work because...");
  }
}
```
Your job worker code can also complete the job itself. This gives you more control over when exactly you want to complete the job (e.g. allowing the completion to be moved to reactive callbacks):
```java
@JobWorker(type = "foo", autoComplete = false)
public void handleJobFoo(final JobClient client, final ActivatedJob job) {
  // do whatever you need to do
  client.newCompleteCommand(job.getKey())
      .send()
      .exceptionally(throwable -> {
        throw new RuntimeException("Could not complete job " + job, throwable);
      });
}
```
Ideally, you don't use blocking behavior like `send().join()`, as this blocks the thread while waiting for the issued command to be executed on the workflow engine. While this is very straightforward to use and produces easy-to-read code, blocking code is limited in terms of scalability.

That's why the worker above showed a different pattern (using `exceptionally`); often you might also want to use the `whenComplete` callback:

```java
send().whenComplete((result, exception) -> {})
```

This registers a callback to be executed when the command on the workflow engine has either been executed or resulted in an exception. This allows for parallelism, and is discussed in more detail in this blog post about writing good workers for Camunda Cloud.
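The callback shape itself is plain `CompletableFuture` behavior; a generic sketch without any Zeebe types, where the `sendResult` future merely stands in for what `send()` returns:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.atomic.AtomicReference;

public class WhenCompleteSketch {
  public static String handle() {
    AtomicReference<String> outcome = new AtomicReference<>("pending");
    // Simulated async command result, standing in for send()
    CompletableFuture<String> sendResult = CompletableFuture.supplyAsync(() -> "completed");
    // Register a callback instead of blocking with join():
    // the calling thread is free to continue immediately
    CompletableFuture<String> done = sendResult.whenComplete((result, exception) -> {
      if (exception != null) {
        outcome.set("failed: " + exception.getMessage());
      } else {
        outcome.set(result);
      }
    });
    done.join(); // only for this demo, to make the outcome observable
    return outcome.get();
  }

  public static void main(String[] args) {
    System.out.println(handle()); // prints "completed"
  }
}
```

In a real worker you would not call `join()` at all; the callback handles both the success and failure branches asynchronously.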
Note that when completing jobs programmatically, you must specify `autoComplete = false`. Otherwise, there is a race condition between your programmatic job completion and the Spring integration's job completion, which can lead to unpredictable results.
You can use the `@CustomHeaders` annotation for a parameter to retrieve custom headers for a job:

```java
@JobWorker(type = "foo")
public void handleFoo(@CustomHeaders Map<String, String> headers) {
  // do whatever you need to do
}
```
Of course, you can combine annotations, for example `@VariablesAsType` and `@CustomHeaders`:

```java
@JobWorker
public ProcessVariables foo(@VariablesAsType ProcessVariables variables, @CustomHeaders Map<String, String> headers) {
  // do whatever you need to do
  return variables;
}
```
Whenever your code hits a problem that should lead to a BPMN error being raised, you can simply throw a ZeebeBpmnError providing the error code used in BPMN:
```java
@JobWorker(type = "foo")
public void handleJobFoo() {
  // some work
  if (!successful) {
    // problem shall be indicated to the process:
    throw new ZeebeBpmnError("DOESNT_WORK", "This does not work because...");
  }
}
```
If you don't want to use a `ZeebeClient` in certain scenarios, you can switch it off by setting:
```yaml
camunda:
  client:
    zeebe:
      enabled: false
```
If you build a worker that only serves one thing, it might also be handy to define the worker job type globally rather than in the annotation:
```yaml
camunda:
  client:
    zeebe:
      defaults:
        type: foo
```
You can configure the number of jobs that are polled from the broker to be worked on in this client, as well as the size of the thread pool that handles them:

```yaml
camunda:
  client:
    zeebe:
      defaults:
        max-jobs-active: 32
        execution-threads: 1
```
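Conceptually, `execution-threads` sizes the pool that runs your handlers, while `max-jobs-active` caps how many polled jobs are in flight at once; a simplified stand-alone sketch of that relationship (plain executor code, not SDK internals):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class WorkerPoolSketch {
  // Simplified model: 'jobs' activated jobs handled by 'threads' executor threads.
  public static int runJobs(int jobs, int threads) throws InterruptedException {
    ExecutorService pool = Executors.newFixedThreadPool(threads);
    AtomicInteger completed = new AtomicInteger();
    for (int i = 0; i < jobs; i++) {
      pool.submit(completed::incrementAndGet); // stand-in for a job handler invocation
    }
    pool.shutdown();
    pool.awaitTermination(5, TimeUnit.SECONDS);
    return completed.get();
  }

  public static void main(String[] args) throws InterruptedException {
    // With 32 active jobs and 1 execution thread, jobs queue up behind the single thread
    System.out.println(runJobs(32, 1)); // prints 32
  }
}
```

With the defaults above, 32 jobs can be active but only one handler runs at a time; the rest wait in the queue, which is why asynchronous handler code scales better than growing the pool.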
For a full set of configuration options, please see `CamundaClientProperties.java`.

Note that we generally do not advise using a thread pool for workers, but rather implementing asynchronous code; see Writing Good Workers.
If you need to customize the `ObjectMapper` that the Zeebe client uses to work with variables, you can declare a bean of type `io.camunda.zeebe.client.api.JsonMapper` like this:

```java
import com.fasterxml.jackson.databind.DeserializationFeature;
import com.fasterxml.jackson.databind.ObjectMapper;
import io.camunda.zeebe.client.api.JsonMapper;
import io.camunda.zeebe.client.impl.ZeebeObjectMapper;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
class MyConfiguration {
  @Bean
  public JsonMapper jsonMapper() {
    ObjectMapper objectMapper = new ObjectMapper()
        .configure(DeserializationFeature.ACCEPT_EMPTY_ARRAY_AS_NULL_OBJECT, true);
    return new ZeebeObjectMapper(objectMapper);
  }
}
```
You can disable workers via the `enabled` parameter of the `@JobWorker` annotation:

```java
class SomeClass {
  @JobWorker(type = "foo", enabled = false)
  public void handleJobFoo() {
    // worker's code - now disabled
  }
}
```
You can also override this setting via your `application.yaml` file:

```yaml
camunda:
  client:
    zeebe:
      override:
        foo:
          enabled: false
```
This is especially useful if you have a bigger code base with many workers but want to start only some of them. Typical use cases are:
- Testing: You only want one specific worker to run at a time
- Load Balancing: You want to control which workers run on which instance of cluster nodes
- Migration: There are two applications, and you want to migrate a worker from one to the other. With this switch, you can simply disable workers via configuration in the old application once they are available in the new one.
To disable all workers but still have the Zeebe client available, you can use:

```yaml
camunda:
  client:
    zeebe:
      defaults:
        enabled: false
```
You can override the `@JobWorker` annotation's values, as in the example above where the `enabled` property is overridden:

```yaml
camunda:
  client:
    zeebe:
      override:
        foo:
          enabled: false
```
In this case, `foo` is the type of the worker that we want to customize.
You can override all supported configuration options for a worker, e.g.:
```yaml
camunda:
  client:
    zeebe:
      override:
        foo:
          timeout: PT10S
```
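The `timeout` value above uses the ISO-8601 duration format, which Spring Boot binds via `java.time.Duration`; a quick sketch of how such values are interpreted:

```java
import java.time.Duration;

public class TimeoutFormat {
  // ISO-8601 durations, as used for worker timeout overrides (e.g. PT10S)
  public static long seconds(String iso) {
    return Duration.parse(iso).getSeconds();
  }

  public static void main(String[] args) {
    System.out.println(seconds("PT10S")); // prints 10
    System.out.println(seconds("PT5M"));  // prints 300
  }
}
```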
You could also provide a custom class that customizes the `@JobWorker` configuration values by implementing the `io.camunda.zeebe.spring.client.annotation.customizer.ZeebeWorkerValueCustomizer` interface.
Please read about this feature in the docs upfront.
To control job streaming on the Zeebe client, you can configure it:
```yaml
camunda:
  client:
    zeebe:
      defaults:
        stream-enabled: true
```
This also works for every individual worker:
```yaml
camunda:
  client:
    zeebe:
      override:
        foo:
          stream-enabled: true
```
When using multi-tenancy, the Zeebe client will connect to the `<default>` tenant. To control this, you can configure:

```yaml
camunda:
  client:
    tenant-ids:
      - <default>
      - foo
```
Additionally, you can set tenant ids on the job worker level by using the annotation:

```java
@JobWorker(tenantIds = "myOtherTenant")
```
You can override this property as well:

```yaml
camunda:
  client:
    zeebe:
      override:
        foo:
          tenant-ids:
            - <default>
            - foo
```
The Spring Zeebe starter provides some out-of-the-box metrics that can be leveraged via Spring Actuator. Whenever Actuator is on the classpath, you can access the following metrics:

- `camunda.job.invocations`: Number of invocations of job workers (tagging the job type)
- `camunda.connector.inbound.invocations`: Number of invocations of any inbound connectors (tagging the connector type)
- `camunda.connector.outbound.invocations`: Number of invocations of any outbound connectors (tagging the connector type)
For all of those metrics, the following actions are recorded:

- `activated`: The job/connector was activated and started to process an item
- `completed`: The processing was completed successfully
- `failed`: The processing failed with some exception
- `bpmn-error`: The processing completed by throwing a BpmnError (which means there was no technical problem)
In a default setup, you can enable metrics to be served via HTTP:

```yaml
management:
  endpoints:
    web:
      exposure:
        include: metrics
```
And then access them via http://localhost:8080/actuator/metrics/.
This project adheres to the Contributor Covenant Code of Conduct. By participating, you are expected to uphold this code. Please report unacceptable behavior as soon as possible.