This repository has been archived by the owner on Jul 9, 2022. It is now read-only.



spring-cloud-dataflow-metrics-collector is no longer actively maintained by VMware, Inc.

Spring Cloud Data Flow Metrics Collector

The metrics collector is a companion application to the Spring Cloud Data Flow server.

It collects metrics emitted by Spring Cloud Stream apps and groups them together around the stream definition that Data Flow used to deploy them.

All of the out-of-the-box (OOB) Spring Cloud App Starters already bundle the metrics emitter module. You can enable metric emission by setting the destination name for the metrics binding, e.g. <DESTINATION_NAME>
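As a concrete sketch (the property name below comes from the Spring Cloud Stream documentation rather than this README, and the jar name is illustrative; verify both against your setup):

```shell
# Assumption: spring.cloud.stream.bindings.applicationMetrics.destination is
# the Spring Cloud Stream property naming the metrics destination; "metrics"
# matches the collector's default. The jar name is hypothetical.
java -jar my-stream-app.jar \
  --spring.cloud.stream.bindings.applicationMetrics.destination=metrics
```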

Starting with Spring Cloud Stream 2.0, the default metrics support has been switched to use Micrometer. See Metrics Emitter for more details. For information on how to configure the metrics-emitter for Spring Cloud Stream 1.x applications, see here.

The metrics collector 2.x supports collecting metrics from both Spring Cloud Stream 1.x and 2.x applications.

| Uber Jar (HTTP Link) | Docker Hub Link |
| --- | --- |
| metrics-collector-rabbit | `docker pull metrics-collector-rabbit:1.0.0.RELEASE` |
| metrics-collector-kafka-09 | `docker pull metrics-collector-kafka-09:1.0.0.RELEASE` |
| metrics-collector-kafka-10 | `docker pull metrics-collector-kafka-10:1.0.0.RELEASE` |


Because apps could use different binder implementations (RabbitMQ, Kafka, JMS), the collector is built using the same support provided by the App Starters to create an executable uber jar artifact per binder.

To build, you first need to generate the source code for the apps of each binder:

./mvnw clean install -PgenerateApps

This will generate an apps folder like the one below:

├── metrics-collector-kafka
├── metrics-collector-rabbit
└── pom.xml

So, assuming your environment has apps deployed using RabbitMQ, cd into the metrics-collector-rabbit folder and run

./mvnw package

You should find the uber jar of the collector inside your target folder.
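Putting the build steps together, a minimal end-to-end sketch (run from the repository root; folder and artifact names may vary by version):

```shell
# Generate binder-specific app sources, then package the RabbitMQ flavor.
./mvnw clean install -PgenerateApps
cd apps/metrics-collector-rabbit
./mvnw package
# The collector uber jar is now in target/
ls target/*.jar
```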


The collector is an uber jar that follows the same principles as any Spring Cloud Stream app. This means you need to provide connection information for the broker that you are using; just follow the instructions in the Spring Cloud Stream docs on how to configure each binder. If you are deploying on a platform such as Cloud Foundry, you only need to bind a RabbitMQ service to the collector.
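For example, a minimal sketch of pointing the RabbitMQ flavor at a non-default broker; the `spring.rabbitmq.*` properties are standard Spring Boot ones, while the host, credentials, and jar name are placeholders:

```shell
# Standard Spring Boot RabbitMQ connection properties; all values are placeholders.
java -jar metrics-collector-rabbit-2.0.0.BUILD-SNAPSHOT.jar \
  --spring.rabbitmq.host=rabbit.example.com \
  --spring.rabbitmq.port=5672 \
  --spring.rabbitmq.username=guest \
  --spring.rabbitmq.password=guest
```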


The default destination that the collector listens to is named metrics. You can override this default by setting the property <DESTINATION_NAME>. This should match the destination that Spring Cloud Stream applications use to send metrics, which is set using the property <DESTINATION_NAME>.

Assuming apps have been deployed with metrics emission configured: for the RabbitMQ binder, the collector will create an anonymous consumer bound to an exchange called metrics; for the Kafka binder, the collector creates a Kafka topic named metrics.

Controlling eviction

Internally, the collector maintains a cache of the metrics it receives. The default metric emission interval is every 60 seconds for SCSt 2.x applications and every 5 seconds for SCSt 1.x applications, but it can be tuned per application using Spring Boot’s metrics exporter scheduling controls; please refer to the docs here to configure your applications.

The default for the collector is to evict any metric reading that was not updated over the past 90 seconds, so that SCSt 1.x and 2.x applications are both supported. You can change this value via a configuration property, using java.time.Duration notation as the value, e.g. 10s.

Note: It is important that the eviction time is set to a value higher than the emission time.


The collector has security enabled by default. You can specify the username and password using the Spring Boot 2.0 security properties.
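For instance, a sketch using the standard Spring Boot 2.0 user properties (`spring.security.user.name` and `spring.security.user.password`; the credential values and jar name are placeholders):

```shell
# Spring Boot 2.0 default-user properties; values are placeholders.
java -jar metrics-collector-rabbit-2.0.0.BUILD-SNAPSHOT.jar \
  --spring.security.user.name=collector \
  --spring.security.user.password=s3cret
```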

E2E Cheat sheet

The following is a sample of the commands one can use to get the collector up and running and to see some metrics in the Data Flow UI.

java -jar target/metrics-collector-rabbit-2.0.0.BUILD-SNAPSHOT.jar

java -jar spring-cloud-dataflow-server-local/target/spring-cloud-dataflow-server-local-1.5.0.BUILD-SNAPSHOT.jar

Register SCSt 1.x Apps:
app import --uri

stream create --name foostream --definition "time | log"
stream deploy --name foostream --properties "deployer.*.count=2,app.*"
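Once the stream is deployed, one way to sanity-check that metrics are flowing is to query the collector's REST endpoint directly (the path below follows the Data Flow monitoring docs; the port and credentials assume a default local setup and may differ):

```shell
# Assumption: the collector runs on localhost:8080 with default security enabled.
curl -u user:password http://localhost:8080/collector/metrics/streams
```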

