
Kafka ZooKeeper Exporter

A daemon that exposes Kafka cluster state stored in ZooKeeper.


Metrics exported by kafka_zookeeper_exporter provide a cluster-level overview of the entire cluster and can be used alongside jmx_exporter, which provides broker-level data. jmx_exporter exports what each broker believes to be true, but this information can be incorrect in case of a network partition or other split-brain issues. ZooKeeper, on the other hand, is the source of truth for the entire cluster configuration and runtime status, so the metrics exported from it are the best representation of the overall cluster status.



Number of partitions configured for the given topic.


Number of replicas configured for the given topic.


This metric has the value 1 for the replica that is currently the leader of the given partition.


Each Kafka partition has a list of replicas; the first replica is the preferred (default) leader. This metric has the value 1 if the current partition leader is the preferred one.


This metric indicates whether the given replica is in sync with the partition leader.
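The metric values above can all be derived from Kafka's standard ZooKeeper layout, where each partition's state lives at `/brokers/topics/<topic>/partitions/<n>/state` as JSON containing the current `leader` and the in-sync replica set (`isr`). This is not the exporter's actual code, just a minimal sketch of how those per-replica values fall out of one state znode and the partition's replica list:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// partitionState mirrors the JSON Kafka stores at
// /brokers/topics/<topic>/partitions/<n>/state in ZooKeeper.
type partitionState struct {
	Leader int   `json:"leader"`
	ISR    []int `json:"isr"`
}

// replicaMetrics derives per-replica values (leader, leader-is-preferred,
// in-sync; each 0 or 1) from the partition's replica list and its state
// znode. The first entry of replicas is the preferred leader.
func replicaMetrics(replicas []int, stateJSON []byte) (map[int][3]int, error) {
	var s partitionState
	if err := json.Unmarshal(stateJSON, &s); err != nil {
		return nil, err
	}
	isr := make(map[int]bool, len(s.ISR))
	for _, r := range s.ISR {
		isr[r] = true
	}
	out := make(map[int][3]int, len(replicas))
	for i, r := range replicas {
		leader, preferred, inSync := 0, 0, 0
		if r == s.Leader {
			leader = 1
		}
		// Leader is preferred only when the current leader is the
		// first replica in the configured replica list.
		if i == 0 && r == s.Leader {
			preferred = 1
		}
		if isr[r] {
			inSync = 1
		}
		out[r] = [3]int{leader, preferred, inSync}
	}
	return out, nil
}

func main() {
	// Hypothetical partition: replicas [1 2 3], but broker 2 is the
	// current leader and broker 1 has fallen out of the ISR.
	state := []byte(`{"leader":2,"isr":[2,3]}`)
	m, err := replicaMetrics([]int{1, 2, 3}, state)
	if err != nil {
		panic(err)
	}
	fmt.Println(m[1], m[2], m[3])
}
```

In this example, replica 2 is the leader but not preferred (replica 1 is first in the list), and replica 1 is neither leading nor in sync, which is exactly the split-brain-resistant view described above.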


go get -u
cd $GOPATH/src/


Start the exporter

./kafka_zookeeper_exporter <flags>

To see the list of available flags, run

./kafka_zookeeper_exporter -h

Send a request to collect metrics

curl localhost:9381/kafka?zookeeper=,mytopic2


  • zookeeper - required; address of the ZooKeeper ensemble used by Kafka; multiple addresses can be given, separated by commas
  • chroot - path inside ZooKeeper where the Kafka cluster data is stored; has to be omitted if Kafka resides in the root of ZooKeeper
  • topics - optional; list of topics to collect metrics for; if empty or missing, all topics are collected
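Putting the parameters above together, a full scrape URL looks like the following. The ensemble addresses, chroot, and topic names here are purely illustrative:

```shell
# Hypothetical ZooKeeper ensemble, chroot, and topics for illustration.
ZK="zk1:2181,zk2:2181,zk3:2181"
URL="localhost:9381/kafka?zookeeper=${ZK}&chroot=/kafka/mycluster&topics=mytopic1,mytopic2"
echo "$URL"   # fetch it with: curl "$URL"
```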

Prometheus configuration

Example Prometheus scrape job configuration:

- job_name: kafka_zookeeper_exporter_mycluster
  metrics_path: /kafka
  scheme: http
  params:
    zookeeper: [',']
    chroot: ['/kafka/mycluster']
  static_configs:
    - targets:
        # hostname and port `kafka-zookeeper-exporter` is listening on
        - myserver:9381

This example uses static_configs to configure the scrape target. See the Prometheus docs for other ways to configure it.
