
RabbitMQ to Kafka bridge

The main idea is to read messages from the configured exchanges in RabbitMQ and send them to Kafka.

The application uses an intermediate persistent storage to keep the messages it has read in case Kafka is unavailable.

The service is written in Go and can be built with the Go compiler version 1.6 or above.


Application configuration

The application is configured with environment variables or a config file in one of several formats: JSON, TOML, YAML, HCL, or Java properties.

By default the application tries to read the config file from /etc/kandalf/conf/config.<ext> and ./config.<ext>. You can change the path with the -c <file_path> or --config <file_path> application flag; see the example below. If no config file is found, the config loader falls back to reading config values from environment variables.
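For example, a minimal sketch of starting the service with an explicit config file (the binary name comes from cmd/kandalf; the path is illustrative):

# read configuration from a custom location instead of the default paths
kandalf --config /opt/kandalf/config.yml

# equivalent short form
kandalf -c /opt/kandalf/config.yml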

Environment variables

  • RABBIT_DSN - RabbitMQ server DSN
  • STORAGE_DSN - Permanent storage DSN, where the scheme is the storage type. The following storage types are currently supported:
    • Redis - requires a key query parameter in the DSN that is used as the Redis storage key, e.g. redis://localhost:6379/?key=kandalf
  • LOG_* - Logging settings, see hellofresh/logging-go for details
  • KAFKA_BROKERS - Comma-separated list of Kafka brokers
  • KAFKA_MAX_RETRY - Total number of times to retry sending a message to Kafka (default: 5)
  • KAFKA_PIPES_CONFIG - Path to RabbitMQ-Kafka bridge mappings config, see details below (default: /etc/kandalf/conf/pipes.yml)
  • STATS_DSN - Stats host, see hellofresh/stats-go for usage details.
  • STATS_PREFIX - Stats prefix, see hellofresh/stats-go for usage details.
  • WORKER_CYCLE_TIMEOUT - Main application bridge worker cycle timeout, used to avoid CPU overload; must be a valid duration string (default: 2s)
  • WORKER_CACHE_SIZE - Max number of messages stored in memory before trying to publish them to Kafka (default: 10)
  • WORKER_CACHE_FLUSH_TIMEOUT - Max amount of time messages are stored in memory before trying to publish them to Kafka; must be a valid duration string (default: 5s)
  • WORKER_STORAGE_READ_TIMEOUT - Timeout between attempts to read persisted messages from storage for publishing to Kafka; must be at least twice WORKER_CYCLE_TIMEOUT and a valid duration string (default: 10s)
  • WORKER_STORAGE_MAX_ERRORS - Max number of consecutive storage read errors before the worker stops reading in the current read cycle; the next read cycle starts after the WORKER_STORAGE_READ_TIMEOUT interval (default: 10)
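Taken together, a minimal sketch of configuring the service entirely through the environment variables above (all DSN, host, and broker values are illustrative):

# RabbitMQ and persistent storage
export RABBIT_DSN="amqp://user:password@rabbitmq:5672"
export STORAGE_DSN="redis://redis:6379/?key=kandalf"

# Kafka brokers and pipes config
export KAFKA_BROKERS="kafka-1:9092,kafka-2:9092"
export KAFKA_PIPES_CONFIG="/etc/kandalf/conf/pipes.yml"

# worker tuning: the storage read timeout must be at least 2x the cycle timeout
export WORKER_CYCLE_TIMEOUT="2s"
export WORKER_STORAGE_READ_TIMEOUT="10s"

kandalf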

Config file (YAML example)

Config should have the following structure:

logLevel: "info"                                    # same as env LOG_LEVEL
rabbitDSN: "amqp://user:password@rmq"               # same as env RABBIT_DSN
storageDSN: "redis://redis.local/?key=storage:key"  # same as env STORAGE_DSN
kafka:
  brokers:                                          # same as env KAFKA_BROKERS
    - ""
    - ""
  maxRetry: 5                                       # same as env KAFKA_MAX_RETRY
  pipesConfig: "/etc/kandalf/conf/pipes.yml"        # same as env KAFKA_PIPES_CONFIG
stats:
  dsn: "statsd.local:8125"                          # same as env STATS_DSN
  prefix: "kandalf"                                 # same as env STATS_PREFIX
worker:
  cycleTimeout: "2s"                                # same as env WORKER_CYCLE_TIMEOUT
  cacheSize: 10                                     # same as env WORKER_CACHE_SIZE
  cacheFlushTimeout: "5s"                           # same as env WORKER_CACHE_FLUSH_TIMEOUT
  storageReadTimeout: "10s"                         # same as env WORKER_STORAGE_READ_TIMEOUT
  storageMaxErrors: 10                              # same as env WORKER_STORAGE_MAX_ERRORS

You can find a sample config file in assets/config.yml.

Pipes configuration

The rules defining which messages should be sent to which Kafka topics are defined in the Kafka Pipes Config file and are called "pipes". Each pipe has the following structure:

- kafkaTopic: "loyalty"                                # name of the topic in Kafka where message will be sent
  rabbitExchangeName: "customers"                      # name of the exchange in RabbitMQ
  rabbitTransientExchange: false                       # determines if the exchange should be declared as durable or transient
  rabbitRoutingKey: "badge.received"                   # routing key for exchange
  rabbitQueueName: "kandalf-customers-badge.received"  # the name of RabbitMQ queue to read messages from
  rabbitDurableQueue: true                             # determines if the queue should be declared as durable
  rabbitAutoDeleteQueue: false                         # determines if the queue should be declared as auto-delete

You can find a sample Kafka Pipes Config file in assets/pipes.yml.
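Since the Kafka Pipes Config is a YAML list, several pipes can be declared in the same file. A minimal sketch with two hypothetical pipes (the "orders" topic, exchange, and queue names are illustrative):

- kafkaTopic: "orders"
  rabbitExchangeName: "orders"
  rabbitTransientExchange: false
  rabbitRoutingKey: "order.created"
  rabbitQueueName: "kandalf-orders-order.created"
  rabbitDurableQueue: true
  rabbitAutoDeleteQueue: false
- kafkaTopic: "loyalty"
  rabbitExchangeName: "customers"
  rabbitTransientExchange: false
  rabbitRoutingKey: "badge.received"
  rabbitQueueName: "kandalf-customers-badge.received"
  rabbitDurableQueue: true
  rabbitAutoDeleteQueue: false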

How to build a binary on a local machine

  1. Make sure that you have go and the make utility installed on your machine;
  2. Run make to install all required dependencies and build the binaries;
  3. Binaries for Linux and macOS will be placed in ./dist/ (see the sketch after this list).
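A minimal sketch of those steps, assuming the upstream repository path hellofresh/kandalf (the clone URL is an assumption):

# fetch the sources and build the binaries
git clone https://github.com/hellofresh/kandalf.git
cd kandalf
make

# binaries for Linux and macOS end up in ./dist/
ls ./dist/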

How to run service in a docker environment

For testing and development you can use the docker-compose file, which contains all the required services.

For production you can use the minimalistic prebuilt hellofresh/kandalf image as a base image, or mount a volume with the pipes configuration to /etc/kandalf/conf/; see the sketch below.
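A minimal sketch of running the prebuilt image with the configuration mounted from the host (the local directory name and detach flag are illustrative; the container path comes from the defaults above):

# ./conf is assumed to contain config.yml and pipes.yml
docker run -d \
  -v $(pwd)/conf:/etc/kandalf/conf \
  hellofresh/kandalf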


TODO

  • Handle dependencies in a proper way (gvt, glide or something similar)
  • Tests


Contributing

To start contributing, please check CONTRIBUTING.

