Kafka Interceptor: Zipkin

Kafka Consumer and Producer Interceptors to record tracing data.

These interceptors can be added to Kafka Connectors via configuration, as well as to other off-the-shelf components like Kafka REST Proxy, KSQL, and so on.

Installation

Producer Interceptor

The Producer Interceptor creates a span each time a record is sent. This span only represents the time it took to execute the onSend method provided by the interceptor API, not how long it took to send the actual record, nor any other latency.

Kafka Clients

Add the interceptor to the Producer configuration:

    producerConfig.put(ProducerConfig.INTERCEPTOR_CLASSES_CONFIG, Collections.singletonList(TracingProducerInterceptor.class));
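
For context, here is a fuller sketch of a producer wired with the interceptor. The topic name, bootstrap address, and Zipkin endpoint are placeholder values, and the import for TracingProducerInterceptor comes from this project's artifact (add the import matching its package):

    import java.util.Collections;
    import java.util.Properties;

    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class TracedProducerExample {
      public static void main(String[] args) {
        Properties producerConfig = new Properties();
        producerConfig.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        producerConfig.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        producerConfig.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // Register the tracing interceptor shipped with this project.
        producerConfig.put(ProducerConfig.INTERCEPTOR_CLASSES_CONFIG,
            Collections.singletonList(TracingProducerInterceptor.class));
        // Interceptor settings; see the Configuration table below.
        producerConfig.put("zipkin.sender.type", "HTTP");
        producerConfig.put("zipkin.http.endpoint", "http://localhost:9411/api/v2/spans"); // assumed Zipkin URL
        producerConfig.put("zipkin.local.service.name", "example-producer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(producerConfig)) {
          // onSend runs inside this call, which is where the span is recorded.
          producer.send(new ProducerRecord<>("example-topic", "key", "value"));
        }
      }
    }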

Consumer Interceptor

The Consumer Interceptor creates a span each time records are consumed. This span only represents the time it took to execute the onConsume method provided by the interceptor API, not how long it took to commit offsets, nor any other latency.

Kafka Clients

Add the interceptor to the Consumer configuration:

    consumerConfig.put(ConsumerConfig.INTERCEPTOR_CLASSES_CONFIG, Collections.singletonList(TracingConsumerInterceptor.class));
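
A matching consumer sketch, with the same caveats (placeholder addresses and group id; the TracingConsumerInterceptor import comes from this project's artifact):

    import java.time.Duration;
    import java.util.Collections;
    import java.util.Properties;

    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class TracedConsumerExample {
      public static void main(String[] args) {
        Properties consumerConfig = new Properties();
        consumerConfig.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        consumerConfig.put(ConsumerConfig.GROUP_ID_CONFIG, "example-group");           // placeholder
        consumerConfig.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        consumerConfig.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        consumerConfig.put(ConsumerConfig.INTERCEPTOR_CLASSES_CONFIG,
            Collections.singletonList(TracingConsumerInterceptor.class));
        consumerConfig.put("zipkin.local.service.name", "example-consumer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(consumerConfig)) {
          consumer.subscribe(Collections.singletonList("example-topic"));
          // onConsume runs inside poll(), which is where the span is recorded.
          consumer.poll(Duration.ofSeconds(5))
              .forEach(r -> System.out.printf("%s=%s%n", r.key(), r.value()));
        }
      }
    }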

Configuration

| Key | Value |
|-----|-------|
| zipkin.sender.type | Sender type: NONE (default), KAFKA, HTTP. |
| zipkin.encoding | Zipkin encoding: JSON (default), PROTO3. |
| zipkin.http.endpoint | Zipkin HTTP endpoint for the HTTP sender. |
| zipkin.kafka.bootstrap.servers | Bootstrap servers list used to send spans. If not present, bootstrap.servers (Kafka client property) is used. |
| zipkin.local.service.name | Application service name used to tag spans. Default: kafka-client. |
| zipkin.trace.id.128bit.enabled | Whether 128-bit trace IDs are enabled. Default: true. |
| zipkin.sampler.rate | Rate at which spans are sampled. Default: 1.0. |
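
These keys are read alongside the regular client configuration, so they can be set next to the standard Kafka properties. A sketch with hypothetical values, sending PROTO3-encoded spans through a dedicated Kafka cluster and sampling half of the traffic:

    Properties config = new Properties();
    config.put("zipkin.sender.type", "KAFKA");
    config.put("zipkin.encoding", "PROTO3");
    // If omitted, the client's own bootstrap.servers is used instead.
    config.put("zipkin.kafka.bootstrap.servers", "zipkin-kafka:9092");
    config.put("zipkin.local.service.name", "inventory-service");
    config.put("zipkin.trace.id.128bit.enabled", "true");
    config.put("zipkin.sampler.rate", "0.5");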

How to test it

Start the services defined in docker-compose.yml:

docker-compose up -d

To test how it works with Kafka Connectors and KSQL, the additional Compose files can be started as well:

docker-compose -f docker-compose.yml -f docker-compose-ksql.yml -f docker-compose-connectors.yml up -d

Steps to test:

  1. Create a table "source_table" in the Postgres database at http://localhost:18080.

  2. Once the table is created, deploy the source and sink connectors using the Makefile:

         make source-connector
         make sink-connector

  3. Insert values into the table and check the traces (a JDBC sketch for this step follows the list).

  4. Create a stream in KSQL:

         ksql http://localhost:8088
         CREATE STREAM source_stream (id BIGINT, name VARCHAR) WITH (KAFKA_TOPIC='jdbc_source_table', VALUE_FORMAT='JSON');

  5. Check the traces.
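
For step 3, a minimal JDBC sketch is below (it needs the Postgres JDBC driver on the classpath). The connection URL, credentials, and database name are assumptions about the Compose setup, so check docker-compose.yml for the real values; the column names match the KSQL stream definition above:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    public class InsertRowExample {
      public static void main(String[] args) throws Exception {
        // Assumed connection details; adjust to the Postgres service in docker-compose.yml.
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:postgresql://localhost:5432/postgres", "postgres", "postgres");
             PreparedStatement stmt = conn.prepareStatement(
                 "INSERT INTO source_table (id, name) VALUES (?, ?)")) {
          stmt.setLong(1, 1L);
          stmt.setString(2, "first-record");
          stmt.executeUpdate();
          // The JDBC source connector should pick up this row, producing a
          // traced record on the jdbc_source_table topic.
        }
      }
    }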
