# Kafka Interceptor: Zipkin


Kafka Consumer and Producer Interceptors that record tracing data.

These interceptors can be added to Kafka connectors via configuration, as well as to other off-the-shelf components such as Kafka REST Proxy, KSQL, and so on.
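For example, a Kafka Connect worker can load the interceptors through its standard `producer.`- and `consumer.`-prefixed client properties. A minimal sketch, assuming the fully qualified class names below (the interceptor package may differ between releases of this project):

```properties
# Kafka Connect worker configuration (sketch).
# The interceptor package name is an assumption; check the artifact you deploy.
producer.interceptor.classes=no.sysco.middleware.kafka.interceptor.zipkin.TracingProducerInterceptor
consumer.interceptor.classes=no.sysco.middleware.kafka.interceptor.zipkin.TracingConsumerInterceptor
# Interceptor settings are passed through with the same prefixes:
producer.zipkin.sender.type=HTTP
producer.zipkin.http.endpoint=http://localhost:9411/api/v2/spans
```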

## Installation

### Producer Interceptor

The producer interceptor creates a span when a record is sent. This span only represents the time it took to execute the `onSend` method provided by the interceptor API, not how long it took to actually send the record, or any other latency.

#### Kafka Clients

Add the interceptor to the producer configuration:

```java
producerConfig.put(ProducerConfig.INTERCEPTOR_CLASSES_CONFIG,
    Collections.singletonList(TracingProducerInterceptor.class));
```
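A fuller sketch of a traced producer, assuming a broker on `localhost:9092`, a Zipkin server on `localhost:9411`, and the interceptor package used in the import (adjust it to the artifact you use):

```java
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;
// Assumed package; it may differ between releases of this project.
import no.sysco.middleware.kafka.interceptor.zipkin.TracingProducerInterceptor;

public class TracedProducerExample {
  public static void main(String[] args) {
    Properties producerConfig = new Properties();
    producerConfig.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
    producerConfig.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
    producerConfig.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
    // Register the tracing interceptor.
    producerConfig.put(ProducerConfig.INTERCEPTOR_CLASSES_CONFIG,
        Collections.singletonList(TracingProducerInterceptor.class));
    // Interceptor settings; see the Configuration section below.
    producerConfig.put("zipkin.sender.type", "HTTP");
    producerConfig.put("zipkin.http.endpoint", "http://localhost:9411/api/v2/spans");
    producerConfig.put("zipkin.local.service.name", "example-producer");

    try (KafkaProducer<String, String> producer = new KafkaProducer<>(producerConfig)) {
      // Each send is intercepted and reported to Zipkin as a span.
      producer.send(new ProducerRecord<>("example-topic", "key", "value"));
    }
  }
}
```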

### Consumer Interceptor

The consumer interceptor creates spans when records are consumed. Each span only represents the time it took to execute the `onConsume` method provided by the interceptor API, not how long it took to commit, or any other latency.

#### Kafka Clients

Add the interceptor to the consumer configuration:

```java
consumerConfig.put(ConsumerConfig.INTERCEPTOR_CLASSES_CONFIG,
    Collections.singletonList(TracingConsumerInterceptor.class));
```
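A fuller sketch of a traced consumer, under the same assumptions as the producer example above:

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;
// Assumed package; it may differ between releases of this project.
import no.sysco.middleware.kafka.interceptor.zipkin.TracingConsumerInterceptor;

public class TracedConsumerExample {
  public static void main(String[] args) {
    Properties consumerConfig = new Properties();
    consumerConfig.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
    consumerConfig.put(ConsumerConfig.GROUP_ID_CONFIG, "example-group");
    consumerConfig.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
    consumerConfig.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
    // Register the tracing interceptor.
    consumerConfig.put(ConsumerConfig.INTERCEPTOR_CLASSES_CONFIG,
        Collections.singletonList(TracingConsumerInterceptor.class));
    consumerConfig.put("zipkin.sender.type", "HTTP");
    consumerConfig.put("zipkin.http.endpoint", "http://localhost:9411/api/v2/spans");
    consumerConfig.put("zipkin.local.service.name", "example-consumer");

    try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(consumerConfig)) {
      consumer.subscribe(Collections.singletonList("example-topic"));
      // Each poll that returns records is intercepted and reported as spans.
      ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
      records.forEach(record -> System.out.println(record.value()));
    }
  }
}
```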

## Configuration

| Key | Value |
|-----|-------|
| `zipkin.sender.type` | Sender type: `NONE` (default), `KAFKA`, `HTTP`. |
| `zipkin.encoding` | Zipkin encoding: `JSON` (default), `PROTO3`. |
| `zipkin.http.endpoint` | Zipkin endpoint used by the HTTP sender. |
| `zipkin.kafka.bootstrap.servers` | Bootstrap servers list used to send spans. If not present, `bootstrap.servers` (the Kafka client property) is used. |
| `zipkin.local.service.name` | Service name used to tag spans. Default: `kafka-client`. |
| `zipkin.trace.id.128bit.enabled` | Whether 128-bit trace IDs are enabled. Default: `true`. |
| `zipkin.sampler.rate` | Rate at which spans are sampled. Default: `1.0`. |
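For example, to report spans to a Kafka cluster instead of over HTTP, the interceptor could be configured as follows (a sketch; the server list and service name are illustrative):

```properties
zipkin.sender.type=KAFKA
zipkin.encoding=PROTO3
# If omitted, the client's own bootstrap.servers is used instead.
zipkin.kafka.bootstrap.servers=zipkin-kafka:9092
zipkin.local.service.name=orders-service
# Sample 10% of spans.
zipkin.sampler.rate=0.1
```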

## How to test it

Start the environment defined in docker-compose.yml:

```bash
docker-compose up -d
```

Steps to test:

1. Navigate to http://localhost:8080 and log in using `postgres` as server, `postgres` as username, and `example` as password.

2. Create a table `source_table` with an auto-increment `id` and a `name` field.

3. Once the table is created, deploy the source and sink connectors using the Makefile:

   ```bash
   make docker-kafka-connectors
   ```

4. Insert values into the table and check the traces.

5. Create a stream in KSQL:

   ```bash
   ksql http://localhost:8088
   ```

   ```sql
   CREATE STREAM source_stream (id BIGINT, name VARCHAR) WITH (KAFKA_TOPIC='jdbc_source_table', VALUE_FORMAT='JSON');
   ```

6. Check the traces (see the sketch after this list for one way to query them).
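To verify that spans are arriving, open the Zipkin UI or query its HTTP API, assuming the compose file exposes Zipkin on its default port 9411:

```bash
# List the services that have reported spans (Zipkin API v2).
curl -s http://localhost:9411/api/v2/services
# The UI is served at http://localhost:9411
```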
