A simple implementation of a Kafka cluster (3 brokers) with 1 producer and 1 consumer, deployed with Docker.
This project includes:
- 1 Kafka cluster of 3 brokers
- 1 ZooKeeper ensemble of 3 nodes
- 1 Confluent Control Center or Kowl (Kafka UI)
- 1 Python example of producer
- 1 Python example of consumer
We won't use Kafka Schema Registry and Avro in this implementation. To keep your data formats consistent, I recommend using Python dataclasses directly if both your producers and consumers run Python.
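As a sketch of that approach (assuming plain JSON on the wire; the `Sentence` class and its fields are illustrative), the same dataclass can be serialized on the producer side and rebuilt on the consumer side:

```python
import json
from dataclasses import asdict, dataclass


@dataclass
class Sentence:
    # Illustrative schema shared by producer and consumer
    text: str
    author: str


def serialize(sentence: Sentence) -> bytes:
    # Encode the dataclass as UTF-8 JSON, ready for producer.send(...)
    return json.dumps(asdict(sentence)).encode("utf-8")


def deserialize(payload: bytes) -> Sentence:
    # Rebuild the dataclass from a raw Kafka message value;
    # an unexpected field fails loudly here instead of propagating silently
    return Sentence(**json.loads(payload.decode("utf-8")))
```

Because both sides import the same class, a schema drift surfaces as an immediate `TypeError` at deserialization rather than as corrupted downstream data.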
First, you need to set up your cluster so it is accessible from other containers on your machine. Get the IP of your Docker network interface (or your external IPv4 address):

```
ip a | grep docker0
```

In the command output, the IP to choose is the `docker0` address, e.g. `192.168.254.1`.
Set it by replacing `YOUR_IP` inside `.env`, `producer/.env` and `consumer/.env`.
Interestingly, I was not able to make Kafka listen on `0.0.0.0`, as it triggers an error. That's why we need to specify the exact IP of our machine.
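For context, this is roughly what that setting ends up looking like in the compose file (illustrative excerpt; the listener name and port may differ in this repo):

```yaml
# Illustrative broker environment excerpt: Kafka must advertise
# a concrete, reachable address, hence the exact docker0 IP.
environment:
  KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://YOUR_IP:9092
```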
You have two choices: starting the open-source Kafka version or the Enterprise one. The latter gives you all the features of the Confluent Control Center, especially the metrics.
To start the open-source cluster:

```
docker-compose up -d
```
To start the Enterprise cluster:

```
docker-compose -f docker-compose.enterprise.yml up -d
```
Let's run our producer. It pushes a random sentence to the `sentences` topic every 3 seconds:

```
docker-compose -f ./producer/docker-compose.yml up
```
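The producer's core loop boils down to something like this (a minimal sketch with the `kafka-python` package; the bootstrap address and the word list are illustrative, only the `sentences` topic name and the 3-second interval come from this project):

```python
import json
import random
import time

# Illustrative vocabulary for generating random sentences
WORDS = ["kafka", "docker", "stream", "broker", "message"]


def make_sentence() -> bytes:
    # Build a random 3-word sentence and encode it as JSON bytes
    sentence = " ".join(random.choices(WORDS, k=3))
    return json.dumps({"sentence": sentence}).encode("utf-8")


def run(bootstrap: str = "YOUR_IP:9092") -> None:
    # Imported here so the helper above works without kafka-python installed
    from kafka import KafkaProducer  # pip install kafka-python

    producer = KafkaProducer(bootstrap_servers=bootstrap)
    while True:
        producer.send("sentences", make_sentence())
        producer.flush()
        time.sleep(3)  # one sentence every 3 seconds


if __name__ == "__main__":
    run()
```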
Let's run our consumer. It prints messages as they are received:

```
docker-compose -f ./consumer/docker-compose.yml up
```
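On the consumer side, the equivalent sketch with `kafka-python` looks like this (the bootstrap address, message shape and group id are illustrative assumptions; only the `sentences` topic comes from this project):

```python
import json


def decode_message(value: bytes) -> str:
    # Extract the sentence text from a raw message value,
    # assuming the JSON shape produced above
    return json.loads(value.decode("utf-8"))["sentence"]


def run(bootstrap: str = "YOUR_IP:9092") -> None:
    # Imported here so decode_message works without kafka-python installed
    from kafka import KafkaConsumer  # pip install kafka-python

    consumer = KafkaConsumer(
        "sentences",
        bootstrap_servers=bootstrap,
        auto_offset_reset="earliest",
        group_id="sentence-printer",  # hypothetical group id
    )
    for record in consumer:  # blocks, yielding messages as they arrive
        print(decode_message(record.value))


if __name__ == "__main__":
    run()
```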
Connect to `localhost:8080` to visualize your cluster activity (Kowl).

Connect to `localhost:9021` to visualize the Enterprise cluster activity (Confluent Control Center).
- Provide a fully working example of an SSL configuration
- Provide an example on how to delete a topic