This project demonstrates a production-ready Spring Boot application integrated with a multi-node Apache Kafka cluster running in KRaft mode (without ZooKeeper). The application showcases three different Kafka consumer patterns: manual acknowledgment, Kafka transactions, and database transactions with PostgreSQL.
This is a comprehensive demonstration of Kafka integration patterns in a microservices architecture. The project consists of multiple components working together to illustrate different approaches to message processing and transaction management.
```
├── docker/
│   └── docker-compose.yml    # Multi-node Kafka cluster + PostgreSQL + Spring Boot services
│
├── springboot/
│   ├── core/                 # Shared models and configurations
│   ├── producer/             # REST API and Kafka producer
│   ├── consumer-1/           # Manual acknowledgment consumer
│   ├── consumer-2/           # Kafka transaction consumer
│   ├── consumer-3/           # Database transaction consumer
│   └── pom.xml               # Parent POM
├── test/
│   ├── send-task.sh          # Script to send a valid task
│   ├── send-error-task.sh    # Script to test error handling
│   └── kafka-test-client.mjs # Direct Kafka producer script
└── README.md
```
- Docker and Docker Compose
- Java 21 (optional, for running locally)
- Maven (optional, for running locally)
- Node.js (optional, for running the Kafka test client)
- curl (for testing REST endpoints)
The project is organized as a multi-module Maven project with the following components:
Shared library containing:
- Common models and DTOs (`TaskEvent`, `TaskEntity`; a sketch follows this list)
- Kafka consumer configuration
- Global exception handling
- Transformers and utilities
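For orientation, here is a minimal, hypothetical sketch of the shared `TaskEvent` DTO. The field names are inferred from the sample `/api/task` response shown later; the actual core module may define this differently:

```java
// Hypothetical sketch of the shared TaskEvent DTO. Field names are
// inferred from the sample /api/task response; the real core module
// may define this differently (e.g., as a class with validation).
public record TaskEvent(
        String taskId,          // random UUID generated per request
        String taskName,
        String taskDescription) {
}
```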
A Spring Boot REST API service that:
- Exposes a `/api/task` endpoint to receive task creation requests
- Publishes task events to the Kafka `tasks` topic (a sketch follows this list)
- Runs on port `8080`
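A minimal sketch of what such a producer endpoint could look like with Spring Kafka; the class name, response shape, and wiring are assumptions, not the project's actual code:

```java
import org.springframework.http.ResponseEntity;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

// Illustrative sketch only; the real producer module may differ.
@RestController
@RequestMapping("/api/task")
public class TaskController {

    private final KafkaTemplate<String, TaskEvent> kafkaTemplate;

    public TaskController(KafkaTemplate<String, TaskEvent> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }

    @PostMapping
    public ResponseEntity<TaskEvent> createTask(@RequestBody TaskEvent event) {
        // Publish to the "tasks" topic, keyed by taskId so that events for
        // the same task land on the same partition.
        kafkaTemplate.send("tasks", event.taskId(), event);
        return ResponseEntity.ok(event);
    }
}
```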
Three independent consumer services, each demonstrating a different message processing pattern:
- Group ID: `manual`
- Pattern: Manual acknowledgment with `Acknowledgment.acknowledge()` (see the sketch after this list)
- Use Case: Fine-grained control over when messages are marked as consumed
- Behavior: Simulates random processing failures to demonstrate retry behavior
- Port: `8081`
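A minimal sketch of the manual-acknowledgment pattern, assuming a listener container configured with `AckMode.MANUAL`; the listener below is illustrative, not the project's actual code:

```java
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.support.Acknowledgment;
import org.springframework.stereotype.Component;

// Illustrative sketch; assumes the container factory is configured with
// ContainerProperties.AckMode.MANUAL (or MANUAL_IMMEDIATE).
@Component
public class ManualAckListener {

    @KafkaListener(topics = "tasks", groupId = "manual")
    public void onTask(TaskEvent event, Acknowledgment ack) {
        process(event);    // if this throws, the offset is not committed
        ack.acknowledge(); // mark the message as consumed only on success
    }

    private void process(TaskEvent event) {
        // business logic goes here
    }
}
```

Until `acknowledge()` is called, the offset is not committed, so a failed message can be reprocessed.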
- Group ID: `ktx`
- Pattern: Kafka transactions using `@Transactional(value = "kafkaTransactionManager")` (see the sketch after this list)
- Use Case: Atomic message processing with Kafka's transactional guarantees
- Behavior: Simulates random processing failures to demonstrate transaction rollback
- Port: `8082`
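A minimal sketch of the Kafka-transaction pattern, assuming a `KafkaTransactionManager` bean named `kafkaTransactionManager` backed by a transactional producer factory; illustrative only:

```java
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;
import org.springframework.transaction.annotation.Transactional;

// Illustrative sketch; assumes a KafkaTransactionManager bean named
// "kafkaTransactionManager" (which requires a transactional producer factory).
@Component
public class KafkaTxListener {

    @KafkaListener(topics = "tasks", groupId = "ktx")
    @Transactional(value = "kafkaTransactionManager")
    public void onTask(TaskEvent event) {
        // An exception here aborts the Kafka transaction, rolling back
        // any records produced (and offsets sent) within it.
        process(event);
    }

    private void process(TaskEvent event) {
        // business logic goes here
    }
}
```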
- Group ID: `database`
- Pattern: Database transactions using `@Transactional(transactionManager = "postgresTransactionManager")` (see the sketch after this list)
- Use Case: Atomic message processing with database persistence
- Behavior: Saves tasks to PostgreSQL and simulates random failures to demonstrate transaction rollback
- Port: `8083`
- Database: PostgreSQL
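A minimal sketch of the database-transaction pattern, assuming a JPA transaction manager bean named `postgresTransactionManager` and a hypothetical `TaskRepository`; illustrative only:

```java
import org.springframework.data.repository.CrudRepository;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;
import org.springframework.transaction.annotation.Transactional;

// Hypothetical repository for the shared TaskEntity (ID type assumed String).
interface TaskRepository extends CrudRepository<TaskEntity, String> {}

// Illustrative sketch; the real consumer-3 module may differ.
@Component
public class DatabaseTxListener {

    private final TaskRepository repository;

    public DatabaseTxListener(TaskRepository repository) {
        this.repository = repository;
    }

    @KafkaListener(topics = "tasks", groupId = "database")
    @Transactional(transactionManager = "postgresTransactionManager")
    public void onTask(TaskEvent event) {
        // If processing throws after save(), the database transaction rolls
        // back and the row is never persisted; the message can be retried.
        repository.save(toEntity(event));
    }

    private TaskEntity toEntity(TaskEvent event) {
        // mapping logic goes here (e.g., via a transformer from the core module)
        return new TaskEntity();
    }
}
```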
A highly available, multi-node Kafka cluster configured with:
- 3 Controller Nodes: Manage cluster metadata (KRaft mode - no ZooKeeper required)
- 3 Broker Nodes: Handle message storage and replication
External listener addresses:
- Broker-1: `localhost:9092`
- Broker-2: `localhost:9094`
- Broker-3: `localhost:9096`

Cluster configuration:
- KRaft Mode: Uses Kafka's built-in consensus protocol (no ZooKeeper dependency)
- Replication Factor: 3 (each partition is replicated across 3 brokers)
- Min In-Sync Replicas: 2 (at least 2 replicas must acknowledge writes)
- Transaction State Log Replication: 3 (for transactional guarantees)
This configuration ensures:
- The cluster can tolerate 1 broker failure without data loss
- Strong durability guarantees for messages
- High availability for both reads and writes
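As an illustration of these settings from the application side, a topic matching the guarantees above could be declared with Spring Kafka's `TopicBuilder`; the partition count is an assumption, while the replication values mirror the cluster configuration:

```java
import org.apache.kafka.clients.admin.NewTopic;
import org.apache.kafka.common.config.TopicConfig;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.config.TopicBuilder;

// Illustrative sketch; the partition count (3) is an assumption.
@Configuration
public class TopicConfiguration {

    @Bean
    public NewTopic tasksTopic() {
        return TopicBuilder.name("tasks")
                .partitions(3)                                        // assumed
                .replicas(3)                                          // replication factor 3
                .config(TopicConfig.MIN_IN_SYNC_REPLICAS_CONFIG, "2") // min in-sync replicas 2
                .build();
    }
}
```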
- Used by Consumer-3 for persistent storage
- Port: `5432`
- Database: `default`
- User: `apiuser`
You can run the application using Docker or by running the components locally.
This is the simplest way to get started. Docker Compose will start the entire Kafka cluster (3 controllers + 3 brokers), PostgreSQL, and all Spring Boot services.
- Start All Services: Navigate to the `docker` directory and run:

  ```bash
  cd docker
  docker-compose up -d
  ```
This will start:
- 3 Kafka controller nodes
- 3 Kafka broker nodes (ports 9092, 9094, 9096)
- PostgreSQL database (port 5432)
- Producer service (port 8080)
- Consumer-1 service (port 8081)
- Consumer-2 service (port 8082)
- Consumer-3 service (port 8083)
- Verify Services: Check that all services are running:

  ```bash
  docker-compose ps
  ```
Use the `send-task.sh` script to send a POST request to the producer's `/api/task` endpoint. This script generates a random UUID for the `taskId`.

Run the script:

```bash
cd test
./send-task.sh
```
Expected Response:

```json
{
  "status": 200,
  "message": "Task creation initiated",
  "data": {
    "taskId": "550e8400-e29b-41d4-a716-446655440000",
    "taskName": "Sample Task",
    "taskDescription": "This is a sample task"
  }
}
```
Each consumer processes the same message independently. You can monitor their logs to see different processing patterns:
Controller or Broker:

```bash
docker logs -f controller-1
docker logs -f controller-2
docker logs -f controller-3
docker logs -f broker-1
docker logs -f broker-2
docker logs -f broker-3
```
Producer or Consumer:

```bash
docker logs -f producer
docker logs -f consumer-1
docker logs -f consumer-2
docker logs -f consumer-3
```
Note: Each consumer simulates random processing failures (50% chance) to demonstrate retry and transaction rollback behavior. You may see error logs followed by retry attempts.
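The failure simulation itself can be as simple as the following sketch; the consumers' actual implementation may differ:

```java
import java.util.concurrent.ThreadLocalRandom;

// Illustrative sketch of a 50% random failure, as described in the note above.
public final class FailureSimulator {

    private FailureSimulator() {
    }

    public static void maybeFail() {
        if (ThreadLocalRandom.current().nextBoolean()) {
            // Throwing triggers a retry (Consumer-1) or a transaction
            // rollback (Consumer-2 and Consumer-3).
            throw new IllegalStateException("Simulated processing failure");
        }
    }
}
```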
For Consumer-3, you can verify that tasks are saved to PostgreSQL:
```bash
docker exec -it postgres psql -U apiuser -d default -c "SELECT * FROM tasks;"
```
The `send-error-task.sh` script sends a payload with an incorrect format to test validation and error handling:

```bash
cd test
./send-error-task.sh
```
You can use the `kafka-test-client.mjs` script to produce messages directly to the `tasks` Kafka topic, bypassing the REST API.

First, install the dependencies:

```bash
cd test
npm install
```

Then, run the script:

```bash
node kafka-test-client.mjs
```
To stop all Docker services:

```bash
cd docker
docker-compose down
```

To stop and remove all data (including Kafka topics and PostgreSQL data):

```bash
cd docker
docker-compose down -v
```
All services (Producer and Consumers) expose Spring Boot Actuator endpoints for monitoring and management.
View effective Kafka consumer/producer configurations and runtime container state. This custom actuator endpoint provides detailed information about Kafka settings and listener containers.
```bash
# Producer - View producer and admin configurations
curl http://localhost:8080/monitor/kafkaProps

# Consumer-1 - View consumer configurations and listener state
curl http://localhost:8081/monitor/kafkaProps

# Consumer-2 - View consumer configurations with transaction manager
curl http://localhost:8082/monitor/kafkaProps

# Consumer-3 - View consumer configurations with database transaction manager
curl http://localhost:8083/monitor/kafkaProps
```
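For reference, a custom actuator endpoint like this can be declared as sketched below; this assumes `management.endpoints.web.base-path=/monitor` so that the endpoint is served under `/monitor/kafkaProps`, and the body is illustrative:

```java
import java.util.Map;

import org.springframework.boot.actuate.endpoint.annotation.Endpoint;
import org.springframework.boot.actuate.endpoint.annotation.ReadOperation;
import org.springframework.stereotype.Component;

// Illustrative sketch; assumes management.endpoints.web.base-path=/monitor
// (and that the endpoint is exposed via management.endpoints.web.exposure.include).
@Component
@Endpoint(id = "kafkaProps")
public class KafkaPropsEndpoint {

    @ReadOperation
    public Map<String, Object> kafkaProps() {
        // Gather the effective consumer/producer settings here, e.g. from
        // KafkaProperties and the running listener containers.
        return Map.of("status", "effective Kafka configuration goes here");
    }
}
```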
List all running Kafka brokers:
```bash
docker exec -it broker-1 bash
/opt/kafka/bin/kafka-broker-api-versions.sh --bootstrap-server broker-1:19092
```
List all topics:
```bash
/opt/kafka/bin/kafka-topics.sh --bootstrap-server broker-1:19092 --list
```
Describe the `tasks` topic:

```bash
/opt/kafka/bin/kafka-topics.sh --bootstrap-server broker-1:19092 --describe --topic tasks
```
View consumer group status:
```bash
/opt/kafka/bin/kafka-consumer-groups.sh --bootstrap-server broker-1:19092 --list
/opt/kafka/bin/kafka-consumer-groups.sh --bootstrap-server broker-1:19092 --describe --group manual
/opt/kafka/bin/kafka-consumer-groups.sh --bootstrap-server broker-1:19092 --describe --group ktx
/opt/kafka/bin/kafka-consumer-groups.sh --bootstrap-server broker-1:19092 --describe --group database
```
This project is licensed under the Apache License 2.0 - see the LICENSE file for details.
- Arata - arata.stack@gmail.com