
Spring Boot Kafka Multi-Node Cluster Demo

This project demonstrates a production-ready Spring Boot application integrated with a multi-node Apache Kafka cluster running in KRaft mode (without Zookeeper). The application showcases three different Kafka consumer patterns: manual acknowledgment, Kafka transactions, and database transactions with PostgreSQL.

About the Project

This is a comprehensive demonstration of Kafka integration patterns in a microservices architecture. The project consists of multiple components working together to illustrate different approaches to message processing and transaction management.

Project Structure

├── docker/
│   └── docker-compose.yml        # Multi-node Kafka cluster + PostgreSQL + Spring Boot services
├── springboot/
│   ├── core/                     # Shared models and configurations
│   ├── producer/                 # REST API and Kafka producer
│   ├── consumer-1/               # Manual acknowledgment consumer
│   ├── consumer-2/               # Kafka transaction consumer
│   ├── consumer-3/               # Database transaction consumer
│   └── pom.xml                   # Parent POM
├── test/
│   ├── send-task.sh              # Script to send a valid task
│   ├── send-error-task.sh        # Script to test error handling
│   └── kafka-test-client.mjs     # Direct Kafka producer script
└── README.md

Prerequisites

  • Docker and Docker Compose
  • Java 21 (optional, for running locally)
  • Maven (optional, for running locally)
  • Node.js (optional, for running the Kafka test client)
  • curl (for testing REST endpoints)

Architecture Overview

The project is organized as a multi-module Maven project with the following components:

1. Core Module (springboot/core)

Shared library containing:

  • Common models and DTOs (TaskEvent, TaskEntity)
  • Kafka consumer configuration
  • Global exception handling
  • Transformers and utilities
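
As a point of reference, the shared event model might look like the record below. The field names are taken from the sample response shown under Usage; the actual definition lives in springboot/core.

// Hypothetical sketch of the shared event model; the real definition lives in springboot/core
public record TaskEvent(String taskId, String taskName, String taskDescription) {}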

2. Producer Module (springboot/producer)

A Spring Boot REST API service that:

  • Exposes a /api/task endpoint to receive task creation requests
  • Publishes task events to the Kafka tasks topic
  • Runs on port 8080
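
In outline, the producer can be as small as the sketch below. The class name and response shape are illustrative; the real service wraps its reply in the status/message envelope shown under Usage.

import org.springframework.http.ResponseEntity;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class TaskController {

    private final KafkaTemplate<String, TaskEvent> kafkaTemplate;

    public TaskController(KafkaTemplate<String, TaskEvent> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }

    // Accept a task request and publish it to the "tasks" topic, keyed by taskId
    @PostMapping("/api/task")
    public ResponseEntity<TaskEvent> createTask(@RequestBody TaskEvent event) {
        kafkaTemplate.send("tasks", event.taskId(), event);
        return ResponseEntity.ok(event);
    }
}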

3. Consumer Modules

Three independent consumer services, each demonstrating a different message processing pattern:

Consumer-1: Manual Acknowledgment Pattern (springboot/consumer-1)

  • Group ID: manual
  • Pattern: Manual acknowledgment with Acknowledgment.acknowledge()
  • Use Case: Fine-grained control over when messages are marked as consumed
  • Behavior: Simulates random processing failures to demonstrate retry behavior
  • Port: 8081
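
A minimal sketch of this pattern (class and method names are illustrative, and the listener container factory is assumed to be configured with AckMode.MANUAL):

import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.support.Acknowledgment;
import org.springframework.stereotype.Component;

@Component
public class ManualAckListener {

    @KafkaListener(topics = "tasks", groupId = "manual")
    public void onTask(TaskEvent event, Acknowledgment ack) {
        process(event);     // may throw; the offset is then not committed
        ack.acknowledge();  // mark the record consumed only after success
    }

    private void process(TaskEvent event) {
        // business logic; a failure here leaves the record to be redelivered
    }
}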

Consumer-2: Kafka Transaction Pattern (springboot/consumer-2)

  • Group ID: ktx
  • Pattern: Kafka transactions using @Transactional(value = "kafkaTransactionManager")
  • Use Case: Atomic message processing with Kafka's transactional guarantees
  • Behavior: Simulates random processing failures to demonstrate transaction rollback
  • Port: 8082
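
The shape of such a listener is sketched below, assuming a producer factory configured with a transactional.id; the tasks.processed topic is hypothetical, purely to show a send participating in the transaction.

import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Component;
import org.springframework.transaction.annotation.Transactional;

@Component
public class KafkaTxListener {

    private final KafkaTemplate<String, TaskEvent> kafkaTemplate;

    public KafkaTxListener(KafkaTemplate<String, TaskEvent> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }

    // Any send inside this method joins the same Kafka transaction; an exception
    // aborts it, so the outgoing record and the offset commit roll back together
    @Transactional("kafkaTransactionManager")
    @KafkaListener(topics = "tasks", groupId = "ktx")
    public void onTask(TaskEvent event) {
        kafkaTemplate.send("tasks.processed", event.taskId(), event); // hypothetical downstream topic
    }
}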

Consumer-3: Database Transaction Pattern (springboot/consumer-3)

  • Group ID: database
  • Pattern: Database transactions using @Transactional(transactionManager = "postgresTransactionManager")
  • Use Case: Atomic message processing with database persistence
  • Behavior: Saves tasks to PostgreSQL and simulates random failures to demonstrate transaction rollback
  • Port: 8083
  • Database: PostgreSQL
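
In sketch form, with TaskRepository as a hypothetical Spring Data repository over the shared TaskEntity and TaskEntity.from as a hypothetical mapping helper:

import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;
import org.springframework.transaction.annotation.Transactional;

interface TaskRepository extends JpaRepository<TaskEntity, String> {} // hypothetical repository

@Component
public class DatabaseTxListener {

    private final TaskRepository repository;

    public DatabaseTxListener(TaskRepository repository) {
        this.repository = repository;
    }

    // save() and the simulated failure run in one DB transaction: if maybeFail()
    // throws, the insert rolls back and the record is redelivered
    @Transactional(transactionManager = "postgresTransactionManager")
    @KafkaListener(topics = "tasks", groupId = "database")
    public void onTask(TaskEvent event) {
        repository.save(TaskEntity.from(event)); // hypothetical mapping helper
        maybeFail();
    }

    private void maybeFail() {
        if (Math.random() < 0.5) {
            throw new IllegalStateException("simulated processing failure");
        }
    }
}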

4. Kafka Cluster (KRaft Mode)

A highly available, multi-node Kafka cluster configured with:

  • 3 Controller Nodes: Manage cluster metadata via the KRaft quorum
  • 3 Broker Nodes: Handle message storage and replication
    • Broker-1: localhost:9092
    • Broker-2: localhost:9094
    • Broker-3: localhost:9096
  • KRaft Mode: Uses Kafka's built-in consensus protocol (no Zookeeper dependency)
  • Replication Factor: 3 (each partition is replicated across 3 brokers)
  • Min In-Sync Replicas: 2 (at least 2 replicas must acknowledge writes)
  • Transaction State Log Replication: 3 (for transactional guarantees)

This configuration ensures:

  • The cluster can tolerate 1 broker failure without data loss
  • Strong durability guarantees for messages
  • High availability for both reads and writes
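
These guarantees are easy to spot-check from code. The following is a small, self-contained sketch (not part of this project) that uses Kafka's AdminClient to print the replica and in-sync-replica counts for the tasks topic:

import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.TopicDescription;

public class ClusterCheck {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // List all three brokers so the client still connects if one is down
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG,
                "localhost:9092,localhost:9094,localhost:9096");
        try (AdminClient admin = AdminClient.create(props)) {
            TopicDescription tasks = admin.describeTopics(List.of("tasks"))
                    .allTopicNames().get().get("tasks");
            // With replication factor 3 and min ISR 2, the ISR may legitimately
            // shrink to 2 while a broker is down without blocking writes
            tasks.partitions().forEach(p ->
                    System.out.printf("partition %d: replicas=%d isr=%d%n",
                            p.partition(), p.replicas().size(), p.isr().size()));
        }
    }
}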

5. PostgreSQL Database

  • Used by Consumer-3 for persistent storage
  • Port: 5432
  • Database: default
  • User: apiuser

Getting Started

You can run the application using Docker or by running the components locally.

Running with Docker

This is the simplest way to get started. Docker Compose will start the entire Kafka cluster (3 controllers + 3 brokers), PostgreSQL, and all Spring Boot services.

  1. Start All Services: Navigate to the docker directory and run:

    cd docker
    docker-compose up -d

    This will start:

    • 3 Kafka controller nodes
    • 3 Kafka broker nodes (ports 9092, 9094, 9096)
    • PostgreSQL database (port 5432)
    • Producer service (port 8080)
    • Consumer-1 service (port 8081)
    • Consumer-2 service (port 8082)
    • Consumer-3 service (port 8083)
  2. Verify Services: Check that all services are running:

    docker-compose ps

Usage

1. Send a Task via REST API

Use the send-task.sh script to send a POST request to the producer's /api/task endpoint. This script generates a random UUID for the taskId.

Run the script:

cd test
./send-task.sh

Expected Response:

{
  "status": 200,
  "message": "Task creation initiated",
  "data": {
    "taskId": "550e8400-e29b-41d4-a716-446655440000",
    "taskName": "Sample Task",
    "taskDescription": "This is a sample task"
  }
}

2. Monitor Logs

Each consumer processes the same message independently. You can monitor their logs to see different processing patterns:

Controller or Broker:

docker logs -f controller-1
docker logs -f controller-2
docker logs -f controller-3
docker logs -f broker-1
docker logs -f broker-2
docker logs -f broker-3

Producer / Consumers:

docker logs -f producer
docker logs -f consumer-1
docker logs -f consumer-2
docker logs -f consumer-3

Note: Each consumer simulates random processing failures (50% chance) to demonstrate retry and transaction rollback behavior. You may see error logs followed by retry attempts.
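
The redelivery you see in the logs is driven by the listener container's error handler. In Spring Kafka this is commonly a DefaultErrorHandler with a back-off; the sketch below uses illustrative values (the project's actual settings live in the core module's consumer configuration):

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.listener.DefaultErrorHandler;
import org.springframework.util.backoff.FixedBackOff;

@Configuration
public class ErrorHandlingConfig {

    // Redeliver a failed record up to 3 more times, 1 second apart, then give up.
    // Interval and attempt counts here are illustrative, not the project's values
    @Bean
    public DefaultErrorHandler errorHandler() {
        return new DefaultErrorHandler(new FixedBackOff(1000L, 3L));
    }
}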

3. Verify Database Persistence

For Consumer-3, you can verify that tasks are saved to PostgreSQL:

docker exec -it postgres psql -U apiuser -d default -c "SELECT * FROM tasks;"

4. Test Error Handling

The send-error-task.sh script sends a payload with an incorrect format to test validation and error handling:

cd test
./send-error-task.sh

5. Send Messages Directly to Kafka (Alternative)

You can use the kafka-test-client.mjs script to produce messages directly to the tasks Kafka topic, bypassing the REST API.

First, install the dependencies:

cd test
npm install

Then, run the script:

node kafka-test-client.mjs

Stopping the Services

To stop all Docker services:

cd docker
docker-compose down

To stop and remove all data (including Kafka topics and PostgreSQL data):

cd docker
docker-compose down -v

Troubleshooting

Monitor Services with Actuator Endpoints

All services (Producer and Consumers) expose Spring Boot Actuator endpoints for monitoring and management.

Kafka Properties Endpoint (Custom)

View effective Kafka consumer/producer configurations and runtime container state. This custom actuator endpoint provides detailed information about Kafka settings and listener containers.

# Producer - View producer and admin configurations
curl http://localhost:8080/monitor/kafkaProps

# Consumer-1 - View consumer configurations and listener state
curl http://localhost:8081/monitor/kafkaProps

# Consumer-2 - View consumer configurations with transaction manager
curl http://localhost:8082/monitor/kafkaProps

# Consumer-3 - View consumer configurations with database transaction manager
curl http://localhost:8083/monitor/kafkaProps
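
Since kafkaProps is a custom endpoint, its exact output is project-specific. The general shape of such an endpoint in Spring Boot is sketched below, assuming management.endpoints.web.base-path is set to /monitor:

import java.util.Map;
import org.springframework.boot.actuate.endpoint.annotation.Endpoint;
import org.springframework.boot.actuate.endpoint.annotation.ReadOperation;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.stereotype.Component;

@Component
@Endpoint(id = "kafkaProps")
public class KafkaPropsEndpoint {

    private final ConsumerFactory<?, ?> consumerFactory;

    public KafkaPropsEndpoint(ConsumerFactory<?, ?> consumerFactory) {
        this.consumerFactory = consumerFactory;
    }

    // Served at GET {base-path}/kafkaProps; returns the effective consumer config
    @ReadOperation
    public Map<String, Object> kafkaProps() {
        return consumerFactory.getConfigurationProperties();
    }
}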

Checking Kafka Cluster Health

List all running Kafka brokers:

docker exec -it broker-1 bash
/opt/kafka/bin/kafka-broker-api-versions.sh --bootstrap-server broker-1:19092

Viewing Kafka Topics

List all topics:

/opt/kafka/bin/kafka-topics.sh --bootstrap-server broker-1:19092 --list

Describe the tasks topic:

/opt/kafka/bin/kafka-topics.sh --bootstrap-server broker-1:19092 --describe --topic tasks

Checking Consumer Groups

View consumer group status:

/opt/kafka/bin/kafka-consumer-groups.sh --bootstrap-server broker-1:19092 --list
/opt/kafka/bin/kafka-consumer-groups.sh --bootstrap-server broker-1:19092 --describe --group manual
/opt/kafka/bin/kafka-consumer-groups.sh --bootstrap-server broker-1:19092 --describe --group ktx
/opt/kafka/bin/kafka-consumer-groups.sh --bootstrap-server broker-1:19092 --describe --group database

License

This project is licensed under the Apache License 2.0 - see the LICENSE file for details.
