MaximL1/homework

🎰 Jackpot Betting System

Spring Boot · Java · Kafka · Docker · H2 Database · MapStruct

A modern, enterprise-grade jackpot betting system built with Spring Boot, Apache Kafka, and a microservices-style architecture. Users place bets that contribute to jackpot pools; the system then evaluates jackpot rewards, with configurable contribution and reward mechanisms implemented via the Strategy pattern and SOLID principles.

πŸ—οΈ Architecture Overview

┌─────────────────┐    ┌─────────────────┐    ┌─────────────────┐
│   REST API      │    │   Kafka Queue   │    │   Database      │
│   Controllers   │───▶│  jackpot-bets   │───▶│   H2 Memory     │
└─────────────────┘    └─────────────────┘    └─────────────────┘
         │                       │                       │
         ▼                       ▼                       ▼
┌─────────────────┐    ┌─────────────────┐    ┌─────────────────┐
│   Structured    │    │   Message       │    │   Jackpot       │
│   JSON DTOs     │    │   Processing    │    │   Strategies    │
└─────────────────┘    └─────────────────┘    └─────────────────┘
         │                       │                       │
         ▼                       ▼                       ▼
┌─────────────────┐    ┌─────────────────┐    ┌─────────────────┐
│   Validation    │    │   Idempotency   │    │   Concurrency   │
│   Error Handling│    │   Protection    │    │   Protection    │
└─────────────────┘    └─────────────────┘    └─────────────────┘

✨ Features

  • 🎯 Bet Publishing: REST API with structured JSON responses
  • 🔄 Async Processing: Kafka-based message processing with idempotency protection
  • 💰 Smart Contributions: Strategy pattern for configurable contribution types:
    • Fixed: Constant percentage contribution
    • Variable: Dynamic percentage that decreases as the pool grows
  • 🎲 Intelligent Rewards: Strategy pattern for configurable reward chances:
    • Fixed: Constant win percentage
    • Variable: Dynamic percentage that increases with pool size
  • 📊 Complete Audit Trail: Full tracking of bets, contributions, and rewards
  • 🛡️ Enterprise Security: Request validation, global exception handling, concurrency protection
  • 🔐 Idempotency: Prevents duplicate processing from Kafka message retries
  • 📖 Swagger Documentation: Interactive API documentation
  • 🧪 Unit Testing: Comprehensive test coverage with mocking
  • 🐳 Production Ready: Complete containerization with Docker Compose
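
The two contribution types can be sketched in plain Java. The variable-decay formula below (base rate scaled down as the pool outgrows its initial value) is an illustrative assumption, not the project's actual implementation:

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

// Sketch of the two contribution types. The variable formula is an
// illustrative assumption; only "decreases as the pool grows" is specified.
public class ContributionSketch {

    // Fixed: a constant percentage of the bet amount.
    static BigDecimal fixedContribution(BigDecimal bet, BigDecimal percent) {
        return bet.multiply(percent)
                  .divide(BigDecimal.valueOf(100), 2, RoundingMode.HALF_UP);
    }

    // Variable: the effective percentage shrinks as the pool grows
    // beyond its initial value (never increases above the base rate).
    static BigDecimal variableContribution(BigDecimal bet, BigDecimal basePercent,
                                           double pool, double initialPool) {
        double scale = initialPool / Math.max(pool, initialPool);
        return fixedContribution(bet, basePercent.multiply(BigDecimal.valueOf(scale)));
    }

    public static void main(String[] args) {
        // A $100 bet at a fixed 10% adds $10.00 to the pool.
        System.out.println(fixedContribution(new BigDecimal("100"), BigDecimal.TEN));
        // At a $2000 pool (initial $1000), the 10% base rate halves to 5%.
        System.out.println(variableContribution(new BigDecimal("100"), BigDecimal.TEN, 2000, 1000));
    }
}
```

In the real system both calculations live behind a common strategy interface, so new contribution types can be added without modifying existing code.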

🚀 Quick Start

Prerequisites

  • Docker & Docker Compose (Required for containerized deployment)
  • Java 21 (Required for local development)
  • Gradle 8.x (Included via wrapper)

Option 1: Docker Deployment (Recommended)

# Clone the repository
git clone <repository-url>
cd homework

# Start all services with Docker Compose
docker-compose up -d --build

# Verify all services are running
docker-compose ps

# Check application logs
docker-compose logs homework-app -f

Option 2: Local Development Setup

Step 1: Start Infrastructure Services

# Start only Kafka and Zookeeper in Docker
docker-compose up kafka zookeeper kafka-ui -d

# Verify Kafka is running
docker-compose logs kafka

Step 2: Build and Run Application Locally

# Build the application
./gradlew clean build

# Run with local profile
./gradlew bootRun --args='--spring.profiles.active=local'

# Or run the JAR directly
java -jar build/libs/homework-0.0.1-SNAPSHOT.jar --spring.profiles.active=local

Step 3: Verify Local Setup

# Test application health
curl http://localhost:8080/actuator/health

# Access Swagger UI
open http://localhost:8080/swagger-ui/index.html

# Access H2 Console
open http://localhost:8080/h2-console

Local Development Configuration

Create src/main/resources/application-local.properties:

# Local development configuration
spring.application.name=homework-local

# H2 Database for local development
spring.datasource.url=jdbc:h2:mem:localdb;DB_CLOSE_DELAY=-1;DB_CLOSE_ON_EXIT=FALSE
spring.datasource.username=sa
spring.datasource.password=password
spring.h2.console.enabled=true
spring.jpa.hibernate.ddl-auto=create-drop
spring.jpa.show-sql=true

# Kafka configuration (connects to Docker Kafka)
spring.kafka.bootstrap-servers=localhost:9092
spring.kafka.consumer.group-id=jackpot-group-local
spring.kafka.consumer.auto-offset-reset=earliest

# Logging for development
logging.level.com.mytest.homework=DEBUG
logging.level.org.springframework.kafka=INFO

📋 API Endpoints

🎯 Bet Management

POST /bets/publish

Publishes a new bet to the Kafka messaging system for processing.

Request:

{
  "betId": "123e4567-e89b-12d3-a456-426614174000",
  "userId": "987fcdeb-51a2-43d7-8f9e-123456789abc",
  "jackpotId": "456b7890-c123-4def-9012-345678901234",
  "amount": 50.00
}

Response:

{
  "betId": "123e4567-e89b-12d3-a456-426614174000",
  "success": true,
  "message": "Bet published successfully"
}

🎰 Jackpot Management

POST /jackpots/evaluate/{betId}

Evaluates if a specific bet wins the jackpot reward.

Response (Winner):

{
  "betId": "123e4567-e89b-12d3-a456-426614174000",
  "jackpotId": "456b7890-c123-4def-9012-345678901234",
  "won": true,
  "rewardAmount": 1200.50,
  "message": "Congratulations! You won the jackpot!"
}

Response (No Win):

{
  "betId": "123e4567-e89b-12d3-a456-426614174000",
  "jackpotId": "456b7890-c123-4def-9012-345678901234",
  "won": false,
  "rewardAmount": null,
  "message": "No jackpot reward for this bet"
}
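
Under the hood, the win/no-win decision is a random draw against the jackpot's configured chance percentage. A minimal sketch of that idea (method names are illustrative, not the project's actual API):

```java
import java.util.Random;

// Illustrative reward check: draw a value in [0, 100) and compare it
// to the jackpot's configured win-chance percentage.
public class RewardDrawSketch {

    static boolean isWinner(Random rng, double chancePercent) {
        return rng.nextDouble() * 100.0 < chancePercent;
    }

    public static void main(String[] args) {
        Random rng = new Random();
        int wins = 0;
        for (int i = 0; i < 10_000; i++) {
            if (isWinner(rng, 15.0)) wins++;   // 15% chance, as in the default jackpot
        }
        // Expect roughly 1,500 wins (about 1 in 6-7 bets).
        System.out.println("wins: " + wins);
    }
}
```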

🧪 Testing

Quick Test with cURL

# 1. Publish a bet
curl -X POST http://localhost:8080/bets/publish \
  -H "Content-Type: application/json" \
  -d '{
    "betId": "123e4567-e89b-12d3-a456-426614174000",
    "userId": "987fcdeb-51a2-43d7-8f9e-123456789abc",
    "jackpotId": "456b7890-c123-4def-9012-345678901234",
    "amount": 100.00
  }'

# 2. Wait 2-3 seconds for Kafka processing

# 3. Evaluate jackpot
curl -X POST http://localhost:8080/jackpots/evaluate/123e4567-e89b-12d3-a456-426614174000

Postman Collection

Import the provided Postman collection: postman/Jackpot_Betting_System.postman_collection.json

  • Pre-configured requests with automatic UUID generation
  • Built-in test assertions
  • Variable management for bet tracking

Unit Testing

# Run all tests
./gradlew test

# Run specific test class
./gradlew test --tests BetServiceImplTest

# Run tests with coverage
./gradlew test jacocoTestReport

🗄️ Database Schema

Pre-configured Jackpots for Testing

The system automatically creates two jackpots on startup:

  1. Default Jackpot - 456b7890-c123-4def-9012-345678901234

    • Win Chance: 15% (1 in 6-7 bets)
    • Initial Pool: $1000
    • Contribution: 10% of bet amount
  2. High-Frequency Jackpot - 789e1234-f567-89ab-cdef-012345678901

    • Win Chance: 25% (1 in 4 bets)
    • Initial Pool: $500
    • Contribution: 15% of bet amount

Database Tables

Core Tables

  • bets - All placed bets with timestamps
  • jackpots - Jackpot configurations and current pools
  • jackpot_contributions - Audit trail of all contributions
  • jackpot_rewards - Records of all jackpot wins

H2 Console Access

URL: http://localhost:8080/h2-console

Connection Settings:

  • JDBC URL: jdbc:h2:mem:testdb
  • Username: sa
  • Password: password

Useful Queries:

-- Check available jackpots
SELECT jackpot_id, reward_chance_percentage, current_pool_value FROM JACKPOTS;

-- View all bets
SELECT * FROM BETS ORDER BY CREATED_AT DESC;

-- Check contributions
SELECT bet_id, contribution_amount, current_jackpot_amount FROM JACKPOT_CONTRIBUTIONS ORDER BY CREATED_AT DESC;

-- See jackpot wins
SELECT * FROM JACKPOT_REWARDS ORDER BY CREATED_AT DESC;

⚙️ Configuration Profiles

Docker Profile (default)

Used when running in Docker containers

  • Connects to Kafka container
  • Uses Docker network for service discovery

Local Profile

Used for local development

  • Connects to Kafka running in Docker
  • Uses local H2 database
  • Enhanced logging for debugging

Test Profile

Used for unit testing

  • Disables Kafka autoconfiguration
  • Uses mocked beans
  • In-memory database

🔧 Development Workflow

1. Setting Up Local Development

# Start infrastructure services
docker-compose up kafka zookeeper kafka-ui -d

# Verify Kafka is ready
curl http://localhost:8081 # Kafka UI

# Run application locally
./gradlew bootRun --args='--spring.profiles.active=local'

2. Making Changes

# Stop local application (Ctrl+C)
# Make your code changes
# Rebuild and restart
./gradlew clean build
./gradlew bootRun --args='--spring.profiles.active=local'

3. Testing Changes

# Run unit tests
./gradlew test

# Test API endpoints
curl -X POST http://localhost:8080/bets/publish -H "Content-Type: application/json" -d '...'

# Check database
# Visit: http://localhost:8080/h2-console

4. Docker Deployment

# Build and deploy with Docker
docker-compose down
docker-compose up -d --build

# Check logs
docker-compose logs homework-app -f

πŸ—οΈ Enterprise Architecture Features

Strategy Pattern Implementation

  • Contribution Strategies: Fixed vs Variable contribution calculations
  • Reward Strategies: Fixed vs Variable reward chance calculations
  • Factory Pattern: Automatic strategy selection based on jackpot configuration
  • Extensible: Add new strategies without modifying existing code
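
A minimal sketch of factory-based strategy selection (the class and enum names here are illustrative, not taken from the project's source):

```java
import java.math.BigDecimal;

// Hypothetical factory that maps a jackpot's configured type to a
// contribution strategy. Names are illustrative only.
public class StrategyFactorySketch {

    enum StrategyType { FIXED, VARIABLE }

    interface ContributionStrategy {
        BigDecimal calculate(BigDecimal betAmount);
    }

    // Adding a new type means adding a branch here and a new strategy
    // implementation, without touching existing strategies (Open/Closed).
    static ContributionStrategy forType(StrategyType type) {
        return switch (type) {
            case FIXED    -> bet -> bet.multiply(new BigDecimal("0.10"));
            case VARIABLE -> bet -> bet.multiply(new BigDecimal("0.05")); // placeholder rate
        };
    }

    public static void main(String[] args) {
        BigDecimal contribution = forType(StrategyType.FIXED).calculate(new BigDecimal("100"));
        System.out.println(contribution); // prints 10.00
    }
}
```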

SOLID Principles Applied

  • Single Responsibility: Each class has one clear purpose
  • Open/Closed: Extensible via strategy pattern without modification
  • Liskov Substitution: All strategies implement common interfaces
  • Interface Segregation: Separate interfaces for different concerns
  • Dependency Inversion: Dependencies on abstractions, not concrete classes

Production-Ready Features

  • Idempotency: Prevents duplicate processing from Kafka retries
  • Concurrency Protection: Optimistic locking prevents race conditions
  • Structured APIs: JSON DTOs instead of plain text responses
  • Global Exception Handling: Centralized error management
  • Request Validation: Input validation with meaningful error messages
  • Comprehensive Logging: Detailed logging for monitoring and debugging
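
The idempotency guarantee amounts to "process each betId at most once". In the real system that check is backed by the database; an in-memory sketch of the same idea:

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// In-memory sketch of idempotent message handling: a redelivered Kafka
// message with an already-seen betId is acknowledged but not reprocessed.
// (The actual system would check the bets table rather than a Set.)
public class IdempotencySketch {

    private final Set<String> processedBetIds = ConcurrentHashMap.newKeySet();

    /** Returns true only the first time a given betId is seen. */
    boolean processOnce(String betId) {
        return processedBetIds.add(betId);
    }

    public static void main(String[] args) {
        IdempotencySketch consumer = new IdempotencySketch();
        System.out.println(consumer.processOnce("bet-1")); // first delivery: true
        System.out.println(consumer.processOnce("bet-1")); // retry: false
    }
}
```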

🔍 Monitoring & Debugging

Application Health

# Health check
curl http://localhost:8080/actuator/health

# Application info
curl http://localhost:8080/actuator/info

Kafka Monitoring

# Kafka UI (when running with Docker)
open http://localhost:8081

# List topics
docker exec -it kafka kafka-topics --bootstrap-server localhost:9092 --list

# Check consumer groups
docker exec -it kafka kafka-consumer-groups --bootstrap-server localhost:9092 --list

Database Monitoring

Application Logs

# Docker deployment
docker-compose logs homework-app -f

# Local development
tail -f logs/application.log

# Filter specific components
docker-compose logs homework-app | grep "Processing bet"

🎯 Business Logic Flow

1. Bet Publishing Flow

Client → BetController → BetMessageProducer → Kafka Topic "jackpot-bets"

2. Bet Processing Flow (Async)

Kafka Consumer → BetService → Jackpot Validation → Strategy Selection →
Database Transaction (Bet + Contribution + Pool Update)

3. Reward Evaluation Flow

Client → JackpotController → JackpotService → RewardEvaluator →
Strategy Calculation → Random Evaluation → Database Update (if winner)

🛠️ Local Development Tips

IDE Setup

  1. Import as Gradle project
  2. Enable annotation processing for Lombok and MapStruct
  3. Set Java 21 as project SDK
  4. Configure Spring Boot run configuration with local profile

Hot Reload Development

# Use Spring Boot DevTools for hot reload
./gradlew bootRun --continuous

# Or use IDE's built-in Spring Boot support
# Run HomeworkApplication.main() with VM options: -Dspring.profiles.active=local

Database Schema Evolution

# View generated DDL (when ddl-auto=create)
# Check logs for schema creation statements

# Export schema for documentation
# H2 Console → Script → Show → Copy to clipboard

Kafka Message Debugging

# Monitor Kafka messages in real-time
docker exec -it kafka kafka-console-consumer \
  --bootstrap-server localhost:9092 \
  --topic jackpot-bets \
  --from-beginning

🧪 Testing Strategies

Integration Testing Flow

  1. Start infrastructure: docker-compose up kafka zookeeper -d
  2. Run application: ./gradlew bootRun --args='--spring.profiles.active=local'
  3. Execute tests: Use Postman collection or cURL commands
  4. Verify results: Check H2 console and application logs

Load Testing

# Use the Postman collection with iterations
# Collection Runner → Set iterations: 50 → Set delay: 100ms

# Monitor performance
# Check response times in Postman
# Monitor application logs for performance metrics

Debugging Common Issues

Issue: Application won't start locally

# Check Java version
java --version

# Check if ports are available
lsof -i :8080
lsof -i :9092

# Start infrastructure first
docker-compose up kafka zookeeper -d

Issue: Kafka connection failures

# Verify Kafka is running
docker-compose logs kafka

# Check network connectivity
telnet localhost 9092

Issue: Database connection problems

# Check H2 console access
curl http://localhost:8080/h2-console

# Verify application.properties configuration
cat src/main/resources/application-local.properties

📈 Performance Optimization

Local Development Optimizations

  • Use local profile for faster startup
  • Disable verbose logging (spring.jpa.show-sql, DEBUG levels) when measuring performance
  • Configure connection pools for high load
  • Monitor JVM metrics via actuator

Production Considerations

  • Scale Kafka partitions for higher throughput
  • Configure proper retention policies
  • Set up monitoring with Micrometer/Prometheus
  • Implement circuit breakers for external dependencies

🤝 Contributing

Development Workflow

  1. Fork the repository
  2. Create feature branch: git checkout -b feature/amazing-feature
  3. Set up local development environment as described above
  4. Make changes following SOLID principles and existing patterns
  5. Add unit tests for new functionality
  6. Test locally with the provided test scenarios
  7. Commit changes: git commit -m 'Add amazing feature'
  8. Push to branch: git push origin feature/amazing-feature
  9. Open Pull Request with detailed description

Code Standards

  • Follow SOLID principles and design patterns
  • Add unit tests for new features
  • Use strategy pattern for extensible algorithms
  • Implement proper error handling
  • Document API changes in Swagger annotations

📝 License

This project is licensed under the Apache License 2.0 - see the LICENSE file for details.


Built with ❤️ using Spring Boot, Apache Kafka, the Strategy pattern, and enterprise architecture best practices
