A modern, enterprise-grade jackpot betting system built with Spring Boot, Apache Kafka, and a microservices architecture. The system lets users place bets that contribute to jackpot pools and evaluates those bets for jackpot rewards, with configurable contribution and reward mechanisms built on the Strategy pattern and SOLID principles.
┌─────────────────┐     ┌─────────────────┐     ┌─────────────────┐
│    REST API     │     │   Kafka Queue   │     │    Database     │
│   Controllers   │────▶│  jackpot-bets   │────▶│    H2 Memory    │
└─────────────────┘     └─────────────────┘     └─────────────────┘
         │                       │                       │
         ▼                       ▼                       ▼
┌─────────────────┐     ┌─────────────────┐     ┌─────────────────┐
│   Structured    │     │     Message     │     │     Jackpot     │
│   JSON DTOs     │     │   Processing    │     │   Strategies    │
└─────────────────┘     └─────────────────┘     └─────────────────┘
         │                       │                       │
         ▼                       ▼                       ▼
┌─────────────────┐     ┌─────────────────┐     ┌─────────────────┐
│   Validation    │     │   Idempotency   │     │   Concurrency   │
│ Error Handling  │     │   Protection    │     │   Protection    │
└─────────────────┘     └─────────────────┘     └─────────────────┘
- Bet Publishing: REST API with structured JSON responses
- Async Processing: Kafka-based message processing with idempotency protection
- Smart Contributions: Strategy pattern for configurable contribution types:
  - Fixed: Constant percentage contribution
  - Variable: Dynamic percentage that decreases as the pool grows
- Intelligent Rewards: Strategy pattern for configurable reward chances:
  - Fixed: Constant win percentage
  - Variable: Dynamic percentage that increases with pool size
- Complete Audit Trail: Full tracking of bets, contributions, and rewards
- Enterprise Security: Request validation, global exception handling, concurrency protection
- Idempotency: Prevents duplicate processing from Kafka message retries
- Swagger Documentation: Interactive API documentation
- Unit Testing: Comprehensive test coverage with mocking
- Production Ready: Complete containerization with Docker Compose
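The two strategy families above can be sketched as plain interfaces. This is an illustrative sketch only: the interface, class names, and decay formula are assumptions, not the project's actual identifiers.

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

// Hypothetical contribution strategy interface (names are illustrative).
interface ContributionStrategy {
    BigDecimal contribution(BigDecimal betAmount, BigDecimal currentPool);
}

// Fixed: a constant percentage of every bet goes to the pool.
class FixedContribution implements ContributionStrategy {
    private final BigDecimal percentage; // e.g. 0.10 for 10%

    FixedContribution(BigDecimal percentage) { this.percentage = percentage; }

    @Override
    public BigDecimal contribution(BigDecimal betAmount, BigDecimal currentPool) {
        return betAmount.multiply(percentage).setScale(2, RoundingMode.HALF_UP);
    }
}

// Variable: the percentage decays linearly as the pool grows,
// never dropping below a configured floor.
class VariableContribution implements ContributionStrategy {
    private final BigDecimal startPct;
    private final BigDecimal floorPct;
    private final BigDecimal decayPerThousand; // percentage removed per $1000 in the pool

    VariableContribution(BigDecimal startPct, BigDecimal floorPct, BigDecimal decayPerThousand) {
        this.startPct = startPct;
        this.floorPct = floorPct;
        this.decayPerThousand = decayPerThousand;
    }

    @Override
    public BigDecimal contribution(BigDecimal betAmount, BigDecimal currentPool) {
        BigDecimal thousands = currentPool.divide(BigDecimal.valueOf(1000), 4, RoundingMode.HALF_UP);
        BigDecimal pct = startPct.subtract(decayPerThousand.multiply(thousands)).max(floorPct);
        return betAmount.multiply(pct).setScale(2, RoundingMode.HALF_UP);
    }
}
```

New contribution types can then be added as new implementations without touching existing code, which is the Open/Closed property the design aims for.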
- Docker & Docker Compose (Required for containerized deployment)
- Java 21 (Required for local development)
- Gradle 8.x (Included via wrapper)
# Clone the repository
git clone <repository-url>
cd homework
# Start all services with Docker Compose
docker-compose up -d --build
# Verify all services are running
docker-compose ps
# Check application logs
docker-compose logs homework-app -f
# Start only Kafka and Zookeeper in Docker
docker-compose up kafka zookeeper kafka-ui -d
# Verify Kafka is running
docker-compose logs kafka
# Build the application
./gradlew clean build
# Run with local profile
./gradlew bootRun --args='--spring.profiles.active=local'
# Or run the JAR directly
java -jar build/libs/homework-0.0.1-SNAPSHOT.jar --spring.profiles.active=local
# Test application health
curl http://localhost:8080/actuator/health
# Access Swagger UI
open http://localhost:8080/swagger-ui/index.html
# Access H2 Console
open http://localhost:8080/h2-console
Create src/main/resources/application-local.properties:
# Local development configuration
spring.application.name=homework-local
# H2 Database for local development
spring.datasource.url=jdbc:h2:mem:localdb;DB_CLOSE_DELAY=-1;DB_CLOSE_ON_EXIT=FALSE
spring.datasource.username=sa
spring.datasource.password=password
spring.h2.console.enabled=true
spring.jpa.hibernate.ddl-auto=create-drop
spring.jpa.show-sql=true
# Kafka configuration (connects to Docker Kafka)
spring.kafka.bootstrap-servers=localhost:9092
spring.kafka.consumer.group-id=jackpot-group-local
spring.kafka.consumer.auto-offset-reset=earliest
# Logging for development
logging.level.com.mytest.homework=DEBUG
logging.level.org.springframework.kafka=INFO
Publishes a new bet to the Kafka messaging system for processing.
Request:
{
"betId": "123e4567-e89b-12d3-a456-426614174000",
"userId": "987fcdeb-51a2-43d7-8f9e-123456789abc",
"jackpotId": "456b7890-c123-4def-9012-345678901234",
"amount": 50.00
}
Response:
{
"betId": "123e4567-e89b-12d3-a456-426614174000",
"success": true,
"message": "Bet published successfully"
}
Evaluates if a specific bet wins the jackpot reward.
Response (Winner):
{
"betId": "123e4567-e89b-12d3-a456-426614174000",
"jackpotId": "456b7890-c123-4def-9012-345678901234",
"won": true,
"rewardAmount": 1200.50,
"message": "Congratulations! You won the jackpot!"
}
Response (No Win):
{
"betId": "123e4567-e89b-12d3-a456-426614174000",
"jackpotId": "456b7890-c123-4def-9012-345678901234",
"won": false,
"rewardAmount": null,
"message": "No jackpot reward for this bet"
}
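How a variable reward chance might drive the win/no-win outcome above can be sketched as follows. The class name and growth formula are assumptions for illustration; the real evaluator sits behind the project's reward strategy interface.

```java
import java.math.BigDecimal;
import java.util.Random;

// Illustrative sketch: win chance grows with the pool size until a cap,
// and a single uniform draw decides whether the bet wins.
class VariableRewardChance {
    private final double basePct;           // starting win chance, e.g. 15.0
    private final double growthPerThousand; // extra chance per $1000 in the pool
    private final double capPct;            // upper bound, e.g. 100.0
    private final Random random;

    VariableRewardChance(double basePct, double growthPerThousand, double capPct, Random random) {
        this.basePct = basePct;
        this.growthPerThousand = growthPerThousand;
        this.capPct = capPct;
        this.random = random;
    }

    // Current win chance as a percentage, capped at capPct.
    double chancePercent(BigDecimal pool) {
        double thousands = pool.doubleValue() / 1000.0;
        return Math.min(basePct + growthPerThousand * thousands, capPct);
    }

    // One random draw against the current chance.
    boolean evaluate(BigDecimal pool) {
        return random.nextDouble() * 100.0 < chancePercent(pool);
    }
}
```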
# 1. Publish a bet
curl -X POST http://localhost:8080/bets/publish \
-H "Content-Type: application/json" \
-d '{
"betId": "123e4567-e89b-12d3-a456-426614174000",
"userId": "987fcdeb-51a2-43d7-8f9e-123456789abc",
"jackpotId": "456b7890-c123-4def-9012-345678901234",
"amount": 100.00
}'
# 2. Wait 2-3 seconds for Kafka processing
# 3. Evaluate jackpot
curl -X POST http://localhost:8080/jackpots/evaluate/123e4567-e89b-12d3-a456-426614174000
Import the provided Postman collection: postman/Jackpot_Betting_System.postman_collection.json
- Pre-configured requests with automatic UUID generation
- Built-in test assertions
- Variable management for bet tracking
# Run all tests
./gradlew test
# Run specific test class
./gradlew test --tests BetServiceImplTest
# Run tests with coverage
./gradlew test jacocoTestReport
The system automatically creates two jackpots on startup:

- Default Jackpot (456b7890-c123-4def-9012-345678901234)
  - Win Chance: 15% (about 1 in 6-7 bets)
  - Initial Pool: $1000
  - Contribution: 10% of bet amount
- High-Frequency Jackpot (789e1234-f567-89ab-cdef-012345678901)
  - Win Chance: 25% (1 in 4 bets)
  - Initial Pool: $500
  - Contribution: 15% of bet amount
- bets - All placed bets with timestamps
- jackpots - Jackpot configurations and current pools
- jackpot_contributions - Audit trail of all contributions
- jackpot_rewards - Records of all jackpot wins
URL: http://localhost:8080/h2-console
Connection Settings:
- JDBC URL: jdbc:h2:mem:testdb
- Username: sa
- Password: password
Useful Queries:
-- Check available jackpots
SELECT jackpot_id, reward_chance_percentage, current_pool_value FROM JACKPOTS;
-- View all bets
SELECT * FROM BETS ORDER BY CREATED_AT DESC;
-- Check contributions
SELECT bet_id, contribution_amount, current_jackpot_amount FROM JACKPOT_CONTRIBUTIONS ORDER BY CREATED_AT DESC;
-- See jackpot wins
SELECT * FROM JACKPOT_REWARDS ORDER BY CREATED_AT DESC;
Docker profile: used when running in Docker containers
- Connects to the Kafka container
- Uses the Docker network for service discovery

Local profile: used for local development
- Connects to Kafka running in Docker
- Uses a local H2 database
- Enhanced logging for debugging

Test profile: used for unit testing
- Disables Kafka autoconfiguration
- Uses mocked beans
- In-memory database
# Start infrastructure services
docker-compose up kafka zookeeper kafka-ui -d
# Verify Kafka is ready
curl http://localhost:8081 # Kafka UI
# Run application locally
./gradlew bootRun --args='--spring.profiles.active=local'
# Stop local application (Ctrl+C)
# Make your code changes
# Rebuild and restart
./gradlew clean build
./gradlew bootRun --args='--spring.profiles.active=local'
# Run unit tests
./gradlew test
# Test API endpoints
curl -X POST http://localhost:8080/bets/publish -H "Content-Type: application/json" -d '...'
# Check database
# Visit: http://localhost:8080/h2-console
# Build and deploy with Docker
docker-compose down
docker-compose up -d --build
# Check logs
docker-compose logs homework-app -f
- Contribution Strategies: Fixed vs Variable contribution calculations
- Reward Strategies: Fixed vs Variable reward chance calculations
- Factory Pattern: Automatic strategy selection based on jackpot configuration
- Extensible: Add new strategies without modifying existing code
- Single Responsibility: Each class has one clear purpose
- Open/Closed: Extensible via strategy pattern without modification
- Liskov Substitution: All strategies implement common interfaces
- Interface Segregation: Separate interfaces for different concerns
- Dependency Inversion: Dependencies on abstractions, not concrete classes
- Idempotency: Prevents duplicate processing from Kafka retries
- Concurrency Protection: Optimistic locking prevents race conditions
- Structured APIs: JSON DTOs instead of plain text responses
- Global Exception Handling: Centralized error management
- Request Validation: Input validation with meaningful error messages
- Comprehensive Logging: Detailed logging for monitoring and debugging
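The concurrency protection above can be illustrated with a toy in-memory analogue of optimistic locking. In the real service this would typically be a JPA @Version column, where an UPDATE only succeeds if the row still carries the version the transaction read; the class below is a sketch of that idea, not the project's actual code.

```java
import java.math.BigDecimal;
import java.util.concurrent.atomic.AtomicReference;

// Toy analogue of optimistic locking for jackpot pool updates:
// an update is rejected if the snapshot it was based on is stale.
class JackpotPool {
    record Snapshot(long version, BigDecimal pool) {}

    private final AtomicReference<Snapshot> state =
            new AtomicReference<>(new Snapshot(0, BigDecimal.ZERO));

    Snapshot read() { return state.get(); }

    // Returns false when another writer committed since `snapshot` was read.
    boolean tryUpdate(Snapshot snapshot, BigDecimal newPool) {
        return state.compareAndSet(snapshot, new Snapshot(snapshot.version() + 1, newPool));
    }

    // Typical caller pattern: re-read and retry on conflict.
    BigDecimal addWithRetry(BigDecimal amount) {
        while (true) {
            Snapshot s = read();
            BigDecimal updated = s.pool().add(amount);
            if (tryUpdate(s, updated)) return updated;
        }
    }
}
```

Two concurrent bet contributions then cannot silently overwrite each other: the loser of the race simply re-reads the pool and retries.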
# Health check
curl http://localhost:8080/actuator/health
# Application info
curl http://localhost:8080/actuator/info
# Kafka UI (when running with Docker)
open http://localhost:8081
# List topics
docker exec -it kafka kafka-topics --bootstrap-server localhost:9092 --list
# Check consumer groups
docker exec -it kafka kafka-consumer-groups --bootstrap-server localhost:9092 --list
- H2 Console: http://localhost:8080/h2-console
- Real-time SQL Logging: Enabled in local profile
- Connection Pool Metrics: Available via actuator endpoints
# Docker deployment
docker-compose logs homework-app -f
# Local development
tail -f logs/application.log
# Filter specific components
docker-compose logs homework-app | grep "Processing bet"
Client → BetController → BetMessageProducer → Kafka Topic "jackpot-bets"
Kafka Consumer → BetService → Jackpot Validation → Strategy Selection →
Database Transaction (Bet + Contribution + Pool Update)

Client → JackpotController → JackpotService → RewardEvaluator →
Strategy Calculation → Random Evaluation → Database Update (if winner)
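The idempotency guard on the consumer side of the bet-placement flow can be sketched like this. The class name is hypothetical, and the real service would back the check with a unique constraint or existence query on the bets table rather than an in-memory set.

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of the idempotency check that runs before any database work:
// a redelivered betId is detected and the duplicate message is skipped.
class BetProcessor {
    private final Set<String> processedBetIds = ConcurrentHashMap.newKeySet();

    // Returns false for a betId that was already processed.
    boolean process(String betId) {
        if (!processedBetIds.add(betId)) {
            return false; // Kafka redelivery: already handled, do nothing
        }
        // ... persist bet, compute contribution via strategy, update pool ...
        return true;
    }
}
```

This is what makes Kafka's at-least-once delivery safe here: retries of the same message leave the pool and audit trail unchanged.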
- Import as Gradle project
- Enable annotation processing for Lombok and MapStruct
- Set Java 21 as project SDK
- Configure a Spring Boot run configuration with the local profile
# Use Spring Boot DevTools for hot reload
./gradlew bootRun --continuous
# Or use IDE's built-in Spring Boot support
# Run HomeworkApplication.main() with VM options: -Dspring.profiles.active=local
# View generated DDL (when ddl-auto=create)
# Check logs for schema creation statements
# Export schema for documentation
# H2 Console β Script β Show β Copy to clipboard
# Monitor Kafka messages in real-time
docker exec -it kafka kafka-console-consumer \
--bootstrap-server localhost:9092 \
--topic jackpot-bets \
--from-beginning
- Start infrastructure: docker-compose up kafka zookeeper -d
- Run the application: ./gradlew bootRun --args='--spring.profiles.active=local'
- Execute tests: use the Postman collection or cURL commands
- Verify results: check the H2 console and application logs
# Use the Postman collection with iterations
# Collection Runner β Set iterations: 50 β Set delay: 100ms
# Monitor performance
# Check response times in Postman
# Monitor application logs for performance metrics
# Check Java version
java --version
# Check if ports are available
lsof -i :8080
lsof -i :9092
# Start infrastructure first
docker-compose up kafka zookeeper -d
# Verify Kafka is running
docker-compose logs kafka
# Check network connectivity
telnet localhost 9092
# Check H2 console access
curl http://localhost:8080/h2-console
# Verify application.properties configuration
cat src/main/resources/application-local.properties
- Use local profile for faster startup
- Disable unnecessary logging in production
- Configure connection pools for high load
- Monitor JVM metrics via actuator
- Scale Kafka partitions for higher throughput
- Configure proper retention policies
- Set up monitoring with Micrometer/Prometheus
- Implement circuit breakers for external dependencies
- Fork the repository
- Create a feature branch: git checkout -b feature/amazing-feature
- Set up local development environment as described above
- Make changes following SOLID principles and existing patterns
- Add unit tests for new functionality
- Test locally with the provided test scenarios
- Commit changes: git commit -m 'Add amazing feature'
- Push to the branch: git push origin feature/amazing-feature
- Open Pull Request with detailed description
- Follow SOLID principles and design patterns
- Add unit tests for new features
- Use strategy pattern for extensible algorithms
- Implement proper error handling
- Document API changes in Swagger annotations
This project is licensed under the Apache License 2.0 - see the LICENSE file for details.
Built with ❤️ using Spring Boot, Apache Kafka, the Strategy pattern, and enterprise architecture best practices.