# Wizard Battle

A real-time multiplayer wizard battle game with Redis-backed multi-instance architecture supporting horizontal scaling and cross-instance matchmaking.
## Features

- Horizontal Scaling: Run multiple server instances simultaneously
- Redis State Management: All game state persisted in Redis
- Cross-Instance Communication: Seamless communication between instances
- Socket-to-Instance Mapping: Track connections across all instances
- Health Monitoring: Comprehensive system monitoring and statistics
- Cross-Instance Matchmaking: Players matched across different server instances
- WebSocket Communication: Real-time game updates
- Game State Persistence: Complete game state stored in Redis
- Graceful Disconnection: Proper cleanup and resource management
## Core Components

- GameSessionGateway: WebSocket handling and game session management
- MatchmakingService: Player matching with Redis-backed queues
- GameStateService: Game state persistence and cross-instance communication
- RedisHealthService: System monitoring and health checks
## Redis Data Structures

- `socket_mappings`: Hash storing socket-to-instance mappings
- `game_states`: Hash storing the game state for each room
- `waiting:level:${level}`: Lists of players waiting for matches, one per level
- `matches`: Hash storing active match information
- `room_events`: Pub/sub channel for cross-instance communication
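The key shapes above can be sketched as a small set of helper functions. This is an illustrative `redisKeys` module, not code from the repo; centralizing key construction avoids typos when several instances read and write the same hashes and lists.

```typescript
// Hypothetical key builders matching the Redis schema above.
const redisKeys = {
  socketMappings: "socket_mappings", // hash: socketId -> instanceId
  gameStates: "game_states",         // hash: roomId -> serialized game state
  matches: "matches",                // hash: matchId -> active match info
  roomEvents: "room_events",         // pub/sub channel name
  // Per-level waiting list, e.g. waiting:level:3
  waiting: (level: number): string => `waiting:level:${level}`,
};

// Example: the queue key for level-3 players.
const key = redisKeys.waiting(3);
console.log(key); // waiting:level:3
```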
## Tech Stack

- Backend: NestJS with TypeScript
- WebSockets: Socket.IO with Redis adapter
- State Management: Redis for persistence and synchronization
- Frontend: Next.js with React
- Build System: Turborepo for monorepo management
## Analytics

PostHog analytics is integrated for tracking user behavior and game metrics.

- Documentation: `docs/posthog/`
- Status: 34 events tracked; 20 of 22 requirements implemented (91%)
## Project Structure

```
wizard-battle/
├── apps/
│   ├── backend/            # NestJS game server
│   ├── frontend/           # Next.js game client
│   └── common/             # Shared types and utilities
├── packages/
│   ├── redis/              # Redis configuration
│   └── typescript-config/  # TypeScript configurations
└── docs/                   # Documentation
```
## CI/CD Secrets

- Navigate to your GitHub repository
- Go to Settings > Secrets and Variables > Actions
- Click "New repository secret"
- Add the following required secrets:
  - `MONGODB_URI`: MongoDB connection string
  - `MONGODB_DB`: MongoDB database name
  - `SERVER_HOST`: Remote server hostname/IP
  - `SERVER_USER`: SSH username for the remote server
  - `SERVER_PORT`: SSH port for the remote server
  - `SERVER_SSH_KEY`: SSH private key for authentication
  - `TARGET_PATH`: Remote server deployment path
  - `TELEGRAM_TOKEN`: Telegram bot token for notifications
  - `TELEGRAM_CHAT_ID`: Telegram chat ID for notifications
  - `MONGO_INITDB_ROOT_USERNAME`: MongoDB root username (default: admin)
  - `MONGO_INITDB_ROOT_PASSWORD`: MongoDB root password
  - `POSTGRES_USER`: PostgreSQL username (default: orbitrium)
  - `POSTGRES_PASSWORD`: PostgreSQL password
  - `POSTGRES_DB`: PostgreSQL database name (default: orbitrium_db)
## Deployment

Trigger a dev server deployment with an empty commit:

```shell
git commit --allow-empty -m "Deploy all to new dev server"
git push origin dev
```

For the production server:

```shell
git commit --allow-empty -m "Deploy all to new prod server"
git push origin main
```

## Prerequisites

- Node.js 18+
- Redis server
- pnpm package manager
## Setup

Install dependencies:

```shell
pnpm install
```

Start Redis:

```shell
# Using Docker
docker run -d -p 6379:6379 redis:latest

# Or using local Redis
redis-server
```

Start the backend:

```shell
cd apps/backend

# Start multiple instances
npm run start:multi

# Or start instances only (keeps them running)
npm run start:instances
```

Start the frontend:

```shell
cd apps/frontend
npm run dev
```

## Testing

Single-instance tests:

```shell
cd apps/backend
npm run test:single-instance
```

Multi-instance tests:

```shell
cd apps/backend
npm run test:multi-instance
```

Manual multi-instance testing:

```shell
# Start instances manually
export APP_PORT=3001 && npm run start:dev &
export APP_PORT=3002 && npm run start:dev &
export APP_PORT=3003 && npm run start:dev &

# Test health endpoints
curl http://localhost:3001/health
curl http://localhost:3002/health
curl http://localhost:3003/health
```

## Environment Variables

- `APP_PORT`: Server port (default: 3030)
- `REDIS_URL`: Redis connection URL (default: `redis://localhost:6379`)
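Resolving these variables with their documented defaults might look like the following sketch (the `resolveConfig` helper is illustrative, not part of the codebase):

```typescript
// Resolve runtime config, falling back to the documented defaults
// (APP_PORT 3030, REDIS_URL redis://localhost:6379) when unset.
// In the real server this would be called as resolveConfig(process.env).
function resolveConfig(env: Record<string, string | undefined>) {
  return {
    appPort: Number(env.APP_PORT ?? "3030"),
    redisUrl: env.REDIS_URL ?? "redis://localhost:6379",
  };
}

// With no overrides, the defaults apply.
const config = resolveConfig({});
console.log(config.appPort); // 3030
```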
## Multi-Instance Support

The system automatically supports multiple instances:
- Each instance gets a unique instance ID
- Socket mappings track which instance each connection belongs to
- Redis pub/sub enables cross-instance communication
- Game state is shared across all instances
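A minimal sketch of the socket-to-instance mapping logic, using an in-memory `Map` as a stand-in for the Redis `socket_mappings` hash (function names and the instance-ID scheme are illustrative, not the actual service API):

```typescript
// Each instance gets a unique ID at startup (illustrative scheme).
const instanceId = `instance-${Math.random().toString(36).slice(2, 10)}`;

// Stand-in for the Redis socket_mappings hash: socketId -> instanceId.
const socketMappings = new Map<string, string>();

// Record which instance owns a connection when a socket connects.
function registerSocket(socketId: string): void {
  socketMappings.set(socketId, instanceId);
}

// Look up the owning instance so events can be routed to it.
function instanceFor(socketId: string): string | undefined {
  return socketMappings.get(socketId);
}

// Graceful disconnect: remove the mapping so no instance
// tries to route messages to a dead connection.
function unregisterSocket(socketId: string): void {
  socketMappings.delete(socketId);
}

registerSocket("socket-abc");
console.log(instanceFor("socket-abc") === instanceId); // true
```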
## Health Monitoring

- `GET /health`: Overall system health
- `GET /health/stats`: Detailed system statistics
- `POST /health/cleanup`: Clean up orphaned data

Example `/health` response:

```json
{
  "redis": true,
  "matchmaking": true,
  "gameStates": true,
  "socketMappings": true,
  "details": {
    "redisConnection": true,
    "matchmakingData": 5,
    "activeGameStates": 3,
    "activeSocketMappings": 10,
    "activeRooms": 3
  }
}
```

## Cross-Instance Matchmaking Flow

- Player A connects to Instance 1
- Player B connects to Instance 2
- Both players join matchmaking queue (stored in Redis)
- MatchmakingService finds compatible players across instances
- Game state created in Redis
- Both players notified via cross-instance events
- Game session starts with players on different instances
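The flow above can be modeled with an in-memory queue standing in for the Redis `waiting:level:${level}` lists. This is a simplified sketch of the matching logic, not the actual MatchmakingService:

```typescript
type WaitingPlayer = { socketId: string; instanceId: string };

// Stand-in for the per-level Redis waiting lists.
const waiting = new Map<number, WaitingPlayer[]>();

// Enqueue a player; returns a matched pair as soon as two players
// of the same level are waiting, otherwise null.
function enqueue(
  level: number,
  player: WaitingPlayer,
): [WaitingPlayer, WaitingPlayer] | null {
  const queue = waiting.get(level) ?? [];
  const opponent = queue.shift();
  if (opponent) {
    waiting.set(level, queue);
    // The two players may live on different instances; the match is
    // instance-agnostic because all state goes through Redis.
    return [opponent, player];
  }
  queue.push(player);
  waiting.set(level, queue);
  return null;
}

// Player A on Instance 1 waits; Player B on Instance 2 completes the match.
console.log(enqueue(5, { socketId: "a", instanceId: "instance-1" })); // null
const match = enqueue(5, { socketId: "b", instanceId: "instance-2" });
```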
Cross-instance events:

- `matchFound`: Notifies players when a match is created
- `playerJoined`: Handles a player joining from a different instance
- `gameMessage`: Broadcasts game messages across instances
- `opponentDisconnected`: Handles player disconnection
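One way to model the pub/sub payloads carried on the `room_events` channel is a discriminated union; the field names here are assumptions for illustration, not the actual wire format:

```typescript
// Assumed payload shapes for the cross-instance events listed above.
type RoomEvent =
  | { type: "matchFound"; matchId: string; players: string[] }
  | { type: "playerJoined"; roomId: string; socketId: string }
  | { type: "gameMessage"; roomId: string; payload: unknown }
  | { type: "opponentDisconnected"; roomId: string; socketId: string };

// Each instance would subscribe to room_events and dispatch by type;
// the switch is exhaustive, so adding an event forces handling it.
function describe(event: RoomEvent): string {
  switch (event.type) {
    case "matchFound":
      return `match ${event.matchId} for ${event.players.join(", ")}`;
    case "playerJoined":
      return `${event.socketId} joined ${event.roomId}`;
    case "gameMessage":
      return `message in ${event.roomId}`;
    case "opponentDisconnected":
      return `${event.socketId} left ${event.roomId}`;
  }
}

console.log(describe({ type: "matchFound", matchId: "m1", players: ["a", "b"] }));
```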
## Docker Compose Example

```yaml
version: '3.8'
services:
  redis:
    image: redis:latest
    ports:
      - "6379:6379"
  backend-1:
    build: ./apps/backend
    environment:
      - APP_PORT=3001
      - REDIS_URL=redis://redis:6379
    ports:
      - "3001:3001"
  backend-2:
    build: ./apps/backend
    environment:
      - APP_PORT=3002
      - REDIS_URL=redis://redis:6379
    ports:
      - "3002:3002"
  backend-3:
    build: ./apps/backend
    environment:
      - APP_PORT=3003
      - REDIS_URL=redis://redis:6379
    ports:
      - "3003:3003"
```

Production recommendations:

- Use sticky sessions for WebSocket connections
- Configure health checks for all instances
- Set up Redis clustering for high availability
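Sticky routing can be sketched as a deterministic hash from a session ID to a backend, so reconnecting clients land on the same instance. This is illustrative only; in practice a load balancer (e.g. nginx with `ip_hash` or cookie-based affinity) handles this:

```typescript
// Deterministic mapping: the same session id always picks the same
// instance, which WebSocket connections need to survive reconnects
// behind a load balancer.
function stickyInstance(sessionId: string, instances: string[]): string {
  let hash = 0;
  for (const ch of sessionId) {
    hash = (hash * 31 + ch.charCodeAt(0)) >>> 0; // simple 32-bit rolling hash
  }
  return instances[hash % instances.length];
}

const backends = ["backend-1:3001", "backend-2:3002", "backend-3:3003"];
const pick = stickyInstance("session-42", backends);
console.log(pick === stickyInstance("session-42", backends)); // true: stable
```

Note that a plain modulo hash reshuffles most sessions when the instance list changes; consistent hashing would reduce that churn.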
## Troubleshooting

- Port Conflicts

  ```shell
  # Check for running processes
  lsof -i :3001 -i :3002 -i :3003

  # Kill existing processes
  pkill -f "nest start"
  ```

- Redis Connection Issues

  ```shell
  # Test Redis connection
  redis-cli ping

  # Check Redis logs
  docker logs redis-container
  ```

- Cross-Instance Communication

  - Verify Redis pub/sub is working
  - Check socket mappings in Redis
  - Monitor room events
Useful debugging commands:

```shell
# Check Redis data
redis-cli hgetall socket_mappings
redis-cli hgetall game_states
redis-cli hgetall matches

# Monitor Redis operations
redis-cli monitor

# Check instance health
curl http://localhost:3001/health
curl http://localhost:3002/health
curl http://localhost:3003/health
```

## Performance

- Use Redis pipelining for batch operations
- Implement connection pooling
- Monitor Redis memory usage
- Consider Redis clustering for high availability
## Scaling

- Start with 3-5 instances for testing
- Monitor memory usage per instance
- Use load balancer for distribution
- Implement auto-scaling based on metrics
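Metric-based auto-scaling can be sketched as a pure decision function. The thresholds below (80% of a 512 MB memory limit, 100-player cap, 3-instance floor) are made-up illustrative values, not tuned recommendations:

```typescript
type Metrics = { avgMemoryMb: number; playersPerInstance: number };

// Decide the desired instance count from current load.
function desiredInstances(current: number, m: Metrics): number {
  const memoryLimitMb = 512; // assumed per-instance limit
  if (m.avgMemoryMb > 0.8 * memoryLimitMb || m.playersPerInstance > 100) {
    return current + 1; // scale out under memory or player pressure
  }
  if (current > 3 && m.playersPerInstance < 25) {
    return current - 1; // scale in, but keep the 3-instance floor
  }
  return current; // steady state
}

console.log(desiredInstances(3, { avgMemoryMb: 480, playersPerInstance: 60 })); // 4
```

Keeping the decision pure makes it trivial to unit-test before wiring it to real metrics.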
## Contributing

- Fork the repository
- Create a feature branch
- Make your changes
- Add tests for new functionality
- Submit a pull request
## License

This project is licensed under the MIT License.