A REST API for managing team capacity planning across sprints, built with Python and FastAPI.
```
workspace/
├── app/
│   ├── __init__.py
│   ├── main.py                   # Main FastAPI application
│   ├── models/
│   │   ├── __init__.py
│   │   └── schemas.py            # Data models (Sprint, TeamMember, etc.)
│   ├── routes/
│   │   ├── __init__.py
│   │   └── sprints.py            # API endpoints
│   └── services/
│       ├── __init__.py
│       ├── capacity_service.py   # Capacity calculation logic
│       └── database.py           # In-memory data storage
├── openapi.yaml                  # OpenAPI specification
└── requirements.txt              # Python dependencies
```
Install dependencies:

```bash
pip install -r requirements.txt
```

Run the application:

```bash
python -m app.main
```

Or using uvicorn directly:

```bash
uvicorn app.main:app --reload
```

The server will start on http://localhost:8000.
FastAPI automatically generates interactive API documentation:
- Swagger UI: http://localhost:8000/docs
- ReDoc: http://localhost:8000/redoc
- OpenAPI JSON: http://localhost:8000/openapi.json
- `GET /v1/sprints` - Get all sprints (with optional date filters)
- `POST /v1/sprints` - Create a new sprint
- `GET /v1/sprints/{sprintId}` - Get a specific sprint
- `PUT /v1/sprints/{sprintId}` - Update a sprint
- `DELETE /v1/sprints/{sprintId}` - Delete a sprint
- `GET /v1/sprints/{sprintId}/capacity` - Get capacity calculation for a sprint
Create a sprint:

```bash
curl -X POST "http://localhost:8000/v1/sprints" \
  -H "Content-Type: application/json" \
  -d '{
    "sprintName": "Sprint 2025-01",
    "sprintDuration": 14,
    "startDate": "2025-01-06",
    "endDate": "2025-01-19",
    "teamMembers": [
      {
        "name": "John Doe",
        "role": "Developer",
        "confidencePercentage": 85.0,
        "vacations": [
          {
            "startDate": "2025-01-10",
            "endDate": "2025-01-12",
            "reason": "Personal leave"
          }
        ]
      }
    ]
  }'
```

Get the capacity for a sprint:

```bash
curl -X GET "http://localhost:8000/v1/sprints/{sprintId}/capacity"
```

The capacity is calculated using the following logic:
- Working Days: Count weekdays between sprint start and end dates, excluding weekends (Sat/Sun)
- Available Days: For each team member, subtract their vacation days from total working days
- Adjusted Capacity: Apply each member's confidence percentage: `availableDays * (confidencePercentage / 100)`
- Total Capacity: Sum all team members' adjusted capacity
The capacity calculation formula is in: app/services/capacity_service.py
To modify the formula, edit the calculate_capacity() function in that file.
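For orientation, here is a minimal sketch of that logic, assuming dict-shaped input matching the JSON example above (the actual function may operate on Pydantic models and clamp vacation ranges to the sprint window):

```python
from datetime import date, timedelta


def working_days(start: date, end: date) -> int:
    """Count weekdays between start and end (inclusive), skipping Sat/Sun."""
    days = 0
    current = start
    while current <= end:
        if current.weekday() < 5:  # Monday=0 ... Friday=4
            days += 1
        current += timedelta(days=1)
    return days


def calculate_capacity(sprint: dict) -> float:
    """Total capacity: each member's available days weighted by confidence."""
    total_days = working_days(
        date.fromisoformat(sprint["startDate"]),
        date.fromisoformat(sprint["endDate"]),
    )
    total = 0.0
    for member in sprint["teamMembers"]:
        # Assumes vacations fall within the sprint; the real implementation
        # may clamp vacation ranges to the sprint window.
        vacation_days = sum(
            working_days(
                date.fromisoformat(v["startDate"]),
                date.fromisoformat(v["endDate"]),
            )
            for v in member.get("vacations", [])
        )
        available = max(total_days - vacation_days, 0)
        total += available * member["confidencePercentage"] / 100
    return total
```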
This application uses simple in-memory storage (app/services/database.py), so data is not preserved across server restarts.
For production, replace app/services/database.py with a real database:
- PostgreSQL (with SQLAlchemy)
- MongoDB (with Motor)
- SQLite (for local development)
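For the PostgreSQL option, a minimal async SQLAlchemy setup might look like this sketch (the connection URL and names are placeholders, and it assumes the asyncpg driver is installed):

```python
from sqlalchemy.ext.asyncio import async_sessionmaker, create_async_engine

# Placeholder URL; requires the asyncpg driver (pip install asyncpg)
DATABASE_URL = "postgresql+asyncpg://user:password@localhost:5432/sprints"

engine = create_async_engine(DATABASE_URL, echo=False)
SessionLocal = async_sessionmaker(engine, expire_on_commit=False)


async def get_session():
    """FastAPI dependency that yields one session per request."""
    async with SessionLocal() as session:
        yield session
```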
This project includes comprehensive automated testing:
```bash
# Run all tests
pytest tests/ -v

# Run specific test suites
pytest tests/unit/ -v        # Unit tests
pytest tests/contract/ -v    # Contract tests
pytest tests/component/ -v   # Component tests
pytest tests/functional/ -v  # Functional tests
pytest tests/resiliency/ -v  # Resiliency tests

# Run performance tests (requires a running server)
python run_tests.py performance
```

- Unit Tests: 28 tests - Capacity calculations, database operations
- Contract Tests: 12 tests - API schema validation
- Component Tests: 22 tests - Endpoint testing with various scenarios
- Functional Tests: 12 tests - End-to-end workflows
- Resiliency Tests: 20+ tests - Error handling, edge cases, concurrent operations
- Performance Tests: Load testing with Locust
Total: 94+ automated tests
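As a flavor of the unit suite, a capacity test might look like this sketch (the import path and the function's exact signature are assumptions):

```python
import pytest

# Import path is an assumption; adjust to the project's actual module layout
from app.services.capacity_service import calculate_capacity


def test_single_member_with_vacation():
    sprint = {
        "startDate": "2025-01-06",
        "endDate": "2025-01-19",
        "teamMembers": [
            {
                "name": "John Doe",
                "confidencePercentage": 85.0,
                "vacations": [
                    {"startDate": "2025-01-10", "endDate": "2025-01-12"}
                ],
            }
        ],
    }
    # 10 weekdays in the sprint, minus 1 weekday of vacation (Jan 10 is a
    # Friday; the 11th and 12th are a weekend), weighted by 85% confidence:
    # 9 * 0.85 = 7.65
    assert calculate_capacity(sprint) == pytest.approx(7.65)
```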
See TESTING.md for a detailed testing guide.
GitHub Actions workflow automatically runs on every push and PR:
- Lint - Code quality checks
- Unit Tests - Fast isolated tests
- Contract Tests - API contract validation
- Component Tests - Endpoint testing
- Functional Tests - E2E workflows
- Resiliency Tests - Error scenarios
- Performance Tests - Load testing (main branch)
- Build - Docker image creation
- Deploy - Ready for deployment
The API includes comprehensive monitoring with structured logging and Prometheus metrics.
- Structured JSON Logging: All logs in JSON format with correlation IDs
- Request Tracing: Automatic X-Request-ID generation and propagation (see the sketch after this list)
- Prometheus Metrics: HTTP requests, business operations, errors
- Health Checks: Multiple endpoints for different use cases
- Grafana Dashboards: Pre-built visualization dashboards
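As a rough illustration, the X-Request-ID handling described above could be implemented with a FastAPI middleware along these lines (a sketch, not the project's actual middleware):

```python
import uuid

from fastapi import FastAPI, Request

app = FastAPI()


@app.middleware("http")
async def request_id_middleware(request: Request, call_next):
    """Reuse the caller's X-Request-ID or generate one, and echo it back."""
    request_id = request.headers.get("X-Request-ID") or str(uuid.uuid4())
    request.state.request_id = request_id  # visible to handlers and log filters
    response = await call_next(request)
    response.headers["X-Request-ID"] = request_id
    return response
```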
- `GET /health` - Basic health status
- `GET /health/detailed` - Detailed system metrics (CPU, memory, disk)
- `GET /health/ready` - Readiness check (for Kubernetes)
- `GET /health/live` - Liveness check (for Kubernetes)
- `GET /metrics` - Prometheus metrics endpoint
HTTP Metrics:
- `http_requests_total` - Total HTTP requests by method, endpoint, status
- `http_request_duration_seconds` - Request duration histogram
- `http_requests_in_progress` - Currently processing requests
Business Metrics:
- `sprints_created_total` - Total sprints created
- `sprints_updated_total` - Total sprints updated
- `sprints_deleted_total` - Total sprints deleted
- `sprint_capacity_calculations_total` - Total capacity calculations
Error Metrics:
- `errors_total` - Total errors by type and endpoint
- `validation_errors_total` - Validation errors by field
System Metrics:
- `active_sprints` - Number of active sprints
- `database_connections` - Active database connections
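With the prometheus_client library, definitions for a few of these metrics might look like the following sketch (the label sets and the usage lines are illustrative, not the project's actual code):

```python
from prometheus_client import Counter, Gauge, Histogram

# Names mirror the metrics listed above; label sets are illustrative
http_requests_total = Counter(
    "http_requests_total",
    "Total HTTP requests",
    ["method", "endpoint", "status"],
)
http_request_duration_seconds = Histogram(
    "http_request_duration_seconds",
    "HTTP request duration in seconds",
    ["method", "endpoint"],
)
active_sprints = Gauge("active_sprints", "Number of active sprints")

# Example: record one successful sprint creation
http_requests_total.labels(method="POST", endpoint="/v1/sprints", status="201").inc()
active_sprints.inc()
```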
Start the full monitoring stack (API + Prometheus + Grafana):
```bash
docker-compose up -d
```

Access the services:
- API: http://localhost:8000
- API Docs: http://localhost:8000/docs
- Metrics: http://localhost:8000/metrics
- Prometheus: http://localhost:9090
- Grafana: http://localhost:3000 (admin/admin)
- Log in to Grafana at http://localhost:3000
- Default credentials: admin/admin
- The dashboard "Sprint Capacity API - Overview" is auto-provisioned
- The Prometheus datasource is pre-configured
The application outputs structured JSON logs:
```json
{
  "timestamp": "2025-01-09T10:30:45.123Z",
  "level": "INFO",
  "logger": "app.routes.sprints",
  "message": "Request completed: POST /v1/sprints",
  "request_id": "550e8400-e29b-41d4-a716-446655440000",
  "method": "POST",
  "endpoint": "/v1/sprints",
  "status_code": 201,
  "duration_ms": 45.23
}
```

Filter logs by request ID to trace a request through the system.
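For example, a small Python filter that keeps only the lines for one request (the ID below is taken from the sample record, and the script name in the comment is illustrative):

```python
import json
import sys

TARGET = "550e8400-e29b-41d4-a716-446655440000"  # request ID to trace

# Pipe log output in, e.g.: docker logs sprint-api | python trace_request.py
for line in sys.stdin:
    try:
        record = json.loads(line)
    except json.JSONDecodeError:
        continue  # skip any non-JSON lines
    if record.get("request_id") == TARGET:
        print(line, end="")
```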
Build and run with Docker:

```bash
docker build -t sprint-capacity-api .
docker run -d -p 8000:8000 --name sprint-api sprint-capacity-api
curl http://localhost:8000/health
```

The API publishes domain events to Kafka for event-driven integration:
- `sprint.lifecycle` - Sprint creation and deletion events
- `sprint.team-members` - Team member addition and update events
```bash
# Start Kafka
docker-compose up zookeeper kafka

# Run event consumer (in another terminal)
python examples/simple_consumer.py

# Start API
uvicorn app.main:app --reload
```

See the Kafka Events Documentation for:
- Event schemas
- Consumer examples (Python, Node.js)
- Integration patterns
- Production setup
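As a minimal illustration, a kafka-python subscriber for both topics could look like this sketch (the broker address, consumer group, and JSON payload assumption are illustrative; see examples/simple_consumer.py for the project's own consumer):

```python
import json

from kafka import KafkaConsumer  # pip install kafka-python

consumer = KafkaConsumer(
    "sprint.lifecycle",
    "sprint.team-members",
    bootstrap_servers="localhost:9092",  # assumed local broker address
    group_id="sprint-events-demo",       # illustrative consumer group
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)

for message in consumer:
    print(f"{message.topic}: {message.value}")
```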
Use Cases:
- Send email notifications to team members
- Update analytics dashboards in real-time
- Sync with external tools (JIRA, Slack, etc.)
- Maintain audit trails and event sourcing
- ✅ API specification created (OpenAPI 3.0)
- ✅ Server code implemented (FastAPI + Python)
- ✅ Frontend application (React)
- ✅ Comprehensive test suite (94+ tests)
- ✅ CI/CD pipeline (GitHub Actions)
- ✅ Docker containerization
- ✅ Monitoring and observability (Prometheus + Grafana)
- ✅ Event-driven architecture (Kafka integration)
- ⏳ Production deployment (pending)
- ⏳ Blue/green deployment (paused)