This project is a high-performance, full-stack Task Management application built with a focus on scalability, clean architecture, and observability. It demonstrates a production-ready approach to building systems using FastAPI, React, and modern DevOps practices.
The project follows a Layered Architecture (Service-Repository Pattern) to ensure a strict separation of concerns, making the codebase highly testable and maintainable:
- API Layer (/api/v1): RESTful endpoints with full versioning support.
- Service Layer: Orchestrates business logic and handles cross-cutting concerns like triggering background audit logs.
- Repository Layer: Abstracted data access using SQLAlchemy 2.0 Async, decoupling business logic from the ORM.
- Models & Schemas: Pydantic v2 for strict data validation and Type-safe database models.
- Frontend (React + AntD): A modular, feature-based UI using React Query for efficient server-state management and caching.
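The service-repository layering above can be sketched as follows. This is an illustrative, dependency-free sketch: the in-memory `TaskRepository` stands in for the real SQLAlchemy 2.0 async repository, and all names (`Task`, `TaskService`, `create_task`) are hypothetical.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Task:
    id: int
    title: str
    status: str = "TODO"

class TaskRepository:
    """Data-access layer: knows how tasks are stored, nothing about business rules."""
    def __init__(self) -> None:
        self._rows: dict[int, Task] = {}
        self._next_id = 1

    def add(self, title: str) -> Task:
        task = Task(id=self._next_id, title=title)
        self._rows[task.id] = task
        self._next_id += 1
        return task

    def get(self, task_id: int) -> Optional[Task]:
        return self._rows.get(task_id)

class TaskService:
    """Business layer: orchestrates logic and cross-cutting concerns (e.g. audit)."""
    def __init__(self, repo: TaskRepository) -> None:
        self.repo = repo

    def create_task(self, title: str) -> Task:
        if not title.strip():
            raise ValueError("title must not be empty")
        task = self.repo.add(title)
        # the real service would also schedule an audit-log background task here
        return task

task = TaskService(TaskRepository()).create_task("write README")
assert task.id == 1 and task.status == "TODO"
```

Because the service depends only on the repository's interface, tests can inject a fake repository and the real ORM can be swapped without touching business logic.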
- Asyncpg & SQLAlchemy 2.0: The entire database layer is non-blocking. Using the `asyncpg` dialect allows the system to handle a high volume of concurrent connections with minimal I/O overhead.
- Asynchronous Background Tasks: Audit logging is handled via FastAPI's BackgroundTasks. This ensures that logging operations do not block the main request-response cycle, maintaining low latency for the end user.
- Multi-stage Docker Builds: Optimized Dockerfiles separate the build-time environment from the final runtime image, significantly reducing image size and improving security.
- Caddy Server: For the production frontend, Caddy was chosen over Nginx. It provides modern defaults, automatic HTTPS readiness, and handles SPA routing natively and efficiently.
- Staging & Environments: Separate Docker Compose files are provided for Local Development (with hot-reload) and Production (with Caddy & optimized builds).
- Structured Logging: Implemented with structlog to produce machine-readable JSON logs, ready for indexing by ELK or Grafana.
- Audit Trails: Every state transition (Create, Update, Delete) triggers an asynchronous audit event to track user actions for compliance and debugging.
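The deferred audit-logging pattern described above can be sketched without the framework. The `BackgroundTasks` class below is a minimal stand-in mirroring `fastapi.BackgroundTasks` (in the real app you would import it and declare it as an endpoint parameter); `audit_sink`, `write_audit`, and `create_task_endpoint` are hypothetical names, and the JSON-per-line audit record follows the spirit of the structlog output.

```python
import json
from datetime import datetime, timezone
from typing import Any, Callable

class BackgroundTasks:
    """Minimal stand-in for fastapi.BackgroundTasks: callables queued during the
    request run only after the response has been returned."""
    def __init__(self) -> None:
        self._queue: list[tuple[Callable[..., Any], tuple[Any, ...]]] = []

    def add_task(self, fn: Callable[..., Any], *args: Any) -> None:
        self._queue.append((fn, args))

    def run_all(self) -> None:  # the framework does this after sending the response
        for fn, args in self._queue:
            fn(*args)

audit_sink: list[str] = []  # stands in for the audit table / log stream

def write_audit(action: str, task_id: int) -> None:
    # One machine-readable JSON line per state transition.
    audit_sink.append(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event": "task.audit",
        "action": action,
        "task_id": task_id,
    }))

def create_task_endpoint(title: str, background_tasks: BackgroundTasks) -> dict:
    task = {"id": 1, "title": title, "status": "TODO"}  # pretend DB insert
    background_tasks.add_task(write_audit, "create", task["id"])
    return task  # returned before any audit I/O happens

bg = BackgroundTasks()
create_task_endpoint("ship v1", bg)
assert audit_sink == []  # nothing logged yet: request latency unaffected
bg.run_all()
assert json.loads(audit_sink[0])["action"] == "create"
```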
Create a .env file in the root directory. Example configuration:
PROJECT_NAME="Task Management API"
API_V1_STR="/api/v1"
POSTGRES_USER=admin
POSTGRES_PASSWORD=super_secure_password_123!
POSTGRES_DB=taskdb
POSTGRES_PORT=5432
POSTGRES_SERVER=db # Use 'db' for Docker, 'localhost' for local execution
SECRET_KEY=09d25e094faa6ca2556c818166b7a9563b93f7099f6f0f4caa6cf63b88e8d3e7
VITE_API_BASE_URL=http://localhost:8000/api/v1

To run the full stack locally:

docker-compose up --build

- Frontend: http://localhost:5173
- Backend Swagger: http://localhost:8000/docs
To run the production build:

docker-compose -f docker-compose.prod.yml up --build

- Frontend: http://localhost:8080
The system includes comprehensive integration tests using a production-like flow.
- Tool: `pytest` with `pytest-asyncio` for non-blocking test execution.
- Coverage:
  - Full CRUD lifecycle of tasks.
  - Authentication guardrails (unauthorized access attempts).
  - Advanced filtering & pagination: verifying server-side logic for limit/offset and status-based filtering.
- Database: Uses `aiosqlite` (in-memory) to ensure tests are isolated, side-effect free, and extremely fast.
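An async lifecycle test in this style can be sketched as below. `FakeAsyncClient` is a hypothetical stand-in for `httpx.AsyncClient`; the real suite would drive the FastAPI app over ASGI against an in-memory `aiosqlite` database, with `pytest-asyncio` owning the event loop instead of the explicit `asyncio.run`.

```python
import asyncio
from typing import Optional

class FakeAsyncClient:
    """Hypothetical stand-in for httpx.AsyncClient wired to the app."""
    def __init__(self) -> None:
        self._tasks: dict[int, dict] = {}
        self._next_id = 1

    async def post(self, path: str, json: dict) -> tuple[int, dict]:
        task = {"id": self._next_id, "status": "TODO", **json}
        self._tasks[self._next_id] = task
        self._next_id += 1
        return 201, task

    async def get(self, path: str) -> tuple[int, Optional[dict]]:
        task_id = int(path.rsplit("/", 1)[1])
        task = self._tasks.get(task_id)
        return (200, task) if task else (404, None)

async def test_task_lifecycle() -> None:
    client = FakeAsyncClient()
    status, created = await client.post("/api/v1/tasks", json={"title": "demo"})
    assert status == 201
    status, fetched = await client.get(f"/api/v1/tasks/{created['id']}")
    assert status == 200 and fetched["title"] == "demo"

asyncio.run(test_task_lifecycle())
```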
To run tests:
docker-compose exec backend pytest

In a real-world enterprise production system, I would implement:
- Distributed Task Queues: Migrate from BackgroundTasks to Celery + Redis for complex processing and persistent execution.
- Distributed Caching: Integrate Redis to cache frequently accessed task lists and user sessions, reducing DB load.
- Advanced Observability: Implement OpenTelemetry for distributed tracing and Prometheus/Grafana for real-time monitoring.
- Infrastructure as Code (IaC): Utilize Terraform or AWS CDK for provisioning cloud resources (RDS, ECS, VPC).
- CI/CD Pipelines: Set up GitHub Actions to automate linting (Flake8), formatting checks (Black), and deployments to AWS/GCP.
- User authentication is handled via stateless JWT bearer tokens.
- Task status follows a logical transition flow: TODO -> IN PROGRESS -> DONE.
- The system is designed to be horizontally scalable by spinning up multiple backend container instances behind a load balancer.
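The status flow above can be enforced with a small transition table. This sketch assumes the flow is strictly forward (whether backward moves such as DONE -> IN PROGRESS are permitted is not stated, so they are rejected here); the names `TaskStatus` and `validate_transition` are illustrative.

```python
from enum import Enum

class TaskStatus(str, Enum):
    TODO = "TODO"
    IN_PROGRESS = "IN_PROGRESS"
    DONE = "DONE"

# Assumption: each status may only advance to the next stage in the flow.
ALLOWED_TRANSITIONS: dict[TaskStatus, set[TaskStatus]] = {
    TaskStatus.TODO: {TaskStatus.IN_PROGRESS},
    TaskStatus.IN_PROGRESS: {TaskStatus.DONE},
    TaskStatus.DONE: set(),
}

def validate_transition(current: TaskStatus, new: TaskStatus) -> TaskStatus:
    """Reject any status change that is not in the transition table."""
    if new not in ALLOWED_TRANSITIONS[current]:
        raise ValueError(f"illegal transition: {current.value} -> {new.value}")
    return new

validate_transition(TaskStatus.TODO, TaskStatus.IN_PROGRESS)
```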