A comprehensive FastAPI-based logistics platform with pickup points, similar to DHL Packstations or UPS Access Points.
- User Management: Registration, authentication, and role-based access control
- Pickup Points: CRUD operations with geolocation support
- Shipment Management: Create, track, and manage shipments
- Pricing Engine: Dynamic pricing based on distance and weight
- Payment Integration: Stripe payment processing
- Tracking System: Real-time shipment tracking with status updates
- Notifications: Email and SMS notifications
- Admin Dashboard: Comprehensive reporting and analytics
- RESTful API: Full OpenAPI/Swagger documentation
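The dynamic pricing feature above can be sketched as a pure function of distance and weight; the base fee and per-km/per-kg rates below are hypothetical placeholders, not the platform's actual tariff:

```python
from decimal import Decimal, ROUND_HALF_UP

# Hypothetical rates -- the real tariff lives in the pricing service.
BASE_FEE = Decimal("4.90")      # flat handling fee per shipment
RATE_PER_KM = Decimal("0.05")   # distance component
RATE_PER_KG = Decimal("0.80")   # weight component

def calculate_price(distance_km: float, weight_kg: float) -> Decimal:
    """Return a shipment price rounded to whole cents."""
    if distance_km < 0 or weight_kg <= 0:
        raise ValueError("distance must be >= 0 and weight > 0")
    total = (BASE_FEE
             + RATE_PER_KM * Decimal(str(distance_km))
             + RATE_PER_KG * Decimal(str(weight_kg)))
    return total.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)
```

Using `Decimal` avoids binary floating-point rounding surprises in monetary amounts.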
- FastAPI: Modern, fast web framework for building APIs
- SQLAlchemy: SQL toolkit and ORM
- PostgreSQL: Primary database
- Alembic: Database migration tool
- JWT: JSON Web Tokens for authentication
- Stripe: Payment processing
- SendGrid: Email notifications
- Twilio: SMS notifications
- Docker: Containerization
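The JWT authentication listed above would normally be handled by a library such as PyJWT or python-jose; the stdlib sketch below only illustrates what an HS256-signed token contains (the secret here is a placeholder):

```python
import base64
import hashlib
import hmac
import json
import time

SECRET_KEY = "change-me-in-production"  # loaded from the environment in a real app

def _b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def create_token(sub: str, ttl_seconds: int = 3600) -> str:
    """Build a header.payload.signature HS256 token for subject `sub`."""
    header = _b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = _b64url(json.dumps({"sub": sub, "exp": int(time.time()) + ttl_seconds}).encode())
    signing_input = f"{header}.{payload}".encode()
    sig = hmac.new(SECRET_KEY.encode(), signing_input, hashlib.sha256).digest()
    return f"{header}.{payload}.{_b64url(sig)}"

def verify_token(token: str) -> dict:
    """Check the signature and expiry, returning the claims if valid."""
    header, payload, sig = token.split(".")
    signing_input = f"{header}.{payload}".encode()
    expected = hmac.new(SECRET_KEY.encode(), signing_input, hashlib.sha256).digest()
    if not hmac.compare_digest(_b64url(expected), sig):
        raise ValueError("bad signature")
    padded = payload + "=" * (-len(payload) % 4)
    claims = json.loads(base64.urlsafe_b64decode(padded))
    if claims["exp"] < time.time():
        raise ValueError("token expired")
    return claims
```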
```
backend/
├── app/
│   ├── auth/            # Authentication utilities
│   ├── crud/            # Database CRUD operations
│   ├── models/          # SQLAlchemy models
│   ├── routers/         # API route handlers
│   ├── schemas/         # Pydantic schemas
│   ├── services/        # Business logic services
│   ├── utils/           # Utility functions
│   ├── config.py        # Application configuration
│   ├── database.py      # Database connection
│   ├── dependencies.py  # FastAPI dependencies
│   └── main.py          # Application entry point
├── alembic/             # Database migrations
├── tests/               # Test files
└── requirements.txt     # Python dependencies
```
- Python 3.11+
- PostgreSQL 14+
- Redis (optional, for caching)
1. Clone the repository:

   ```bash
   git clone <repository-url>
   cd logistics-platform/backend
   ```

2. Create and activate a virtual environment:

   ```bash
   python -m venv venv
   source venv/bin/activate  # On Windows: venv\Scripts\activate
   ```

3. Install dependencies:

   ```bash
   pip install -r requirements.txt
   ```

4. Set up environment variables:

   ```bash
   cp .env.example .env
   # Edit .env with your configuration
   ```

5. Set up the database:

   ```bash
   # Create the PostgreSQL database
   createdb logistics_db
   # Run migrations
   alembic upgrade head
   ```

6. Run the application:

   ```bash
   uvicorn app.main:app --reload --host 0.0.0.0 --port 8000
   ```
The API will be available at http://localhost:8000.

- Interactive API docs: http://localhost:8000/docs
- Alternative docs: http://localhost:8000/redoc
| Variable | Description | Default |
|---|---|---|
| `DATABASE_URL` | PostgreSQL connection string | `postgresql://postgres:password@localhost/logistics_db` |
| `SECRET_KEY` | JWT secret key | `your-secret-key-change-in-production` |
| `STRIPE_SECRET_KEY` | Stripe secret key | - |
| `SENDGRID_API_KEY` | SendGrid API key | - |
| `TWILIO_ACCOUNT_SID` | Twilio account SID | - |
| `TWILIO_AUTH_TOKEN` | Twilio auth token | - |
1. Install PostgreSQL with PostGIS (for geolocation features):

   ```bash
   # Ubuntu/Debian
   sudo apt-get install postgresql postgresql-contrib postgis
   # macOS
   brew install postgresql postgis
   ```

2. Create the database and enable PostGIS:

   ```sql
   CREATE DATABASE logistics_db;
   \c logistics_db
   CREATE EXTENSION postgis;
   ```

3. Run migrations:

   ```bash
   alembic upgrade head
   ```
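With PostGIS enabled, the pickup-point search can filter by distance directly in SQL (e.g. with `ST_DWithin`). For illustration, the same great-circle distance can be computed in pure Python with the haversine formula:

```python
from math import asin, cos, radians, sin, sqrt

EARTH_RADIUS_KM = 6371.0

def haversine_km(lat1, lon1, lat2, lon2) -> float:
    """Great-circle distance between two (lat, lon) points in kilometres."""
    phi1, phi2 = radians(lat1), radians(lat2)
    dphi = radians(lat2 - lat1)
    dlmb = radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(phi1) * cos(phi2) * sin(dlmb / 2) ** 2
    return 2 * EARTH_RADIUS_KM * asin(sqrt(a))

def nearby(points, lat, lon, radius_km):
    """Filter (id, lat, lon) tuples within radius_km, nearest first."""
    hits = [(haversine_km(lat, lon, p[1], p[2]), p) for p in points]
    return [p for d, p in sorted(hits) if d <= radius_km]
```

In production the database should do this filtering (with a spatial index); scanning all points in Python is only viable for small datasets.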
- `POST /api/v1/auth/register` - Register a new user
- `POST /api/v1/auth/login` - User login
- `POST /api/v1/auth/token` - OAuth2 token endpoint
- `GET /api/v1/users/me` - Get current user
- `PUT /api/v1/users/me` - Update current user
- `GET /api/v1/users/` - List all users (admin)
- `GET /api/v1/pickup-points/` - List pickup points
- `POST /api/v1/pickup-points/search` - Search by location
- `GET /api/v1/pickup-points/{id}` - Get pickup point
- `POST /api/v1/pickup-points/` - Create pickup point (admin)
- `POST /api/v1/shipments/calculate-price` - Calculate price
- `POST /api/v1/shipments/` - Create shipment
- `GET /api/v1/shipments/my` - Get user's shipments
- `GET /api/v1/shipments/{id}` - Get shipment details
- `GET /api/v1/tracking/{tracking_number}` - Track shipment
- `POST /api/v1/tracking/{tracking_number}/update` - Update status (admin)
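Tracking numbers like the ones these endpoints consume are often a carrier prefix plus digits plus a check digit. The format below (prefix `LP`, ten digits, Luhn check digit) is purely hypothetical, not the platform's actual scheme:

```python
import random
import string

PREFIX = "LP"  # hypothetical carrier prefix

def _check_digit(digits: str) -> int:
    """Luhn check digit over a numeric string."""
    total = 0
    for i, ch in enumerate(reversed(digits)):
        d = int(ch)
        if i % 2 == 0:       # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return (10 - total % 10) % 10

def generate_tracking_number(rng=None) -> str:
    """Produce PREFIX + 10 random digits + Luhn check digit."""
    rng = rng or random.Random()
    body = "".join(rng.choices(string.digits, k=10))
    return f"{PREFIX}{body}{_check_digit(body)}"

def is_valid(tracking_number: str) -> bool:
    """Cheap client-side sanity check before hitting the tracking API."""
    if not tracking_number.startswith(PREFIX) or len(tracking_number) != len(PREFIX) + 11:
        return False
    body, check = tracking_number[len(PREFIX):-1], tracking_number[-1]
    return body.isdigit() and check.isdigit() and _check_digit(body) == int(check)
```

A check digit lets the API reject obvious typos with a 400 before querying the database.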
- `POST /api/v1/payments/intent` - Create payment intent
- `POST /api/v1/payments/` - Create payment
- `GET /api/v1/payments/my` - Get user's payments
- `GET /api/v1/reports/shipments` - Shipment statistics
- `GET /api/v1/reports/revenue` - Revenue statistics
- `GET /api/v1/reports/dashboard` - Dashboard data
Run the test suite:

```bash
pytest
```

Run with coverage:

```bash
pytest --cov=app --cov-report=html
```
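Endpoint tests in the suite would typically go through `fastapi.testclient.TestClient`; as a shape reference, here is a plain pytest-style unit test of a hypothetical shipment-status helper (the transition table is illustrative, not the app's actual state machine):

```python
# tests/test_shipment_status.py -- illustrative unit test; real endpoint
# tests would exercise the API via fastapi.testclient.TestClient.

ALLOWED_TRANSITIONS = {
    "created":    {"picked_up", "cancelled"},
    "picked_up":  {"in_transit", "cancelled"},
    "in_transit": {"delivered", "returned"},
}

def can_transition(current: str, new: str) -> bool:
    """True if a shipment may move from `current` to `new`."""
    return new in ALLOWED_TRANSITIONS.get(current, set())

def test_forward_transitions_allowed():
    assert can_transition("created", "picked_up")
    assert can_transition("in_transit", "delivered")

def test_terminal_states_are_final():
    assert not can_transition("delivered", "in_transit")
    assert not can_transition("cancelled", "picked_up")
```

pytest discovers `test_*` functions automatically; no imports are needed for bare `assert` statements.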
```bash
docker build -t logistics-api .
docker run -p 8000:8000 logistics-api
```
Create a `docker-compose.yml`:

```yaml
version: '3.8'

services:
  db:
    image: postgis/postgis:14-3.2
    environment:
      POSTGRES_DB: logistics_db
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: password
    volumes:
      - postgres_data:/var/lib/postgresql/data
    ports:
      - "5432:5432"

  api:
    build: .
    ports:
      - "8000:8000"
    environment:
      DATABASE_URL: postgresql://postgres:password@db/logistics_db
    depends_on:
      - db

volumes:
  postgres_data:
```
Run with:

```bash
docker-compose up
```
1. Build the image:

   ```bash
   docker build -t logistics-api .
   ```

2. Run with environment variables:

   ```bash
   docker run -p 8000:8000 \
     -e DATABASE_URL=your-database-url \
     -e SECRET_KEY=your-secret-key \
     logistics-api
   ```
- Install dependencies on server
- Set up PostgreSQL database
- Configure environment variables
- Run migrations:

  ```bash
  alembic upgrade head
  ```

- Start with Gunicorn using Uvicorn's ASGI worker class:

  ```bash
  gunicorn app.main:app -w 4 -k uvicorn.workers.UvicornWorker
  ```
- JWT-based authentication
- Password hashing with bcrypt
- Role-based access control (RBAC)
- CORS configuration
- SQL injection protection via SQLAlchemy ORM
- Input validation with Pydantic
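The role-based access control listed above could be enforced along these lines; in the FastAPI app it would normally live in a dependency (e.g. `Depends(require_role("admin"))`) rather than a decorator, and the role ranking here is an assumption:

```python
from functools import wraps

# Hypothetical role hierarchy: higher rank implies more privileges.
ROLE_RANK = {"customer": 0, "courier": 1, "admin": 2}

class Forbidden(Exception):
    """Raised when the caller's role is insufficient (HTTP 403 in the API)."""

def require_role(minimum: str):
    """Reject calls whose `user` keyword argument lacks the minimum role."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, user, **kwargs):
            if ROLE_RANK.get(user.get("role"), -1) < ROLE_RANK[minimum]:
                raise Forbidden(f"requires role >= {minimum}")
            return fn(*args, user=user, **kwargs)
        return wrapper
    return decorator

@require_role("admin")
def list_all_users(user):
    # Placeholder body standing in for a real admin-only query.
    return ["alice", "bob"]
```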
- Secure SECRET_KEY: Generate cryptographically secure JWT secret key
- Environment-specific configs: Separate dev/staging/prod configurations
- CORS configuration: Update ALLOWED_HOSTS to specific domains only
- HTTPS enforcement: Force HTTPS in production
- Database connection pooling: Configure SQLAlchemy connection pool
- Environment validation: Validate all required environment variables on startup
- Secrets management: Use external secret management (AWS Secrets Manager, HashiCorp Vault)
- Rate limiting: Implement API rate limiting with Redis/memory store
- Request size limits: Set maximum request payload size
- Input validation middleware: Add comprehensive input sanitization
- SQL injection prevention: Review and audit all raw SQL queries
- XSS protection: Add security headers middleware
- CSRF protection: Implement CSRF tokens for state-changing operations
- API key authentication: Add API key auth for service-to-service calls
- Password policies: Enforce strong password requirements
- Account lockout: Implement failed login attempt limits
- Session management: Add proper session invalidation
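As one way to approach the account-lockout item above, a sketch of an in-memory failed-login tracker; the limits are hypothetical, and production would back this with Redis so the state survives restarts and is shared across workers:

```python
import time
from collections import defaultdict

MAX_ATTEMPTS = 5        # hypothetical policy values
LOCKOUT_SECONDS = 900   # 15 minutes

class LoginThrottle:
    """Tracks recent failed logins per account and reports lockout status."""
    def __init__(self, clock=time.monotonic):
        self._clock = clock                 # injectable for testing
        self._failures = defaultdict(list)  # email -> [failure timestamps]

    def is_locked(self, email: str) -> bool:
        cutoff = self._clock() - LOCKOUT_SECONDS
        recent = [t for t in self._failures[email] if t > cutoff]
        self._failures[email] = recent      # drop expired entries
        return len(recent) >= MAX_ATTEMPTS

    def record_failure(self, email: str) -> None:
        self._failures[email].append(self._clock())

    def record_success(self, email: str) -> None:
        self._failures.pop(email, None)     # successful login clears the slate
```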
- Structured logging: Implement JSON-structured logging with correlation IDs
- Application metrics: Add Prometheus/OpenTelemetry metrics
- Health checks: Enhanced health checks (database, external services)
- Error tracking: Integrate Sentry or similar error tracking service
- Performance monitoring: Add APM tools (New Relic, DataDog, etc.)
- Log aggregation: Set up centralized logging (ELK stack, CloudWatch)
- Alerting: Configure alerts for critical errors and performance issues
- Database monitoring: Monitor query performance and connection pool
- Business metrics: Track key business metrics (shipments, revenue, etc.)
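The structured-logging item above could start from a small JSON formatter carrying a correlation id; in the FastAPI app, middleware would set the id per request (e.g. from an `X-Request-ID` header) and attach it via `extra=`:

```python
import json
import logging
import uuid

class JsonFormatter(logging.Formatter):
    """Emit one JSON object per log line, carrying a correlation id."""
    def format(self, record):
        return json.dumps({
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
            "correlation_id": getattr(record, "correlation_id", None),
        })

logger = logging.getLogger("logistics")
handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger.addHandler(handler)
logger.setLevel(logging.INFO)

# A request middleware would generate this once per request.
cid = str(uuid.uuid4())
logger.info("shipment created", extra={"correlation_id": cid})
```

One JSON object per line is what log aggregators (ELK, CloudWatch) expect, and the correlation id lets you stitch together all lines for one request.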
- Database indexing: Optimize database indexes for common queries
- Query optimization: Review and optimize slow database queries
- Caching layer: Implement Redis caching for frequently requested data
- Connection pooling: Configure optimal database connection pools
- Async operations: Convert blocking operations to async where possible
- Background tasks: Implement Celery/RQ for background job processing
- File upload optimization: Optimize file handling and storage
- Database migrations: Test migration scripts with large datasets
- Load testing: Perform load testing with realistic traffic patterns
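The caching-layer item could begin as a tiny in-process TTL cache like the sketch below before graduating to Redis; the interface here is an assumption, not the project's actual cache API:

```python
import time

class TTLCache:
    """Minimal in-process cache with per-entry expiry."""
    def __init__(self, ttl_seconds: float, clock=time.monotonic):
        self._ttl = ttl_seconds
        self._clock = clock        # injectable for testing
        self._store = {}           # key -> (expires_at, value)

    def get(self, key, default=None):
        entry = self._store.get(key)
        if entry is None:
            return default
        expires_at, value = entry
        if self._clock() >= expires_at:
            del self._store[key]   # lazily evict expired entries
            return default
        return value

    def set(self, key, value):
        self._store[key] = (self._clock() + self._ttl, value)
```

An in-process cache is per-worker; with multiple Uvicorn workers the hit rate drops and invalidation gets hard, which is why the checklist points at Redis.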
- Unit tests: Achieve >80% code coverage with unit tests
- Integration tests: Test API endpoints with database interactions
- End-to-end tests: Automated testing of complete user workflows
- Load testing: Performance testing under expected load
- Security testing: OWASP security testing and vulnerability scanning
- Database testing: Test migration scripts and data integrity
- Error scenario testing: Test error handling and recovery
- API contract testing: Ensure API contracts are maintained
- Dependency security: Regular dependency vulnerability scanning
- Production Dockerfile: Multi-stage Dockerfile optimized for production
- Docker security: Non-root user, minimal base image, security scanning
- Container orchestration: Kubernetes/Docker Swarm deployment configs
- Load balancer: Configure load balancer with health checks
- Auto-scaling: Implement horizontal pod/container auto-scaling
- Database backups: Automated database backup and recovery procedures
- Blue/green deployment: Zero-downtime deployment strategy
- Infrastructure as Code: Terraform/CloudFormation for infrastructure
- SSL/TLS certificates: Automated certificate management
- Reverse proxy: Configure Nginx/Traefik reverse proxy
- Documentation: Complete API documentation and deployment guides
- Runbooks: Operational runbooks for common issues and procedures
- Disaster recovery: Document and test disaster recovery procedures
- Compliance: Ensure GDPR/CCPA compliance for user data
- Audit logging: Log all administrative and sensitive operations
- Data retention: Implement data retention and cleanup policies
- Backup verification: Regular backup restoration testing
- Change management: Formal change management process
- Incident response: Incident response plan and escalation procedures
- Feature flags: Implement feature flag system for gradual rollouts
- API versioning: Proper API versioning strategy
- Request tracing: Distributed tracing for microservices
- Circuit breakers: Implement circuit breakers for external services
- Graceful shutdown: Handle graceful application shutdown
- Resource limits: Set appropriate CPU/memory limits
- Database migration validation: Validate migrations in staging first
- External service monitoring: Monitor dependencies (Stripe, SendGrid, etc.)
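A minimal circuit breaker for calls to the external services above (Stripe, SendGrid, Twilio) might look like the sketch below; the thresholds are illustrative, and a production system would likely use a maintained library such as `pybreaker` or `tenacity`:

```python
import time

class CircuitBreaker:
    """Open the circuit after repeated failures; retry after a cooldown."""
    def __init__(self, max_failures=3, reset_seconds=30.0, clock=time.monotonic):
        self._max_failures = max_failures
        self._reset_seconds = reset_seconds
        self._clock = clock          # injectable for testing
        self._failures = 0
        self._opened_at = None       # timestamp when the circuit opened

    def call(self, fn, *args, **kwargs):
        if self._opened_at is not None:
            if self._clock() - self._opened_at < self._reset_seconds:
                raise RuntimeError("circuit open: external service unavailable")
            self._opened_at = None   # half-open: allow one trial call
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self._failures += 1
            if self._failures >= self._max_failures:
                self._opened_at = self._clock()
            raise
        self._failures = 0           # any success closes the circuit
        return result
```

Failing fast while the circuit is open keeps request threads from piling up behind a dead dependency.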
- Full API implementation with authentication
- Database schema and migrations
- Role-based access control
- Payment processing integration
- Basic error handling
- Docker containerization
- API documentation
- Environment configuration
- Basic security measures
- Health check endpoints
- Security hardening
- Comprehensive monitoring
- Production-grade testing
- Performance optimization
- Operational procedures
- Implement critical security features
- Add comprehensive logging and monitoring
- Set up error tracking and alerts
- Achieve comprehensive test coverage
- Perform load testing and optimization
- Implement caching and performance improvements
- Set up CI/CD pipeline
- Configure production infrastructure
- Document operational procedures
- Implement backup and disaster recovery
- Fork the repository
- Create a feature branch
- Make your changes
- Add tests for new functionality
- Run the test suite
- Submit a pull request
This project is licensed under the MIT License.
For support and questions:
- Create an issue in the repository
- Check the API documentation at `/docs`
- Review the test files for usage examples
Built with ❤️ using FastAPI and modern Python tools.