A minimal implementation of SelfDB with only the essential features: FastAPI backend, React + TypeScript frontend, and PostgreSQL database with PgBouncer connection pooling.
- Architecture
- Prerequisites
- Quick Start
- Development Setup
- SDK Generation
- Testing
- Backup & Restore
- Project Structure
- API Documentation
- Troubleshooting
- Contributing
- License
- Learn More
┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐
│ │ │ │ │ │
│ React Frontend │────▶│ FastAPI Backend│────▶│ PostgreSQL │
│ (Vite + TS) │ │ (Python) │ │ + PgBouncer │
│ Port: 5173 │ │ Port: 8000 │ │ Port: 5433 │
│ │ │ │ │ │
└─────────────────┘ └─────────────────┘ └─────────────────┘
- Python 3.11+ with uv package manager
- Node.js 18+ with npm or pnpm
- Docker and Docker Compose (for database)
- PostgreSQL client (optional, for direct DB access)
Run the entire stack with a single command:
# Build and start all services
docker-compose up -d --build
# View logs
docker-compose logs -f
# Stop all services
docker-compose down
This starts:
- PostgreSQL on port 5433
- PgBouncer on port 6432 (connection pooling)
- Backend (FastAPI) on port 8000
- Frontend (React) on port 80
Access the application:
- Frontend: http://localhost
- Backend API: http://localhost:8000
- API Docs: http://localhost:8000/docs
For development with hot-reload:
docker-compose up -d db pgbouncer
cd backend
uv run fastapi dev
The API will be available at http://localhost:8000
- API Docs (Swagger): http://localhost:8000/docs
- API Docs (ReDoc): http://localhost:8000/redoc
- OpenAPI JSON: http://localhost:8000/openapi.json
cd frontend
npm install # or pnpm install
npm run dev     # or pnpm dev
The frontend will be available at http://localhost:5173
| Service | Image/Build | Port | Description |
|---|---|---|---|
| db | postgres:18 | 5433 | PostgreSQL database |
| pgbouncer | Custom build | 6432 | Connection pooling |
| backend | ./backend | 8000 | FastAPI application |
| frontend | ./frontend | 80 | React + Nginx |
# Build and start all services
docker-compose up -d --build
# Start only database services (for local development)
docker-compose up -d db pgbouncer
# Rebuild a specific service
docker-compose build backend
docker-compose up -d backend
# View logs for a specific service
docker-compose logs -f backend
# Execute command in running container
docker-compose exec backend uv run python -c "print('hello')"
# Stop all services
docker-compose down
# Stop and remove volumes (clean database)
docker-compose down -v
# View running services
docker-compose ps
The database runs in Docker via docker-compose.yml:
# Start database services only
docker-compose up -d db pgbouncer
# View logs
docker-compose logs -f db
# Stop services
docker-compose down
# Stop and remove volumes (clean slate)
docker-compose down -v
Connection Details:
| Service | Host | Port | User | Password | Database |
|---|---|---|---|---|---|
| PostgreSQL | localhost | 5433 | postgres | postgres | dayone |
| PgBouncer | localhost | 6432 | postgres | postgres | dayone |
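For scripted access, the connection details above can be turned into libpq-style DSNs. The helper below is purely illustrative (not part of the project code); the values mirror the dev defaults from the table:

```python
# Build PostgreSQL connection URLs from the dev defaults in the table above.
# The dsn() helper is illustrative, not part of the project code.
def dsn(host: str, port: int, user: str = "postgres",
        password: str = "postgres", db: str = "dayone") -> str:
    """Return a libpq-style connection URL."""
    return f"postgresql://{user}:{password}@{host}:{port}/{db}"

DIRECT_URL = dsn("localhost", 5433)  # straight to PostgreSQL
POOLED_URL = dsn("localhost", 6432)  # through PgBouncer
```

Point application traffic at the pooled URL and reserve the direct port for admin tasks such as dumps and restores.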
cd backend
# Install dependencies
uv sync
# Run development server (with hot reload)
uv run fastapi dev
# Run production server
uv run fastapi run
# Generate OpenAPI spec
uv run python -c "from main import app; import json; print(json.dumps(app.openapi()))" > openapi.json
Environment Variables:
Create a .env file in the backend directory if needed:
DATABASE_URL=postgresql+asyncpg://postgres:postgres@localhost:5433/dayone
API_KEY=Myapi-Key-for-dev
cd frontend
# Install dependencies
npm install
# Run development server
npm run dev
# Build for production
npm run build
# Preview production build
npm run preview
# Lint code
npm run lint
Generate a TypeScript client SDK from the OpenAPI spec for type-safe API calls.
cd backend
# Generate OpenAPI spec first
uv run python -c "from main import app; import json; print(json.dumps(app.openapi()))" > openapi.json
# Generate TypeScript SDK
npx -y @hey-api/openapi-ts \
-i openapi.json \
-o ../frontend/src/client \
-c @hey-api/client-fetch
The generated client will be in frontend/src/client/ with:
- sdk.gen.ts - API functions
- types.gen.ts - TypeScript types
- client.gen.ts - HTTP client configuration
Generate SDKs for multiple languages using Swagger Codegen:
TypeScript:
docker run --rm -v ${PWD}:/local \
swaggerapi/swagger-codegen-cli-v3 generate \
-i /local/openapi.json \
-l typescript-fetch \
-o /local/sdks/swagger-codegen/typescript
Python:
docker run --rm -v ${PWD}:/local \
swaggerapi/swagger-codegen-cli-v3 generate \
-i /local/openapi.json \
-l python \
-o /local/sdks/swagger-codegen/python
Swift:
docker run --rm -v ${PWD}:/local \
swaggerapi/swagger-codegen-cli-v3 generate \
-i /local/openapi.json \
-l swift5 \
-o /local/sdks/swagger-codegen/swift
Schemathesis automatically generates test cases from your OpenAPI schema to find bugs and edge cases.
cd backend
# Run all API contract tests
./run_schemathesis.sh
# Or run manually with more options
uv run schemathesis run http://localhost:8000/openapi.json \
--header "X-API-Key: Myapi-Key-for-dev" \
--checks all \
--stateful=links
# Generate a test report
uv run schemathesis run http://localhost:8000/openapi.json \
--header "X-API-Key: Myapi-Key-for-dev" \
--report
What it tests:
- ✅ Response schema validation
- ✅ Status code correctness
- ✅ Content-type headers
- ✅ Edge cases (empty strings, nulls, special characters)
- ✅ Stateful testing (API workflow sequences)
Apache Bench (ab) performs quick HTTP load tests.
cd backend
# Run with defaults (100 requests, 10 concurrent)
./ab_benchmark.sh
# Custom load test
./ab_benchmark.sh -n 500 -c 25
# Quick smoke test
./ab_benchmark.sh --quick
# Stress test (1000 requests, 100 concurrent)
./ab_benchmark.sh --stress
# Test against different host
./ab_benchmark.sh -h http://api.example.com
# Show help
./ab_benchmark.sh --help
Options:
| Flag | Description | Default |
|---|---|---|
| -n, --requests | Total number of requests | 100 |
| -c, --concurrency | Concurrent connections | 10 |
| -h, --host | API host URL | http://127.0.0.1:8000 |
| --quick | Quick test (50 req, 5 concurrent) | - |
| --stress | Stress test (1000 req, 100 concurrent) | - |
Output includes:
- Requests per second
- Time per request (latency)
- Failed requests count
- Summary table of all endpoints
Locust provides a web UI for interactive load testing with realistic user behavior simulation.
cd backend
# Start Locust with web UI
uv run locust -f locustfile.py --host=http://localhost:8000
# Then open http://localhost:8089 in your browser
Headless mode (CI/CD):
# Run for 1 minute with 100 users, spawning 10/second
uv run locust -f locustfile.py \
--host=http://localhost:8000 \
--users 100 \
--spawn-rate 10 \
--run-time 1m \
--headless
# Quick smoke test
uv run locust -f locustfile.py \
--host=http://localhost:8000 \
-u 10 -r 5 \
--run-time 30s \
--headless \
QuickSmokeTest
User Types:
| User Class | Description | Weight |
|---|---|---|
| AuthenticatedAPIUser | Full CRUD on all resources | 3 |
| PublicAPIUser | Public endpoints only | 1 |
| QuickSmokeTest | Rapid-fire test (explicit only) | 0 |
Web UI Features:
- Real-time charts (RPS, response times, failures)
- Per-endpoint statistics
- Download test reports
- Adjustable user count during test
SelfDB-mini includes a comprehensive backup system for disaster recovery and server migration.
| Feature | Description |
|---|---|
| Storage Location | ./backups/ folder in project root |
| Format | .tar.gz archive containing database dump + .env |
| Scheduling | Configurable via cron expression |
| Retention | Automatic cleanup of old backups |
What's included in a backup:
- database.sql - Full PostgreSQL database dump
- .env - Configuration file snapshot
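The archive layout can be sketched in a few lines of Python. This is illustrative only; the project's actual backup logic lives in backend/services/backup_service.py and may differ:

```python
# Illustrative sketch of the backup archive described above: a .tar.gz
# bundling database.sql and a .env snapshot. Not the project's real code.
import tarfile
from pathlib import Path
from tempfile import TemporaryDirectory

def make_backup_archive(dump_sql: str, env_text: str, out_path: Path) -> Path:
    """Write the dump and .env snapshot into a single .tar.gz archive."""
    with TemporaryDirectory() as tmp:
        dump = Path(tmp) / "database.sql"
        env = Path(tmp) / ".env"
        dump.write_text(dump_sql)
        env.write_text(env_text)
        with tarfile.open(out_path, "w:gz") as tar:
            # arcname keeps the files at the archive root, as described above
            tar.add(dump, arcname="database.sql")
            tar.add(env, arcname=".env")
    return out_path
```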
Set these variables in your .env file:
# Backup retention period (days)
BACKUP_RETENTION_DAYS=7
# Backup schedule (cron format: minute hour day month weekday)
# Default: Daily at 2:00 AM
BACKUP_SCHEDULE_CRON=0 2 * * *
Cron Examples:
| Schedule | Cron Expression |
|---|---|
| Daily at 2 AM | 0 2 * * * |
| Every 6 hours | 0 */6 * * * |
| Weekly on Sunday at 3 AM | 0 3 * * 0 |
| Every 12 hours | 0 0,12 * * * |
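Before deploying a new schedule, the five-field shape of these expressions can be sanity-checked with a few lines of Python. This is a loose, illustrative validator (digits, `*`, `,`, `-`, `/`), not a full cron parser:

```python
# Loose sanity check for BACKUP_SCHEDULE_CRON values: five fields,
# each built from digits, '*', ranges, steps, and comma lists.
# Illustrative only; it does not validate field numeric ranges.
import re

_FIELD = re.compile(r"^(\*|\d+)(-\d+)?(/\d+)?(,(\*|\d+)(-\d+)?(/\d+)?)*$")

def looks_like_cron(expr: str) -> bool:
    """Return True if expr has 5 fields of plausible cron syntax."""
    fields = expr.split()
    return len(fields) == 5 and all(_FIELD.match(f) for f in fields)
```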
- Login as an admin user
- Navigate to Backups page (in sidebar)
- Click Create Backup
- Download backups directly from the list
Backups run automatically based on BACKUP_SCHEDULE_CRON. The scheduler starts with the backend service.
View scheduled backup logs:
docker compose logs -f backend | grep -i backup
Backup files are stored in:
./backups/
├── dayone_backup_20251127_020000.tar.gz
├── dayone_backup_20251126_020000.tar.gz
└── ...
For headless servers or when you prefer the command line:
# List available backups
./restore_from_backup.sh
# Restore the most recent backup
./restore_from_backup.sh latest
# Restore a specific backup
./restore_from_backup.sh dayone_backup_20251127_113057.tar.gz
Example output:
═══════════════════════════════════════════════════════════════
Day-One Backup Restore Tool
═══════════════════════════════════════════════════════════════
Available backups in ./backups/:
# | Filename | Size | Date
-----+---------------------------------------+-----------+-------------------
1 | dayone_backup_20251127_113057.tar.gz | 420K | 2025-11-27 11:30:57
2 | dayone_backup_20251126_020000.tar.gz | 415K | 2025-11-26 02:00:00
Usage: ./restore_from_backup.sh <backup-filename>
./restore_from_backup.sh latest # Restore the most recent backup
When deploying to a new server, you can restore from backup via the login page:
- Fresh install - Deploy SelfDB-mini to the new server (no users exist yet)
- Copy backup - Place your .tar.gz backup in the ./backups/ folder
- Open login page - You'll see a "Restore from Backup" option
- Upload & restore - Select your backup file and confirm
⚠️ Note: The restore option on the login page disappears after the first user logs in. This is a security feature to prevent unauthorized data overwrites.
Backups are stored in ./backups/ which is a local folder mount (not a Docker volume). This makes it easy to:
- Access directly - Browse backups in your file manager
- Set up SMB/NFS share - Share the backups/ folder over your network
- Sync to cloud - Use rsync, rclone, or cloud sync tools
- Offsite backup - Copy to external drives or remote servers
Example: Sync to remote server:
rsync -avz ./backups/ user@backup-server:/backups/dayone/
Example: Sync to S3:
aws s3 sync ./backups/ s3://my-bucket/dayone-backups/selfdb-mini/
├── docker-compose.yml # Full stack services
├── README.md # This file
├── restore_from_backup.sh # CLI restore tool
├── .env # Environment configuration
│
├── backups/ # Backup storage (auto-created)
│ └── dayone_backup_*.tar.gz
│
├── backend/ # FastAPI Backend
│ ├── main.py # Application entry point
│ ├── db.py # Database connection
│ ├── security.py # Authentication & authorization
│ ├── pyproject.toml # Python dependencies
│ ├── openapi.json # Generated OpenAPI spec
│ │
│ ├── endpoints/ # API route handlers
│ │ ├── users.py # User CRUD endpoints
│ │ └── tables.py # Table/data CRUD endpoints
│ │
│ ├── models/ # Pydantic models
│ │ ├── user.py # User schemas
│ │ └── table.py # Table schemas
│ │
│ ├── services/ # Business logic services
│ │ └── backup_service.py # Backup/restore operations
│ │
│ ├── locustfile.py # Locust load tests
│ ├── ab_benchmark.sh # Apache Bench tests
│ └── run_schemathesis.sh # API contract tests
│
├── frontend/ # React Frontend
│ ├── src/
│ │ ├── App.tsx # Main app component
│ │ ├── main.tsx # Entry point
│ │ │
│ │ ├── client/ # Generated API client (SDK)
│ │ │ ├── sdk.gen.ts # API functions
│ │ │ └── types.gen.ts # TypeScript types
│ │ │
│ │ ├── components/ # Reusable UI components
│ │ ├── context/ # React context (auth, etc.)
│ │ ├── lib/ # Utilities & constants
│ │ └── pages/ # Page components
│ │
│ ├── package.json # Node dependencies
│ ├── vite.config.ts # Vite configuration
│ ├── tailwind.config.js # Tailwind CSS config
│ └── tsconfig.json # TypeScript config
│
└── pgbouncer-1.25.0/ # PgBouncer source (for Docker build)
When the backend is running, access the interactive API documentation:
| URL | Description |
|---|---|
| http://localhost:8000/docs | Swagger UI (interactive) |
| http://localhost:8000/redoc | ReDoc (read-only) |
| http://localhost:8000/openapi.json | OpenAPI JSON spec |
Authentication:
- All requests require the X-API-Key: Myapi-Key-for-dev header
- Protected endpoints also require an Authorization: Bearer <token> header
- Get a token via POST /users/token with email/password
⚠️ Warning: The steps in this section involve modifying files and rebuilding containers. Make sure to backup any custom configurations before proceeding.
If the PgBouncer container fails to build or start, you can manually download the source and rebuild:
- Download the PgBouncer source tarball:
wget https://www.pgbouncer.org/downloads/files/1.25.0/pgbouncer-1.25.0.tar.gz
- Extract the archive:
tar -xzf pgbouncer-1.25.0.tar.gz
- Copy the Dockerfile and entrypoint script from the existing folder:
cp pgbouncer-1.25.0/Dockerfile pgbouncer-1.25.0-new/Dockerfile
cp pgbouncer-1.25.0/docker-entrypoint.sh pgbouncer-1.25.0-new/docker-entrypoint.sh
Or replace the existing folder entirely:
# Backup existing Docker files
cp pgbouncer-1.25.0/Dockerfile /tmp/Dockerfile.bak
cp pgbouncer-1.25.0/docker-entrypoint.sh /tmp/docker-entrypoint.sh.bak
# Remove old folder and rename new one
rm -rf pgbouncer-1.25.0
mv pgbouncer-1.25.0-extracted pgbouncer-1.25.0
# Restore Docker files
cp /tmp/Dockerfile.bak pgbouncer-1.25.0/Dockerfile
cp /tmp/docker-entrypoint.sh.bak pgbouncer-1.25.0/docker-entrypoint.sh
- Rebuild the PgBouncer container:
docker-compose build pgbouncer
docker-compose up -d pgbouncer
We welcome contributions from the community! Whether it's bug fixes, new features, documentation improvements, or suggestions — all contributions are appreciated.
How to contribute:
- Fork the repository - Click the "Fork" button on GitHub
- Clone your fork - git clone https://github.com/YOUR_USERNAME/selfdb-mini.git
- Create a branch - git checkout -b feature/your-feature-name
- Make your changes - Write code, tests, and documentation
- Commit your changes - git commit -m "Add: your feature description"
- Push to your fork - git push origin feature/your-feature-name
- Open a Pull Request - Submit your PR with a clear description
Guidelines:
- Follow the existing code style and conventions
- Write clear commit messages
- Add tests for new features when applicable
- Update documentation as needed
For major changes, please open an issue first to discuss what you would like to change.
This project is licensed under the MIT License - see the LICENSE file for details.
MIT License - Copyright (c) 2025 SelfDB
You are free to use, modify, and distribute this software for any purpose.