Backup and restore PostgreSQL to S3/R2/S3-compatible storage with scheduled backups, encryption, and multiple database support.
- **Multiple Database Support** - Backup multiple databases in one container
- **Multi-Storage Support** - AWS S3, Cloudflare R2, or any S3-compatible service
- **Scheduled Backups** - Flexible cron scheduling
- **Encryption** - AES-256-CBC encryption support
- **Compression** - Built-in gzip/pigz compression or PostgreSQL custom format
- **Auto Cleanup** - Automatic deletion of old backups
- **Parallel Backups** - Optional parallel backup for multiple databases
- **Fast & Lightweight** - Built with Bun runtime
```yaml
version: '3.8'

services:
  postgres:
    image: postgres:17-alpine
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
    volumes:
      - postgres-data:/var/lib/postgresql/data
    ports:
      - "5432:5432"

  postgres-backup:
    image: ghcr.io/johnnybui/postgres-backup-s3
    depends_on:
      - postgres
    environment:
      # Storage Configuration
      STORAGE_TYPE: S3
      S3_REGION: ap-southeast-1
      S3_BUCKET: my-db-backups
      S3_PREFIX: postgres
      S3_ACCESS_KEY_ID: ${AWS_ACCESS_KEY_ID}
      S3_SECRET_ACCESS_KEY: ${AWS_SECRET_ACCESS_KEY}

      # Postgres Configuration
      POSTGRES_HOST: postgres
      POSTGRES_PORT: 5432
      POSTGRES_DATABASE: myapp
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres

      # Backup Schedule
      SCHEDULE: "@daily" # or "0 2 * * *" for 2 AM daily

volumes:
  postgres-data:
```

Run it:

```bash
docker compose up -d
```
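To watch backup activity and confirm the container started, tail its logs (a standard Docker Compose command; the service name comes from the example above):

```bash
# Follow the scheduler and backup output
docker compose logs -f postgres-backup
```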
AWS S3:

```yaml
environment:
  STORAGE_TYPE: S3
  S3_REGION: us-west-1
  S3_BUCKET: my-backups
  S3_ACCESS_KEY_ID: AKIAXXXXXXXX
  S3_SECRET_ACCESS_KEY: xxxxxxxxxx
```

Cloudflare R2:

```yaml
environment:
  STORAGE_TYPE: R2
  R2_ACCOUNT_ID: 1234567890abcdef1234567890abcdef # Example Cloudflare Account ID (32 chars)
  S3_BUCKET: my-backups
  S3_ACCESS_KEY_ID: your-r2-key
  S3_SECRET_ACCESS_KEY: your-r2-secret
```

Works with Minio, DigitalOcean Spaces, Wasabi, Backblaze B2, and more. For example, DigitalOcean Spaces:

```yaml
environment:
  STORAGE_TYPE: COMPATIBLE
  S3_ENDPOINT: https://sgp1.digitaloceanspaces.com
  S3_REGION: sgp1
  S3_BUCKET: my-spaces-bucket
  S3_ACCESS_KEY_ID: your-key
  S3_SECRET_ACCESS_KEY: your-secret
```

Self-hosted Minio example:

```yaml
version: '3.8'
services:
  minio:
    image: minio/minio
    command: server /data --console-address ":9001"
    environment:
      MINIO_ROOT_USER: minioadmin
      MINIO_ROOT_PASSWORD: minioadmin
    ports:
      - "9000:9000"
      - "9001:9001"
    volumes:
      - minio-data:/data

  postgres-backup:
    image: ghcr.io/johnnybui/postgres-backup-s3
    depends_on:
      - minio
    environment:
      STORAGE_TYPE: COMPATIBLE
      S3_ENDPOINT: http://minio:9000
      S3_REGION: us-east-1
      S3_BUCKET: postgres-backups
      S3_ACCESS_KEY_ID: minioadmin
      S3_SECRET_ACCESS_KEY: minioadmin
      POSTGRES_HOST: postgres
      POSTGRES_DATABASE: myapp
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
      SCHEDULE: "@hourly"

volumes:
  minio-data:
```

Supported providers:

| Provider | STORAGE_TYPE | Required Env Vars | Notes |
|---|---|---|---|
| AWS S3 | `S3` | `S3_REGION`, `S3_BUCKET`, credentials | Standard AWS S3, most reliable |
| Cloudflare R2 | `R2` | `R2_ACCOUNT_ID`, `S3_BUCKET`, credentials | Cheaper, no egress fees |
| Minio | `COMPATIBLE` | `S3_ENDPOINT`, `S3_REGION`, `S3_BUCKET`, credentials | Self-hosted, great for local dev |
| DigitalOcean Spaces | `COMPATIBLE` | `S3_ENDPOINT`, `S3_REGION`, `S3_BUCKET`, credentials | Regional endpoints |
| Wasabi | `COMPATIBLE` | `S3_ENDPOINT`, `S3_REGION`, `S3_BUCKET`, credentials | Cheaper storage alternative |
| Backblaze B2 | `COMPATIBLE` | `S3_ENDPOINT`, `S3_REGION`, `S3_BUCKET`, credentials | Via S3-compatible API |
Backup multiple databases in a single container by providing a comma-separated list:
```yaml
environment:
  # Multiple databases (comma-separated)
  POSTGRES_DATABASE: "app_db,analytics_db,logs_db"

  # Optional: parallel backup for speed (default: no)
  PARALLEL_BACKUP: "no" # or "yes"
```

Sequential (default):
- Safer, less load on database
- Backups run one after another
- Easier to debug if errors occur
Parallel:
- Faster for multiple large databases
- Higher load on database server
- Set `PARALLEL_BACKUP=yes` to enable (see the example below)
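For a quick test, the manual-trigger pattern from the on-demand backup section can be combined with an environment override, so a parallel run doesn't require editing the compose file (a sketch; assumes the stack is already running):

```bash
# One-off parallel backup of the configured databases
docker compose exec -e PARALLEL_BACKUP=yes postgres-backup /backup.sh
```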
To back up every database on the server, use the special value `all`:

```yaml
environment:
  POSTGRES_DATABASE: "all" # Backup all databases
```

The `SCHEDULE` variable uses cron syntax:

```yaml
# Predefined schedules
SCHEDULE: "@hourly" # Every hour
SCHEDULE: "@daily" # Every day at midnight
SCHEDULE: "@weekly" # Every week
SCHEDULE: "@monthly" # Every month
# Custom cron syntax (minute hour day month weekday)
SCHEDULE: "0 2 * * *" # Every day at 2 AM
SCHEDULE: "0 */6 * * *" # Every 6 hours
SCHEDULE: "0 3 * * 0" # Every Sunday at 3 AM
SCHEDULE: "30 4 1 * *" # 1st day of month at 4:30 AM
```

Leave empty or set to **None** for a one-time backup.
Encrypt backups with AES-256-CBC:
```yaml
environment:
  ENCRYPTION_PASSWORD: "your-super-strong-password"
```

Encrypted backups have a `.enc` extension and require the same password for restore.
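Any sufficiently long random string works as a password; one way to generate one (plain OpenSSL, unrelated to this tool's internals):

```bash
# 32 random bytes, base64-encoded
openssl rand -base64 32
```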
Gzip compression (default):

```yaml
environment:
  USE_CUSTOM_FORMAT: "no"
  COMPRESSION_CMD: "gzip" # or "pigz" for parallel compression
  DECOMPRESSION_CMD: "gunzip -c" # or "pigz -dc"
```

PostgreSQL custom format:

```yaml
environment:
  USE_CUSTOM_FORMAT: "yes"
  PARALLEL_JOBS: 4 # For faster parallel restore
```

Benefits:
- Faster backups
- Smaller backup files
- Supports parallel restoration
- Allows selective table/schema restore (see the sketch below)
Note: Custom format is not available with `POSTGRES_DATABASE=all`.
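Selective restore is a standard `pg_restore` capability, so it can also be run outside the container once a custom-format dump has been downloaded (and decrypted, if needed). A minimal sketch with hypothetical host, database, table, and file names:

```bash
# Restore only the "orders" table from a custom-format dump
pg_restore --host localhost --username postgres --dbname myapp \
  --table orders app_db.dump
```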
Automatically delete backups older than specified time:
```yaml
environment:
  DELETE_OLDER_THAN: "30 days ago" # or "7 days ago", "1 week ago", etc.
```

Warning: cleanup removes old objects under the `S3_PREFIX` path, not just backups created by this tool.
You can manually trigger a backup at any time (useful for testing or on-demand backups):
Method 1 - exec into the running container:

```bash
# Backup databases configured in POSTGRES_DATABASE env var
docker compose exec postgres-backup /backup.sh
# Override to backup specific databases (comma-separated)
docker compose exec -e POSTGRES_DATABASE="db1,db2,db3" postgres-backup /backup.sh
# Override to backup specific databases (space-separated)
docker compose exec -e POSTGRES_DATABASE="db1 db2 db3" postgres-backup /backup.sh
# From host machine (if container is named 'postgres-backup')
docker exec postgres-backup /backup.sh
# Override from host
docker exec -e POSTGRES_DATABASE="specific_db" postgres-backup /backup.sh
```

Method 2 - run a one-off container:

```bash
# Unset SCHEDULE to run a one-time backup via the scheduler
docker compose run --rm -e SCHEDULE="" postgres-backup
# Or override specific databases
docker compose run --rm \
  -e SCHEDULE="" \
  -e POSTGRES_DATABASE="db1,db2" \
  postgres-backup
```

Notes:
- Method 1 calls the backup script directly in the running container (instant)
- Method 2 starts a new container, runs backup, then exits (slower startup)
- The backup script automatically detects and loops through multiple databases
- Works with both comma-separated and space-separated database lists
- Each database is backed up sequentially with clear progress indicators
- If `SCHEDULE` is set and you call the binary directly, it will wait for cron (not instant)
Use `docker compose run` to restore a specific backup:
```bash
docker compose run --rm \
  -e BACKUP_FILE="postgres/app_db_2025-09-27T03:51:37Z.sql.gz" \
  -e POSTGRES_DATABASE="app_db" \
  postgres-backup
```

All other environment variables (storage config, credentials) are inherited from `docker-compose.yml`.
List available backups:

```bash
# Use AWS CLI to list backups
docker compose run --rm postgres-backup \
  sh -c 'aws $AWS_ARGS s3 ls s3://$S3_BUCKET/$S3_PREFIX/'
```

Full restore with drop and recreate:

```bash
docker compose run --rm \
  -e BACKUP_FILE="postgres/mydb_2025-09-27T03:51:37Z.sql.gz" \
  -e POSTGRES_DATABASE="mydb" \
  -e DROP_DATABASE="yes" \
  -e CREATE_DATABASE="yes" \
  -e POSTGRES_EXTRA_OPTS="" \
  postgres-backup
```

Restore to a different database:

```bash
docker compose run --rm \
  -e BACKUP_FILE="postgres/prod_db_2025-09-27T03:51:37Z.sql.gz" \
  -e POSTGRES_DATABASE="staging_db" \
  -e CREATE_DATABASE="yes" \
  postgres-backup
```

Restore an encrypted backup:

```bash
# Encryption password inherited from docker-compose.yml
docker compose run --rm \
  -e BACKUP_FILE="postgres/mydb_2025-09-27T03:51:37Z.sql.gz.enc" \
  -e POSTGRES_DATABASE="mydb" \
  postgres-backup
```

Restore to a different PostgreSQL host:

```bash
docker compose run --rm \
  -e BACKUP_FILE="postgres/app_db_2025-09-27T03:51:37Z.sql.gz" \
  -e POSTGRES_DATABASE="app_db" \
  -e POSTGRES_HOST="prod-db.example.com" \
  -e POSTGRES_USER="prod_user" \
  -e POSTGRES_PASSWORD="prod_password" \
  postgres-backup
```

Restore notes:

- Set `POSTGRES_EXTRA_OPTS=""` if the backup was created with the `--clean` flag
- Use `CREATE_DATABASE=yes` when restoring to a non-existent database
- Use `DROP_DATABASE=yes` with caution - it destroys existing data
- Encrypted backups require the same `ENCRYPTION_PASSWORD`
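After a restore finishes, a quick sanity check is to list the tables in the target database (hypothetical service and database names, matching the examples above):

```bash
# List tables in the restored database
docker compose exec postgres psql -U postgres -d app_db -c '\dt'
```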
Common storage settings:

| Variable | Default | Required | Description |
|---|---|---|---|
| `STORAGE_TYPE` | `S3` | No | Storage type: `S3`, `R2`, or `COMPATIBLE` |
| `S3_BUCKET` | | Yes | Bucket name |
| `S3_PREFIX` | `backup` | No | Path prefix in bucket |
| `S3_ACCESS_KEY_ID` | | Yes | Access key |
| `S3_SECRET_ACCESS_KEY` | | Yes | Secret key |
AWS S3 (`STORAGE_TYPE=S3`):

| Variable | Default | Required | Description |
|---|---|---|---|
| `S3_REGION` | `us-west-1` | No | AWS region |
Cloudflare R2 (`STORAGE_TYPE=R2`):

| Variable | Default | Required | Description |
|---|---|---|---|
| `R2_ACCOUNT_ID` | | Yes | Cloudflare account ID |
S3-compatible (`STORAGE_TYPE=COMPATIBLE`):

| Variable | Default | Required | Description |
|---|---|---|---|
| `S3_ENDPOINT` | | Yes | Full endpoint URL |
| `S3_REGION` | `us-east-1` | No | Region |
PostgreSQL connection:

| Variable | Default | Required | Description |
|---|---|---|---|
| `POSTGRES_DATABASE` | | Yes | Database name(s), comma-separated, or `all` |
| `POSTGRES_HOST` | | Yes | PostgreSQL host |
| `POSTGRES_PORT` | `5432` | No | PostgreSQL port |
| `POSTGRES_USER` | | Yes | PostgreSQL user |
| `POSTGRES_PASSWORD` | | Yes | PostgreSQL password |
| `POSTGRES_EXTRA_OPTS` | | No | Extra pg_dump/psql options (e.g., `--clean --if-exists`) |
Backup settings:

| Variable | Default | Required | Description |
|---|---|---|---|
| `SCHEDULE` | | No | Cron schedule or `@daily`, `@hourly`, etc. |
| `PARALLEL_BACKUP` | `no` | No | Parallel backup for multiple databases (`yes`/`no`) |
| `ENCRYPTION_PASSWORD` | | No | Password for encryption |
| `DELETE_OLDER_THAN` | | No | Auto-delete backups older than this (e.g., `30 days ago`) |
| `USE_CUSTOM_FORMAT` | `no` | No | Use PostgreSQL custom format (`yes`/`no`) |
| `COMPRESSION_CMD` | `gzip` | No | Compression command (e.g., `pigz`) |
| `DECOMPRESSION_CMD` | `gunzip -c` | No | Decompression command |
Restore settings:

| Variable | Default | Required | Description |
|---|---|---|---|
| `BACKUP_FILE` | | For restore | Path to backup file in bucket (e.g., `backup/db_2025-09-27.sql.gz`) |
| `CREATE_DATABASE` | `no` | No | Create database if it does not exist (`yes`/`no`) |
| `DROP_DATABASE` | `no` | No | Drop database before restore (`yes`/`no`) |
| `PARALLEL_JOBS` | `1` | No | Parallel jobs for pg_restore with custom format |
Advanced:

| Variable | Default | Required | Description |
|---|---|---|---|
| `S3_S3V4` | `no` | No | Use AWS Signature Version 4 (for Minio, set to `yes`) |
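If an S3-compatible endpoint (Minio in particular) rejects uploads with signature errors, enabling V4 signing on a one-off run is an easy test; `SCHEDULE` is emptied so the container backs up immediately and exits:

```bash
# One-time backup with AWS Signature Version 4 enabled
docker compose run --rm -e S3_S3V4=yes -e SCHEDULE="" postgres-backup
```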
Full production example:

```yaml
version: '3.8'

services:
  postgres:
    image: postgres:17-alpine
    restart: unless-stopped
    environment:
      POSTGRES_USER: ${DB_USER:-postgres}
      POSTGRES_PASSWORD: ${DB_PASSWORD}
    volumes:
      - postgres-data:/var/lib/postgresql/data
    networks:
      - db-network
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 10s
      timeout: 5s
      retries: 5

  postgres-backup:
    image: ghcr.io/johnnybui/postgres-backup-s3
    restart: unless-stopped
    depends_on:
      postgres:
        condition: service_healthy
    environment:
      # Storage - Cloudflare R2
      STORAGE_TYPE: R2
      R2_ACCOUNT_ID: ${R2_ACCOUNT_ID}
      S3_BUCKET: ${BACKUP_BUCKET}
      S3_PREFIX: postgres-prod
      S3_ACCESS_KEY_ID: ${R2_ACCESS_KEY}
      S3_SECRET_ACCESS_KEY: ${R2_SECRET_KEY}

      # Multiple Databases
      POSTGRES_HOST: postgres
      POSTGRES_PORT: 5432
      POSTGRES_DATABASE: "app_production,analytics,logs"
      POSTGRES_USER: ${DB_USER:-postgres}
      POSTGRES_PASSWORD: ${DB_PASSWORD}
      POSTGRES_EXTRA_OPTS: "--clean --if-exists"

      # Backup Settings
      SCHEDULE: "0 3 * * *" # 3 AM daily
      PARALLEL_BACKUP: "no"
      USE_CUSTOM_FORMAT: "yes"
      ENCRYPTION_PASSWORD: ${BACKUP_ENCRYPTION_KEY}
      DELETE_OLDER_THAN: "30 days ago"
    networks:
      - db-network

networks:
  db-network:
    driver: bridge

volumes:
  postgres-data:
```

Create `.env` file:

```env
DB_USER=postgres
DB_PASSWORD=super-secure-password
R2_ACCOUNT_ID=abc123def456
R2_ACCESS_KEY=your-r2-key
R2_SECRET_KEY=your-r2-secret
BACKUP_BUCKET=prod-db-backups
BACKUP_ENCRYPTION_KEY=ultra-secure-encryption-key
```

Development:

```bash
# Install dependencies
bun install
# Run locally
bun run src/index.ts
# Build
bun run build
```

Build the Docker image:

```bash
docker build -f Dockerfile -t postgres-backup-s3 .
```
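To smoke-test the locally built image, run it once against a reachable database; since an empty `SCHEDULE` means a one-time backup, a sketch with hypothetical credentials and bucket (`host.docker.internal` works on Docker Desktop; use your host's address elsewhere):

```bash
# One-shot backup using the locally built image
docker run --rm \
  -e STORAGE_TYPE=S3 \
  -e S3_REGION=us-west-1 \
  -e S3_BUCKET=my-backups \
  -e S3_ACCESS_KEY_ID=your-key \
  -e S3_SECRET_ACCESS_KEY=your-secret \
  -e POSTGRES_HOST=host.docker.internal \
  -e POSTGRES_DATABASE=myapp \
  -e POSTGRES_USER=postgres \
  -e POSTGRES_PASSWORD=postgres \
  postgres-backup-s3
```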
MIT License

Copyright for portions of project postgres-backup-s3 is held by ITBM, 2019, as part of project postgresql-backup-s3. All other copyright for project postgres-backup-s3 is held by johnnybui, 2025.
See LICENSE for full details.
This project is inspired by and based on itbm/postgresql-backup-s3.
Key differences from the original:

- Built with Bun + TypeScript (fast, modern)
- Multiple database support
- Native Cloudflare R2 support
- S3-compatible service support (Minio, DigitalOcean Spaces, etc.)
- Parallel backup option
- Comprehensive error handling
- Production-ready Docker Compose examples
Contributions welcome! Please feel free to submit a Pull Request.
For issues, questions, or feature requests, please open an issue on GitHub.