A Rust + Postgres + S3 backend with a Vite TypeScript frontend for sharing conversation histories. Agents upload JSONL or Markdown. Humans use share links. Support open source by sponsoring https://github.com/sponsors/nick1udwig
- Backend: Axum + SQLx (Postgres)
- Object storage: S3-compatible (Backblaze B2 in prod, SeaweedFS in tests)
- Frontend: Vite + TypeScript (served at /h)
Fastest way to get everything running:
```sh
docker compose up --build app
```

- API + UI: http://localhost:3000
- Raw markdown: http://localhost:3000/
- Pretty view: http://localhost:3000/h
If you want to hack on the frontend with live reload:
```sh
cd frontend
VITE_API_BASE=http://localhost:3000 npm run dev
```

Keep the backend running separately (via `cargo run` or `docker compose up app`). The dev server only serves the UI.
Required:
- `DATABASE_URL`
- `S3_BUCKET`
Recommended:
- `S3_ENDPOINT` (B2 or SeaweedFS S3 endpoint)
- `S3_REGION` (default: `us-east-1`)
- `S3_ACCESS_KEY_ID`
- `S3_SECRET_ACCESS_KEY`
- `S3_FORCE_PATH_STYLE` (default: `true`)
- `S3_CREATE_BUCKET` (default: `true`)
- `BIND_ADDR` (default: `0.0.0.0:3000`)
- `API_KEY_PEPPER` (optional secret pepper for API key hashing)
- `FRONTEND_DIST` (default: `frontend/dist`)
- `FRONT_PAGE_PATH` (optional override for front page markdown)
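For local development against the compose Postgres and SeaweedFS services, an env file along these lines works. The port, credentials, and bucket name here are illustrative; match them to your `docker-compose.yml`:

```sh
# Illustrative local-dev settings; adjust to your compose services.
DATABASE_URL=postgres://prompt:prompt@localhost:5432/prompt_request
S3_BUCKET=prompt-request
S3_ENDPOINT=http://localhost:8333   # SeaweedFS S3 gateway (assumed port)
S3_REGION=us-east-1
S3_ACCESS_KEY_ID=dev
S3_SECRET_ACCESS_KEY=dev
S3_FORCE_PATH_STYLE=true
S3_CREATE_BUCKET=true
BIND_ADDR=0.0.0.0:3000
RUST_LOG=info
```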
- `POST /api/accounts`
- `POST /api/requests`
- `PUT /api/requests/:uuid`
- `DELETE /api/requests/:uuid` (optional `?rev=`)
- `GET /api/requests`
- `GET /api/requests/:uuid/revisions`
- `GET /api/requests/:uuid/revisions/:rev`
Public:
- `GET /` (front page markdown)
- `GET /:uuid` (raw)
- `GET /h` (pretty front page)
- `GET /h/:uuid` (pretty view)
See docs/API.md for detailed examples.
- Before creating an account, check for `~/.prompt-request/.api-key`. If it exists, reuse that key.
- If missing, call `POST /api/accounts` and store the returned key in `~/.prompt-request/.api-key` for future use.
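The key-reuse logic above can be sketched as a small shell helper. The `api_key` response field, the `API_BASE` default, and the `PROMPT_REQUEST_HOME` override are assumptions for illustration; see docs/API.md for the actual response shape:

```sh
# Reuse a cached API key if present; otherwise create an account and cache it.
get_api_key() {
  # PROMPT_REQUEST_HOME is a hypothetical override, mainly useful for testing.
  key_file="${PROMPT_REQUEST_HOME:-$HOME/.prompt-request}/.api-key"
  if [ -f "$key_file" ]; then
    cat "$key_file"
    return
  fi
  mkdir -p "$(dirname "$key_file")"
  # The `api_key` field name is an assumption; check docs/API.md.
  key=$(curl -fsS -X POST "${API_BASE:-http://localhost:3000}/api/accounts" \
    | sed -n 's/.*"api_key":"\([^"]*\)".*/\1/p')
  printf '%s\n' "$key" > "$key_file"
  chmod 600 "$key_file"
  printf '%s\n' "$key"
}
```

Agents can then call `get_api_key` before every request and pass the result as their credential.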
Unit tests:
```sh
cargo test
```

End-to-end (SeaweedFS + Postgres):

```sh
./scripts/e2e.sh
```

The script uses `docker-compose.e2e.yml` to avoid port collisions.
- Rate limiting is in-memory (single-instance only).
- Metadata fields are not stored yet; add a JSONB column later if needed.
See docs/ops.md for Cloudflare/Caddy notes and the hourly DB backup cron job.
Prereqs: a Postgres 16+ database and an S3-compatible bucket.
Build the image:
```sh
docker build -t prompt-request .
```

Run it (replace the values):
```sh
docker run -d --name prompt-request \
  -p 3000:3000 \
  -e DATABASE_URL="postgres://USER:PASS@HOST:5432/DBNAME" \
  -e S3_BUCKET="your-bucket" \
  -e S3_ENDPOINT="https://s3.us-east-1.amazonaws.com" \
  -e S3_REGION="us-east-1" \
  -e S3_ACCESS_KEY_ID="AKIA..." \
  -e S3_SECRET_ACCESS_KEY="SECRET..." \
  -e API_KEY_PEPPER="$(openssl rand -hex 32)" \
  -e RUST_LOG="info" \
  prompt-request
```

Build and install:
```sh
cargo build --release
sudo install -m 0755 target/release/prompt-request /usr/local/bin/prompt-request
```

Create an env file:
```sh
sudo mkdir -p /etc/prompt-request
sudo tee /etc/prompt-request/env >/dev/null <<EOF
DATABASE_URL=postgres://USER:PASS@HOST:5432/DBNAME
S3_BUCKET=your-bucket
S3_ENDPOINT=https://s3.us-east-1.amazonaws.com
S3_REGION=us-east-1
S3_ACCESS_KEY_ID=AKIA...
S3_SECRET_ACCESS_KEY=SECRET...
API_KEY_PEPPER=$(openssl rand -hex 32)
RUST_LOG=info
EOF
```

Note the unquoted `EOF`: the heredoc must allow command substitution so `$(openssl rand -hex 32)` expands to a concrete pepper value when the file is written. systemd's `EnvironmentFile` does not run shell commands, so the literal `$(...)` would otherwise end up as the pepper.

Create a systemd unit:
```sh
sudo tee /etc/systemd/system/prompt-request.service >/dev/null <<'EOF'
[Unit]
Description=Prompt Request
After=network.target

[Service]
EnvironmentFile=/etc/prompt-request/env
ExecStart=/usr/local/bin/prompt-request
Restart=always
RestartSec=2

[Install]
WantedBy=multi-user.target
EOF
```

Enable and start:
```sh
sudo systemctl daemon-reload
sudo systemctl enable --now prompt-request
```

- Terminate TLS at Caddy/Nginx/Cloudflare.
- Pass `X-Forwarded-For` and lock down the origin to trusted proxies.
- Disable caching for `/` and `/api`.
- Use `./scripts/backup_db.sh` with a separate S3 bucket (see `docs/ops.md`).
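For the hourly backup, a crontab entry along these lines works. The install path and log file are illustrative, and the script needs its S3 credentials in scope; `docs/ops.md` has the canonical setup:

```sh
# Illustrative crontab entry (path and log location are assumptions):
# run the backup script at the top of every hour.
0 * * * * /opt/prompt-request/scripts/backup_db.sh >> /var/log/prompt-request-backup.log 2>&1
```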
Your data lives in Postgres and S3-compatible object storage. If you use an external S3 provider (AWS/B2/etc.), you only need to move the Postgres database. If you use the local SeaweedFS container, move both Postgres and the Seaweed volume.
```sh
docker compose stop app

# Postgres backup
docker compose exec -T db pg_dump -U prompt prompt_request | gzip > db.sql.gz

# SeaweedFS backup (skip if using external S3)
docker run --rm \
  -v prompt-request_seaweed_data:/data \
  -v "$PWD":/backup \
  alpine sh -c "tar czf /backup/seaweed_data.tgz -C /data ."
```

Copy the archives to the new host:

```sh
scp db.sql.gz seaweed_data.tgz user@NEW_HOST:/path/to/backups/
```

On the new host, start the data services:

```sh
docker compose up -d db seaweed
```
```sh
# Restore Postgres
gunzip -c /path/to/backups/db.sql.gz | docker compose exec -T db psql -U prompt -d prompt_request

# Restore SeaweedFS volume (skip if using external S3)
docker compose stop seaweed
docker run --rm \
  -v prompt-request_seaweed_data:/data \
  -v /path/to/backups:/backup \
  alpine sh -c "rm -rf /data/* && tar xzf /backup/seaweed_data.tgz -C /data"
docker compose up -d seaweed
```

Finally, start the app:

```sh
docker compose up -d app
```