A minimal public chat-room application. Real-time messages delivered via Server-Sent Events, OAuth login (GitHub, Google, Discord), and link previews — no JavaScript framework required.
- Go — HTTP server, business logic
- HTMX — reactive UI without a JS framework
- Redis — sole data store (messages, sessions, pub/sub)
- SSE — real-time message fan-out
- Go 1.25+
- Docker (for Redis)
- OAuth credentials for at least one provider (GitHub, Google, or Discord)
```
docker run -d -p 6379:6379 redis:alpine
```

Or use the included Compose file, which also spins up RedisInsight:
```
docker compose up redis -d
```

Copy the example environment file:

```
cp .env.example .env
```

Fill in `.env`:
| Variable | Description |
|---|---|
| `REDIS_URL` | Redis connection URL, e.g. `redis://localhost:6379` |
| `SESSION_SECRET` | Random 32-byte hex string (`openssl rand -hex 32`) |
| `BASE_URL` | Public-facing base URL, e.g. `http://localhost:8080` |
| `PORT` | HTTP port (default `8080`) |
| `GITHUB_CLIENT_ID` / `GITHUB_CLIENT_SECRET` | GitHub OAuth app credentials |
| `GOOGLE_CLIENT_ID` / `GOOGLE_CLIENT_SECRET` | Google OAuth credentials |
| `DISCORD_CLIENT_ID` / `DISCORD_CLIENT_SECRET` | Discord application credentials |
| `OPEN_REGISTRATION` | `true` = anyone may log in; `false` (default) = allowlist only |
| `ALLOW_LIST` | Comma-separated emails allowed when `OPEN_REGISTRATION=false` |
| `MICROLINK_API_KEY` | Optional; only needed above the free tier |
| `S3_ENDPOINT` | S3-compatible endpoint, e.g. `https://s3.example.com`; omit to disable media uploads |
| `S3_BUCKET` | Bucket name for uploaded media |
| `S3_REGION` | Region string (MinIO accepts any value, e.g. `us-east-1`) |
| `S3_ACCESS_KEY_ID` | S3 access key ID |
| `S3_SECRET_ACCESS_KEY` | S3 secret access key |
OAuth callback URLs to register with each provider:
```
http://localhost:8080/auth/github/callback
http://localhost:8080/auth/google/callback
http://localhost:8080/auth/discord/callback
```
Media uploads (paste to send images/video) require an S3-compatible object store such as MinIO. The steps below use mc (the MinIO CLI client) and assume:
- MinIO is reachable at `https://s3.example.com`
- The app is at `https://msg.example.com`

Install `mc`:

```
# macOS
brew install minio/stable/mc

# Linux
curl https://dl.min.io/client/mc/release/linux-amd64/mc \
  -o /usr/local/bin/mc && chmod +x /usr/local/bin/mc
```

On Arch Linux the package is `mcli` (naming conflict with Midnight Commander). Set `MC_CONFIG_DIR="${XDG_CONFIG_HOME}/mc"` to keep config out of `~/.mc`.
Register your MinIO instance as an alias:

```
mc alias set myminio https://s3.example.com ACCESS_KEY_ID SECRET_ACCESS_KEY
```

Create the bucket and make it publicly readable:

```
mc mb myminio/msg-media
mc anonymous set download myminio/msg-media
```

Apply the CORS policy. For MinIO, this can be done by setting an environment variable in your Compose file:

```yaml
services:
  storage:
    environment:
      MINIO_API_CORS_ALLOWED_ORIGINS: "http://localhost:8080"
```

Add to `.env`:
```
S3_ENDPOINT=https://s3.example.com
S3_BUCKET=msg-media
S3_REGION=us-east-1
S3_ACCESS_KEY_ID=your_access_key_id
S3_SECRET_ACCESS_KEY=your_secret_access_key
```

`S3_REGION` can be any non-empty string; MinIO ignores it. If `S3_ENDPOINT` is not set, the upload route is not registered and the paste-to-upload handler silently does nothing.
```
export $(grep -v '^#' .env | xargs)
go run ./...
```

Open http://localhost:8080.
The project ships with an Air config for hot-reload during development:
```
go install github.com/air-verse/air@latest
air -c .air.toml
```

Or start everything via Docker Compose (uses Air inside the container):

```
docker compose up
```

No external services required — tests use an in-process Redis (miniredis).
```
go test ./... -race -timeout 60s -count=1 -short
```

```
npm run lint
```

Requires headless Chromium (go-rod will find it automatically if installed).

```
go test ./internal/browser/... -v -timeout 120s
```

To run with a visible browser window — useful for debugging or watching tests execute:

```
HEADLESS=false go test ./internal/browser/... -v -timeout 120s
```

To run a single test:

```
HEADLESS=false go test ./internal/browser/... -v -timeout 120s -run TestThemeToggle_DarkOS
```

```
make test
```

This runs lint, unit/integration tests, and E2E browser tests in sequence.
Production images are published to the GitHub Container Registry on every push:
```
ghcr.io/emilhauk/msg:<branch>
ghcr.io/emilhauk/msg:<short-sha>
```
Pull and run:
```
docker run -p 8080:8080 --env-file .env \
  -e REDIS_URL=redis://your-redis:6379 \
  ghcr.io/emilhauk/msg:main
```

```
.
├── main.go              # Entry point: routes, server startup, room seeding
├── internal/
│   ├── auth/            # OAuth flow and signed-cookie sessions
│   ├── handler/         # HTTP handlers (rooms, messages, SSE)
│   ├── middleware/      # Session validation
│   ├── model/           # Shared structs
│   ├── redis/           # Typed Redis helpers
│   └── tmpl/            # Template rendering
└── web/
    ├── templates/       # HTML templates (base layout, room, message partials)
    └── static/          # CSS
```
msg uses Server-Sent Events for real-time delivery. SSE connections are long-lived and streaming, which requires specific reverse proxy configuration — the defaults are typically wrong.
Without correct configuration you may see ERR_HTTP2_PROTOCOL_ERROR in the browser console and messages silently failing to appear (until the connection recovers on reconnect).
The SSE endpoint (/rooms/*/events) needs its own location block with buffering disabled and an extended read timeout. The upstream connection must use HTTP/1.1 (nginx defaults to HTTP/1.0 for proxied requests, which does not support keep-alive streaming).
```nginx
server {
    listen 443 ssl;
    http2 on;

    # ... ssl_certificate, server_name, etc.

    # SSE endpoint — disable buffering, extend timeout, use HTTP/1.1 upstream
    location ~ ^/rooms/[^/]+/events$ {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;

        proxy_http_version 1.1;
        proxy_set_header Connection '';
        proxy_buffering off;
        proxy_cache off;
        proxy_read_timeout 3600s;
        chunked_transfer_encoding on;
    }

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

For other reverse proxies (Caddy, Traefik, HAProxy, etc.), consult their documentation for SSE or long-lived streaming connections — the same principles apply: disable response buffering and set a long (or unlimited) upstream read timeout.
- SSE reconnect recovery — on reconnect, the client fetches the 50 newest messages and merges them into the view. If more than 50 messages were sent during the gap, only the 50 most recent are restored; earlier messages in the gap are not surfaced automatically but remain accessible via scrollback.