A self-hosted AI chat application that lets you compare responses from multiple AI providers side-by-side. Built with SvelteKit, PostgreSQL, and the Vercel AI SDK.
- Multi-pane chat — send a single prompt and stream responses from different models simultaneously
- Model locking — panes lock to their selected model on first message for consistent comparison
- Conversation history — full persistence with per-model response storage
- API key management — encrypted at rest (AES-256-GCM), configured per provider through the UI
- Auth — email/password authentication with session management
- Local + cloud providers — mix local models (Ollama, LM Studio) with cloud APIs
- Docker support — single `docker compose up` with bundled PostgreSQL
### Ollama — Local
Run open-source models locally. No API key required.
- Base URL: `http://localhost:11434` (configurable via `OLLAMA_BASE_URL`)
- Setup: ollama.com — install and pull any model (e.g. `ollama pull llama3`)
- Models are auto-discovered from your running Ollama instance
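Auto-discovery amounts to listing whatever models the local server reports. A minimal sketch of parsing Ollama's `/api/tags` response — the endpoint and response shape are part of Ollama's REST API, but the function name and fetch wiring are illustrative:

```javascript
// Sketch of model auto-discovery against Ollama's /api/tags endpoint.
// The function name is an illustrative assumption, not this project's code.
function parseOllamaModels(payload) {
  // /api/tags responds with { models: [{ name: "llama3:latest", ... }, ...] }
  return (payload.models ?? []).map((m) => m.name);
}

// In the app this payload would come from something like:
//   fetch(`${process.env.OLLAMA_BASE_URL ?? 'http://localhost:11434'}/api/tags`)
const sample = { models: [{ name: 'llama3:latest' }, { name: 'mistral:7b' }] };
console.log(parseOllamaModels(sample)); // → ['llama3:latest', 'mistral:7b']
```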
### LM Studio — Local
Run local models through LM Studio's OpenAI-compatible server. No API key required.
- Base URL: `http://localhost:1234` (configurable via `LMSTUDIO_BASE_URL`)
- Setup: lmstudio.ai — download models and start the local server
- Models are auto-discovered from your running LM Studio instance
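Because LM Studio's server is OpenAI-compatible, discovery uses the standard `/v1/models` listing rather than an Ollama-specific endpoint. A sketch, with an illustrative function name:

```javascript
// Sketch of discovery against an OpenAI-compatible /v1/models endpoint,
// as served by LM Studio. Function name is illustrative.
function parseOpenAICompatibleModels(payload) {
  // /v1/models responds with { object: "list", data: [{ id: "...", ... }] }
  return (payload.data ?? []).map((m) => m.id);
}

const sample = {
  object: 'list',
  data: [{ id: 'qwen2.5-7b-instruct', object: 'model' }],
};
console.log(parseOpenAICompatibleModels(sample)); // → ['qwen2.5-7b-instruct']
```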
### OpenAI — Cloud
Access GPT-4o, GPT-4, and other OpenAI models.
- API Key: Required (`OPENAI_API_KEY` or configure in Settings)
- Docs: platform.openai.com
### Google Gemini — Cloud
Access Gemini Pro, Flash, and other Google models.
- API Key: Required (`GOOGLE_API_KEY` or configure in Settings)
- Docs: ai.google.dev
### DeepSeek — Cloud
Access DeepSeek-V3, DeepSeek-Coder, and other models.
- API Key: Required (`DEEPSEEK_API_KEY` or configure in Settings)
- Docs: platform.deepseek.com
### OpenRouter — Cloud
Access hundreds of models from multiple providers through a single API.
- API Key: Required (`OPENROUTER_API_KEY` or configure in Settings)
- Docs: openrouter.ai
```bash
# Clone the repository
git clone https://github.com/<your-username>/chatsync.git
cd chatsync

# Copy environment file and configure
cp .env.example .env

# Generate an auth secret
openssl rand -base64 32
# Add the output to AUTH_SECRET in .env

# Start with Docker (includes PostgreSQL)
docker compose up -d

# Or run locally
pnpm install
pnpm db:push
pnpm dev
```

The app will be available at http://localhost:3738.
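For the Docker path, a compose file with bundled PostgreSQL might look roughly like the following — the service names, image tag, credentials, and volume layout are assumptions for illustration, not the project's actual `docker-compose.yml`:

```yaml
# Illustrative sketch only — names, ports, and credentials are assumptions.
services:
  app:
    build: .
    ports:
      - "3738:3738"
    environment:
      DATABASE_URL: postgres://chatsync:chatsync@db:5432/chatsync
      AUTH_SECRET: ${AUTH_SECRET}
    depends_on:
      - db
  db:
    image: postgres:16
    environment:
      POSTGRES_USER: chatsync
      POSTGRES_PASSWORD: chatsync
      POSTGRES_DB: chatsync
    volumes:
      - pgdata:/var/lib/postgresql/data
volumes:
  pgdata:
```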
See `.env.example` for all available configuration options.
| Variable | Required | Description |
|---|---|---|
| `DATABASE_URL` | Yes | PostgreSQL connection string |
| `AUTH_SECRET` | Yes | Session signing secret |
| `OLLAMA_BASE_URL` | No | Ollama server URL (default: `http://localhost:11434`) |
| `LMSTUDIO_BASE_URL` | No | LM Studio server URL (default: `http://localhost:1234`) |
Cloud provider API keys can be configured through the Settings UI or via environment variables (`OPENAI_API_KEY`, `GOOGLE_API_KEY`, `DEEPSEEK_API_KEY`, `OPENROUTER_API_KEY`).
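Putting the variables together, a filled-in `.env` might look like this — the values are placeholders, not working credentials:

```bash
# Illustrative .env sketch — variable names are from the table above,
# values are placeholders.
DATABASE_URL=postgres://user:pass@localhost:5432/chatsync
AUTH_SECRET=paste-output-of-openssl-rand-base64-32-here
OLLAMA_BASE_URL=http://localhost:11434
OPENAI_API_KEY=sk-your-key-here
```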
```bash
pnpm dev      # Start dev server
pnpm build    # Production build
pnpm preview  # Preview production build
pnpm test     # Run unit tests
pnpm lint     # Check formatting and linting
pnpm format   # Auto-fix formatting
pnpm check    # Type checking
```

```bash
pnpm db:generate  # Generate migration files
pnpm db:migrate   # Run migrations
pnpm db:push      # Push schema directly (dev)
pnpm db:studio    # Open Drizzle Studio
```

- Framework: SvelteKit + Svelte 5
- AI SDK: Vercel AI SDK v6
- Database: PostgreSQL + Drizzle ORM
- Auth: better-auth
- Styling: Tailwind CSS v4
- Testing: Vitest + Playwright
MIT
