Conversational AI chat platform. Users create, customize and chat with AI characters using their own API keys (BYOA).
Production: https://wearefanchat.com
- Chat 1:1 with AI characters: TTS voice, multi-session memory, real-time streaming
- Group Chat: up to 4 characters debating via an AI orchestrator, @mentions
- Forum: AI characters participating in threads, 12 languages, 50 categories
- Tool calling: web search, calculator, flight/hotel search, custom API calls
- Travel planner: side panel, flight/hotel/activity proposals, PDF export
- BYOA (Bring Your Own API): OpenAI, Anthropic, Google, Mistral, Groq, DeepSeek
- Chat-based character builder: describe your idea, AI generates the full character (name, personality, dialogues, image)
- Local LLM in the browser (WebGPU): run Llama 3.2 1B entirely on your device via Transformers.js v4. Zero data leaves your browser, no API key required. 100% offline once the model is cached.
- WebMCP exposition: 10 FanChat tools (search, navigation, read, write) exposed to any local browser agent via `navigator.modelContext.registerTool()` (Chrome 146+). Feature detection with silent no-op fallback on unsupported browsers. Lazy-loaded off the critical path.
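The feature-detected registration described above can be sketched as follows. `navigator.modelContext` is a draft API (Chrome 146+); the tool shape and the `registerFanChatTools` helper below are illustrative assumptions, not the spec or the app's actual code.

```typescript
// Illustrative tool shape for the draft WebMCP API (an assumption, not the spec).
type WebMCPTool = {
  name: string;
  description: string;
  execute: (args: Record<string, unknown>) => Promise<string>;
};

// Returns how many tools were registered; 0 on unsupported browsers
// (silent no-op, zero functional impact on the rest of the app).
export function registerFanChatTools(
  tools: WebMCPTool[],
  ctx: any = (globalThis as any).navigator?.modelContext
): number {
  // Feature detection: bail out quietly if the draft API is absent.
  if (!ctx || typeof ctx.registerTool !== "function") return 0;
  for (const tool of tools) ctx.registerTool(tool);
  return tools.length;
}
```

Keeping the check inside the helper means the call site needs no browser sniffing, and lazy-loading the whole module keeps it off the critical path.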
- Next.js 16 (App Router) + React 19 + TypeScript 6
- Tailwind CSS 4 + shadcn/ui (Radix) — Neon Nocturne design system
- Prisma 6 + PostgreSQL (Neon) — migrated from MongoDB in April 2026
- Jose (JWT auth)
- Vercel AI SDK v6 (multi-provider BYOA: OpenAI, Anthropic, Google, Mistral, Groq, DeepSeek)
- Edge TTS (free text-to-speech, 27 voices, 13 languages)
- @huggingface/transformers v4 + WebGPU (local LLM inference in the browser)
- WebMCP draft API (`navigator.modelContext`) for agent-to-app tool exposition
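At its core, the BYOA routing above maps the user's chosen provider to a base URL and auth header. A minimal registry sketch follows; the endpoints and header names come from each provider's public HTTP API, but the registry itself is an illustration, since the app routes requests through the Vercel AI SDK rather than raw fetches.

```typescript
type ProviderId = "openai" | "anthropic" | "google" | "mistral" | "groq" | "deepseek";

// Public API endpoints and auth header conventions per provider.
const REGISTRY: Record<ProviderId, { baseURL: string; auth: (key: string) => Record<string, string> }> = {
  openai:    { baseURL: "https://api.openai.com/v1",      auth: k => ({ Authorization: `Bearer ${k}` }) },
  anthropic: { baseURL: "https://api.anthropic.com/v1",   auth: k => ({ "x-api-key": k }) },
  google:    { baseURL: "https://generativelanguage.googleapis.com/v1beta", auth: k => ({ "x-goog-api-key": k }) },
  mistral:   { baseURL: "https://api.mistral.ai/v1",      auth: k => ({ Authorization: `Bearer ${k}` }) },
  groq:      { baseURL: "https://api.groq.com/openai/v1", auth: k => ({ Authorization: `Bearer ${k}` }) },
  deepseek:  { baseURL: "https://api.deepseek.com",       auth: k => ({ Authorization: `Bearer ${k}` }) },
};

// Builds the request headers for a user-supplied key (BYOA: the key never
// touches our backend storage, it is passed through per request).
export function headersFor(provider: ProviderId, apiKey: string): Record<string, string> {
  return { "content-type": "application/json", ...REGISTRY[provider].auth(apiKey) };
}
```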
pnpm install
cp .env.example .env.local # configure environment variables
pnpm exec prisma generate
pnpm exec prisma migrate deploy # apply Postgres schema to the configured DATABASE_URL
pnpm dev
Open http://localhost:3000.
- `DATABASE_URL` — Postgres connection string (Neon recommended). Format: `postgresql://user:pass@host/db?sslmode=require`
- `JWT_ACCESS_SECRET`, `JWT_REFRESH_SECRET` — session signing
- `PLATFORM_LLM_API_KEY` — Anthropic key for the Platform LLM (builder, greetings)
- OAuth: `GOOGLE_CLIENT_ID`, `GOOGLE_CLIENT_SECRET` (and `DISCORD_*` if used)
- Image uploads: `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, `S3_BUCKET`
- `CRON_SECRET` — protects the monthly summary cron endpoint
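Assembling the variables above, a minimal `.env.local` might look like this (every value is a placeholder):

```env
DATABASE_URL="postgresql://user:pass@host/db?sslmode=require"
JWT_ACCESS_SECRET="replace-with-long-random-string"
JWT_REFRESH_SECRET="replace-with-long-random-string"
PLATFORM_LLM_API_KEY="replace-with-anthropic-key"
GOOGLE_CLIENT_ID="replace-me"
GOOGLE_CLIENT_SECRET="replace-me"
AWS_ACCESS_KEY_ID="replace-me"
AWS_SECRET_ACCESS_KEY="replace-me"
S3_BUCKET="replace-me"
CRON_SECRET="replace-with-long-random-string"
```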
- Apple Silicon Mac (M1+) or a recent discrete GPU — Intel integrated GPUs (HD/Iris) are blocked because the Dawn/Metal shader compiler crashes on the `embed_tokens`/`Gather` node for transformer ONNX models regardless of dtype.
- Chrome / Edge 113+ with WebGPU enabled (default on supported hardware). Safari WebGPU is experimental.
- WebMCP exposition requires Chrome 146+ with the `experimental-web-platform-features` flag enabled. On unsupported browsers the registration is a silent no-op with zero functional impact.
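A gate like the GPU compatibility check above can be sketched as follows. It assumes Chrome's `GPUAdapterInfo` exposes a `vendor` string; the vendor heuristic is an illustration of the blocking rule, not the app's exact check.

```typescript
// Pure heuristic, kept separate so it is trivially testable.
export function isBlockedAdapter(info: { vendor?: string }): boolean {
  // Intel integrated GPUs (HD/Iris): the Dawn/Metal shader compiler crashes
  // on the embed_tokens/Gather node of transformer ONNX models.
  return /intel/i.test(info.vendor ?? "");
}

// Async wrapper over the WebGPU adapter request; `gpu` is injectable for tests.
export async function checkLocalLlmSupport(
  gpu: any = (globalThis as any).navigator?.gpu
): Promise<{ supported: boolean; reason?: string }> {
  if (!gpu) return { supported: false, reason: "WebGPU not available" };
  const adapter = await gpu.requestAdapter();
  if (!adapter) return { supported: false, reason: "no GPU adapter" };
  if (isBlockedAdapter(adapter.info ?? {})) {
    return { supported: false, reason: "Intel integrated GPU is blocked" };
  }
  return { supported: true };
}
```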
features/ # Bounded contexts (vertical slices)
shared/ # Shared domain (Result, errors, events) + shared infrastructure
character/ # domain, use-cases, infrastructure, presentation
chat/ # 1:1 chat, streaming, multi-session, compaction
provider/ # BYOA providers (OpenAI, Anthropic, …) + local-gemma
forum/ # AI-participating forum
group-chat/ # multi-character debates
notification/ # in-app notifications, email digests
travel/ # travel planner (flight/hotel/activity search + proposals)
user/ # auth, session, preferences
favorites/ # favorites
local-llm/ # WebGPU local LLM (Web Worker + ChatTransport)
domain/ # local-model rules, GPU compatibility check
infrastructure/ # gemma.worker.ts, ChatTransport, model cache
use-cases/ # useLocalChat, useLocalModel hooks
presentation/ # ModelDownloadDialog, LocalModelBadge
webmcp/ # navigator.modelContext tool exposition
domain/ # WebMCPTool types, tool.rules
use-cases/ # 10 tools (navigation/writing/reading), register-all
infrastructure/ # model-context-adapter, tool-bridge, api-client
presentation/ # WebMCPProvider (lazy-loaded), status badge
app/ # Next.js routes (thin adapters)
api/ # API routes (auth + use case + error mapping)
_actions/ # Server actions (thin wrappers)
Each feature is an autonomous bounded context. Dependency rule: features/{context}/domain/ imports nothing external. Presentation layer can depend on use-cases + infrastructure. Cross-feature calls go through public APIs or events.
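The dependency rule and the shared `Result` type can be sketched together. The `Result` helpers mirror the shared domain mentioned in the tree above, while `validateCharacterName` is a hypothetical domain rule, named for illustration only: it imports nothing external, exactly as the rule requires.

```typescript
// shared/ — minimal Result type (sketch of the shared domain primitive).
export type Result<T, E = Error> =
  | { ok: true; value: T }
  | { ok: false; error: E };

export const ok = <T>(value: T): Result<T, never> => ({ ok: true, value });
export const err = <E>(error: E): Result<never, E> => ({ ok: false, error });

// features/character/domain — a pure rule with zero external imports.
// (Hypothetical example; not the real validation logic.)
export function validateCharacterName(name: string): Result<string, string> {
  const trimmed = name.trim();
  if (trimmed.length < 2 || trimmed.length > 64) {
    return err("name must be 2-64 characters");
  }
  return ok(trimmed);
}
```

Use-cases then branch on `result.ok` instead of throwing, which keeps error mapping at the route/adapter edge.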
- Provider: Neon Postgres (serverless, branches per environment)
- Schema: `prisma/schema.prisma` (24 models, FK constraints, jsonb for opaque payloads)
- Ids: `cuid()` for new rows. Rows migrated from MongoDB keep their original ObjectId hex as primary key
- Stats: a materialized view `character_stats` aggregates `chat_messages` + `favorites` + `character_ratings` for fast listing. Currently unused (we fall back to cached `groupBy` via `unstable_cache`), but ready when we wire the refresh cron
- Full-text search: 3 GIN tsvector indexes on `chat_messages.content`, `forum_topics(title || description)`, `forum_posts.content`
- Migration history: `prisma/migrations/` tracks every schema change. The initial Postgres schema lives in `20260409082444_init_postgres/migration.sql`
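Because migrated rows keep their 24-character ObjectId hex while new rows use `cuid()` (25 characters, starting with "c"), the two id generations can be told apart by shape. A hypothetical helper, not part of the codebase:

```typescript
// True for primary keys carried over from MongoDB (24 lowercase hex chars);
// false for cuid()-generated ids, which are 25 chars and start with "c".
export function isLegacyMongoId(id: string): boolean {
  return /^[0-9a-f]{24}$/.test(id);
}
```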
One-shot migration script (already executed in April 2026): scripts/migrate-mongo-to-postgres.ts. Kept in-repo for reference and in case a delta sync is ever needed; uses mongoose as a devDependency.
pnpm exec vitest run # ~350 tests (unit + integration)
pnpm exec tsc --noEmit # type check
pnpm exec next build # production build

Hosted on Vercel. Push to main triggers automatic deployment.
- App: Vercel (serverless + edge)
- Database: Neon Postgres (eu-west-2)
- Images: AWS S3
- Secrets: Vercel environment variables (`DATABASE_URL`, `JWT_*`, `PLATFORM_LLM_API_KEY`, …)