Talk to your database in natural language. Ask questions, get SQL, tables, and charts back — like chatting with a data analyst.
Available as a web app (this repo) and as an npm package (@tiveor/chatdb) to embed in your own projects.
Add ChatDB to your existing project. Install from npm, set two env vars, and you're chatting with your database:
```sh
# In your project
npm install @tiveor/chatdb pg   # or mysql2, or better-sqlite3
```

```sh
# .env
DATABASE_URL=postgresql://user:pass@localhost:5432/mydb
OPENAI_API_KEY=sk-...
```

```ts
// app.ts
import { ChatDB } from '@tiveor/chatdb'

const db = new ChatDB({
  database: process.env.DATABASE_URL,
  llm: { apiKey: process.env.OPENAI_API_KEY }
})

// One-off question
const result = await db.query('top 10 customers by revenue')
console.log(result.sql)         // SELECT name, SUM(amount) AS total ...
console.log(result.explanation) // "Top 10 customers ranked by total revenue"
console.log(result.rows)        // [{ name: "Acme", total: 50000 }, ...]
console.log(result.chartType)   // "bar"

// Conversational — keeps context between questions
await db.ask('how many orders last month?')
await db.ask('and this month?')   // understands "this month" from context
await db.ask('show me the trend') // references both previous answers

await db.close()
```

Works with PostgreSQL, MySQL, and SQLite. Supports OpenAI, Anthropic, and any OpenAI-compatible API (Ollama, LM Studio, vLLM). Auto-detects everything from your connection string and API key.
Also available as a CLI:

```sh
npx @tiveor/chatdb -d postgresql://localhost/mydb -k sk-...

chatdb> how many users signed up this week?
```

Full package docs: @tiveor/chatdb on npm
```
You: "How many orders were placed last month?"
        |
        v
[Hono API Server]
        |
        v
[@tiveor/chatdb SDK]
    Schema (cached) + question → LLM → SQL + chart type
    SQL Guard ← blocks anything that isn't SELECT
    Database → executes query
        |
        v
Table + Chart + Explanation
```
- You type a question in plain language
- The server delegates to `@tiveor/chatdb`, which handles schema introspection, LLM prompting, SQL validation, and query execution
- The LLM generates a `SELECT` query and picks a chart type
- The SQL guard validates that the query is read-only (blocks `INSERT`, `DELETE`, `DROP`, etc.)
- The query runs against your database with a 10-second timeout and a 1000-row limit
- You see the explanation, an auto-generated chart, and a data table
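The steps above can be condensed into a short sketch. Every helper here (`getCachedSchema`, `llmGenerate`, `assertReadOnly`, `runQuery`) is a stand-in written for illustration, not an internal of `@tiveor/chatdb`:

```typescript
// Condensed, illustrative view of the request pipeline.
// All helper implementations below are stand-ins, not package internals.
type Row = Record<string, unknown>

const getCachedSchema = async () => 'users(id, name, created_at)'  // stand-in for schema introspection
const llmGenerate = async (_schema: string, _q: string) =>
  ({ sql: 'SELECT COUNT(*) AS n FROM users LIMIT 1000', chartType: 'number' })  // stand-in for the LLM call
const assertReadOnly = (sql: string) => {
  if (!/^\s*(SELECT|WITH)\b/i.test(sql)) throw new Error('read-only guard')
}
const runQuery = async (_sql: string): Promise<Row[]> => [{ n: 42 }]  // stand-in for the DB adapter

async function handleQuestion(question: string) {
  const schema = await getCachedSchema()                          // 1. schema (cached)
  const { sql, chartType } = await llmGenerate(schema, question)  // 2. LLM → SQL + chart type
  assertReadOnly(sql)                                             // 3. SQL guard
  const rows = await runQuery(sql)                                // 4. execute (10s timeout, 1000-row cap in the real thing)
  return { sql, chartType, rows }                                 // 5. rendered as table + chart + explanation
}
```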
- Natural language queries — ask in any language, get SQL back
- Auto-visualization — bar, line, pie charts, big number stats, or tables based on data shape
- Schema-aware — switch between database schemas on the fly
- Read-only safety — SQL guard blocks all mutations, enforces LIMIT, 10s timeout
- Conversational context — follow-up questions work ("and what about February?")
- Context management — auto-trims history and schema to fit model context window
- Export — download results as CSV or JSON
- Debug panel — live logs showing model info, token usage, generated SQL, timing
- Works with any OpenAI-compatible API — LM Studio, Ollama, vLLM, text-generation-webui, etc.
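The CSV export can be pictured with a small helper. `rowsToCsv` below is a hypothetical illustration of serializing `result.rows`, not an API exported by `@tiveor/chatdb`:

```typescript
// Sketch of CSV export: serialize query result rows to CSV text.
// `rowsToCsv` and `csvEscape` are hypothetical helpers for illustration.
function csvEscape(value: unknown): string {
  const s = value == null ? '' : String(value)
  // Quote fields containing commas, quotes, or newlines
  return /[",\n]/.test(s) ? `"${s.replace(/"/g, '""')}"` : s
}

function rowsToCsv(columns: string[], rows: Record<string, unknown>[]): string {
  const header = columns.map(csvEscape).join(',')
  const lines = rows.map(row => columns.map(col => csvEscape(row[col])).join(','))
  return [header, ...lines].join('\n')
}

// Example with the shape returned by db.query():
const csv = rowsToCsv(['name', 'total'], [
  { name: 'Acme', total: 50000 },
  { name: 'Widgets, Inc.', total: 42000 },
])
console.log(csv)
// name,total
// Acme,50000
// "Widgets, Inc.",42000
```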
| Layer | Tech |
|---|---|
| Frontend | React 19, Vite 7, Tailwind CSS 4, Recharts |
| Backend | Hono (Node adapter), powered by @tiveor/chatdb |
| Database | PostgreSQL, MySQL, SQLite (via pg, mysql2, better-sqlite3) |
| AI | OpenAI, Anthropic, or any OpenAI-compatible API (Ollama, LM Studio, vLLM) |
| npm Package | @tiveor/chatdb — zero deps, dual ESM+CJS |
| Monorepo | pnpm workspaces |
```
chatdb/
├── package.json             # Root workspace
├── pnpm-workspace.yaml
├── .env                     # Your config (not committed)
├── .env.example
│
├── server/                  # Thin HTTP wrapper around @tiveor/chatdb
│   └── src/
│       ├── index.ts         # Hono server, CORS, ChatDB instance
│       ├── routes/
│       │   ├── chat.ts      # POST /api/chat → chatdb.query()
│       │   └── schema.ts    # GET /api/schema/* → chatdb.listSchemas/listTables/getSchema
│       └── types.ts
│
├── client/
│   └── src/
│       ├── App.tsx                  # Main layout, schema selector
│       ├── components/
│       │   ├── ChatWindow.tsx       # Message container
│       │   ├── MessageBubble.tsx    # User/AI message with data display
│       │   ├── ChatInput.tsx        # Input with send
│       │   ├── DataTable.tsx        # Query results table
│       │   ├── ChartRenderer.tsx    # Auto bar/line/pie/number chart
│       │   ├── ExportButton.tsx     # CSV/JSON export
│       │   └── LogPanel.tsx         # Debug side panel
│       ├── hooks/
│       │   └── useChat.ts           # Chat state management
│       ├── services/
│       │   └── api.ts               # API client
│       └── types.ts
│
├── packages/
│   └── chatdb-sdk/          # @tiveor/chatdb npm package
│       ├── package.json
│       ├── tsup.config.ts   # Dual ESM + CJS build
│       └── src/
│           ├── chatdb.ts    # Main orchestrator
│           ├── llm/         # OpenAI, Anthropic, OpenAI-compatible providers
│           ├── db/          # PostgreSQL, MySQL, SQLite adapters
│           ├── prompt/      # Dialect-aware SQL prompt builder
│           ├── guard/       # SQL validation + LIMIT enforcement
│           ├── cli/         # Interactive REPL + single query mode
│           └── __tests__/   # 256 tests, 98.56% coverage
```
- Node.js >= 18
- pnpm >= 8 (`npm install -g pnpm`)
- PostgreSQL database — Supabase, Neon, local, any Postgres
- Local LLM server with an OpenAI-compatible API — see LLM Setup
```sh
git clone https://github.com/tiveor/chatdb.git
cd chatdb
pnpm install
cp .env.example .env
```

Edit `.env`:

```sh
# Your Postgres connection string
DATABASE_URL=postgresql://postgres:password@db.your-project.supabase.co:5432/postgres

# Your LLM server endpoint
OLLAMA_URL=http://localhost:1234
```

```sh
pnpm dev
```

This starts both:

- Client at `http://localhost:5173`
- Server at `http://localhost:3001`

Open http://localhost:5173 and start chatting with your database.
ChatDB works with any server that exposes an OpenAI-compatible `/v1/chat/completions` endpoint. Here are some options:

- Download LM Studio
- Download a model (recommended: `qwen2.5-coder-7b` or `deepseek-coder-v2`)
- Go to the Local Server tab and start the server
- Set `OLLAMA_URL=http://localhost:1234` in your `.env`

Tip: Load the model with a context length of at least 4096 for best results. A smaller context means more aggressive schema/history trimming.
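The trimming mentioned in the tip can be sketched roughly as follows. `trimToBudget` and its character-based budget are assumptions made for illustration, not the package's actual algorithm:

```typescript
// Illustrative sketch: drop the oldest chat turns until the prompt
// (schema text + history) fits a rough character budget derived from
// the model's context length. Hypothetical helper, not the real API.
interface ChatTurn { role: 'user' | 'assistant'; content: string }

function trimToBudget(schemaText: string, history: ChatTurn[], maxChars: number): ChatTurn[] {
  const trimmed = [...history]
  const size = () =>
    schemaText.length + trimmed.reduce((n, t) => n + t.content.length, 0)
  // Drop oldest turns first; always keep the most recent question
  while (trimmed.length > 1 && size() > maxChars) {
    trimmed.shift()
  }
  return trimmed
}

const history: ChatTurn[] = [
  { role: 'user', content: 'how many orders last month?' },
  { role: 'assistant', content: 'There were 320 orders.' },
  { role: 'user', content: 'and this month?' },
]
// With a tiny budget, only the latest question survives
console.log(trimToBudget('CREATE TABLE orders (...)', history, 60).length) // 1
```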
```sh
# Install Ollama
curl -fsSL https://ollama.com/install.sh | sh

# Pull a model
ollama pull llama3

# Ollama runs on port 11434 by default
```

Set `OLLAMA_URL=http://localhost:11434` in your `.env`.
```sh
vllm serve Qwen/Qwen2.5-Coder-7B-Instruct --port 8000
```

Set `OLLAMA_URL=http://localhost:8000` in your `.env`.
If your LLM runs on a different machine (e.g., a GPU server):

```sh
OLLAMA_URL=http://192.168.0.21:1234
```

`POST /api/chat`: send a natural language message, get SQL results back.
Request:

```json
{
  "message": "How many users signed up this month?",
  "schemaName": "public",
  "history": [
    { "role": "user", "content": "previous question" },
    { "role": "assistant", "content": "previous answer", "data": { "sql": "..." } }
  ]
}
```

Response:

```json
{
  "text": "There were 142 new signups this month.",
  "data": {
    "sql": "SELECT COUNT(*) AS signups FROM users WHERE created_at >= date_trunc('month', CURRENT_DATE) LIMIT 1000",
    "explanation": "There were 142 new signups this month.",
    "chartType": "number",
    "columns": ["signups"],
    "rows": [{ "signups": 142 }],
    "rowCount": 1
  },
  "logs": [
    "Schema: public",
    "Schema text: 1234 chars",
    "Model: qwen2.5-coder-7b-instruct",
    "Duration: 2340ms"
  ]
}
```

Returns all available database schemas.
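A client can call the chat endpoint with plain `fetch`. This sketch assumes the server from this repo running on its default port 3001; `buildChatRequest` and `ask` are hypothetical helpers, not part of the project:

```typescript
// Sketch of a minimal client for POST /api/chat. The endpoint path and
// port come from this repo's server; the helpers are illustrative only.
interface HistoryEntry { role: 'user' | 'assistant'; content: string }

function buildChatRequest(message: string, schemaName: string, history: HistoryEntry[] = []) {
  return { message, schemaName, history }
}

async function ask(message: string): Promise<void> {
  const res = await fetch('http://localhost:3001/api/chat', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(buildChatRequest(message, 'public')),
  })
  const { text, data } = await res.json()
  console.log(text)     // e.g. "There were 142 new signups this month."
  console.log(data.sql) // the generated SELECT
}
```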
Returns table names for a given schema.
Returns the full schema text (used internally for LLM context).
Clears the schema cache (5 min TTL by default).
Health check endpoint.
ChatDB is designed for read-only access to your database:

- SQL Guard validates every query before execution:
  - Only `SELECT` and `WITH` (CTEs) are allowed
  - Blocks: `INSERT`, `UPDATE`, `DELETE`, `DROP`, `ALTER`, `TRUNCATE`, `CREATE`, `GRANT`, `REVOKE`, `EXEC`, `COPY`
  - Blocks multiple statements (`;`)
- LIMIT 1000 is enforced on every query (added if missing)
- 10-second timeout per query
- Connection pooling with max 5 connections
- CORS restricted to the frontend origin
- Database credentials stay server-side only
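As an illustration of the checks above, a simplified guard might look like this. It is a sketch under the rules just listed, not the actual guard shipped in `@tiveor/chatdb`:

```typescript
// Simplified sketch of a read-only SQL guard: allow only single-statement
// SELECT/WITH queries, block mutating keywords, and append LIMIT if missing.
// Illustrative only; the real guard is more thorough.
const BLOCKED = /\b(INSERT|UPDATE|DELETE|DROP|ALTER|TRUNCATE|CREATE|GRANT|REVOKE|EXEC|COPY)\b/i

function guardSql(sql: string, maxRows = 1000): string {
  const trimmed = sql.trim().replace(/;\s*$/, '') // tolerate one trailing semicolon
  if (trimmed.includes(';')) throw new Error('Multiple statements are not allowed')
  if (!/^\s*(SELECT|WITH)\b/i.test(trimmed)) throw new Error('Only SELECT queries are allowed')
  if (BLOCKED.test(trimmed)) throw new Error('Mutating statement blocked')
  return /\bLIMIT\s+\d+\b/i.test(trimmed) ? trimmed : `${trimmed} LIMIT ${maxRows}`
}

console.log(guardSql('SELECT * FROM users'))
// SELECT * FROM users LIMIT 1000
```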
Recommendation: Use a read-only database user/role for extra safety:

```sql
CREATE ROLE chatdb_reader WITH LOGIN PASSWORD 'your-password';
GRANT USAGE ON SCHEMA public TO chatdb_reader;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO chatdb_reader;
```
| Variable | Required | Default | Description |
|---|---|---|---|
| `DATABASE_URL` | Yes | — | PostgreSQL connection string |
| `OLLAMA_URL` | No | `http://localhost:11434` | OpenAI-compatible API endpoint |
| `OLLAMA_MODEL` | No | Auto-detected | Force a specific model name |
| `PORT` | No | `3001` | Server port |
```sh
# Run both client and server in dev mode
pnpm dev

# Run only the server
pnpm dev:server

# Run only the client
pnpm dev:client

# Build the client for production
pnpm --filter client build
```

Your `DATABASE_URL` is wrong or not set. Make sure the `.env` file is in the project root (not inside `server/` or `client/`).
Your LLM server doesn't support structured output. Make sure you're using a recent version of LM Studio, Ollama, or vLLM.
Your model is loaded with a very small context window. In LM Studio, increase the context length to at least 4096 tokens in the model settings. ChatDB auto-detects the context length and trims schema/history to fit.
Make sure you select the correct schema from the dropdown in the header. The schema name is passed both to the AI prompt and to `SET search_path` before query execution.
Charts only render when the AI returns a `chartType` other than `table` and the data has at least one numeric column. Ask questions that produce numeric results (counts, sums, averages).
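In ChatDB the LLM chooses the chart type, so the heuristic below is purely hypothetical; it only illustrates how "data shape" relates to chart choice. `chartTypeFor` and its thresholds are assumptions, not the project's logic:

```typescript
// Hypothetical shape-based chart picker, for illustration only.
type ChartType = 'number' | 'bar' | 'line' | 'pie' | 'table'

function chartTypeFor(columns: string[], rows: Record<string, unknown>[]): ChartType {
  const numeric = columns.filter(c => rows.length > 0 && typeof rows[0][c] === 'number')
  // A single numeric value reads best as a big number stat
  if (rows.length === 1 && columns.length === 1 && numeric.length === 1) return 'number'
  if (numeric.length === 0) return 'table'
  // One label column + one numeric column: bar for few rows, line for many
  if (columns.length === 2 && numeric.length === 1) {
    return rows.length > 20 ? 'line' : 'bar'
  }
  return 'table'
}

console.log(chartTypeFor(['signups'], [{ signups: 142 }])) // number
```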
Contributions are welcome! Please open an issue first to discuss what you'd like to change.
- Fork the repo
- Create your branch (`git checkout -b feature/my-feature`)
- Commit your changes
- Push to the branch (`git push origin feature/my-feature`)
- Open a Pull Request
MIT - Alvaro Orellana
