Self-hosted knowledge retrieval service with an MCP-first API. Indexes your sources (code, docs, infra configs, tickets, wikis) and exposes them as retrieval tools to any MCP-compatible AI client — Claude Code, Cursor, Gemini, custom agents, or AI pipelines.
Retrieval-only: Omniscience returns chunks with citations, and the calling LLM synthesizes the answer. No opinionated chat, no embedded LLM, no vendor lock-in.
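In practice, an MCP client invokes a retrieval tool and receives chunks back. An illustrative `tools/call` payload for the search tool; the tool name and argument names (`query`, `limit`) are assumptions here, not the final contract:

```json
{
  "name": "omniscience.search",
  "arguments": {
    "query": "where is token rotation documented?",
    "limit": 5
  }
}
```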
Pre-v0.1 — scaffolding. See docs/roadmap.md.
Get Omniscience running and connected to your AI client in three steps.
### Step 1 — Start the stack

```bash
cat > .env << 'EOF'
POSTGRES_PASSWORD=change-me-strong-password
OMNISCIENCE_SECRET_KEY=change-me-32-char-secret-key-here
EOF

docker compose up -d
```

Wait for all services to become healthy, then verify:

```bash
curl http://localhost:8000/health
# {"status":"ok","version":"0.1.0"}
```

### Step 2 — Create an API token
```bash
docker compose exec app omniscience tokens create \
  --name my-client \
  --scopes search,sources:read
# Created token: sk_live_... (save this — shown once)
```

### Step 3 — Connect your AI client
Add this to your client's MCP config:
```json
{
  "mcpServers": {
    "omniscience": {
      "command": "omniscience",
      "args": ["mcp", "serve", "--transport", "stdio"],
      "env": {
        "OMNISCIENCE_URL": "http://localhost:8000",
        "OMNISCIENCE_TOKEN": "sk_live_..."
      }
    }
  }
}
```

Then ask your AI assistant a question — it will call `omniscience.search` and return grounded answers with citations.
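Under the retrieval-only contract, the calling side turns returned chunks into numbered, citable context for its LLM. A minimal sketch, assuming a response of `chunks` with `text`, `source`, and `score` fields (an illustrative shape, not the final schema):

```python
# Hypothetical search response (assumed shape, not Omniscience's final schema).
response = {
    "chunks": [
        {"text": "Tokens are created with `omniscience tokens create`.",
         "source": "docs/security.md#tokens", "score": 0.92},
        {"text": "Scopes restrict which tools a token may call.",
         "source": "docs/security.md#scopes", "score": 0.87},
    ]
}

def format_citations(resp: dict) -> str:
    """Render chunks as a numbered context block the calling LLM can cite."""
    return "\n".join(
        f"[{i}] {c['text']} ({c['source']})"
        for i, c in enumerate(resp["chunks"], start=1)
    )

print(format_citations(response))
```

The numbering gives the LLM stable markers to cite, so the final answer can point back to exact sources.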
| Client | Guide |
|---|---|
| Claude Code | docs/integrations/claude-code.md |
| Cursor | docs/integrations/cursor.md |
| Gemini CLI / SDK | docs/integrations/gemini.md |
| multiqlti pipelines | docs/integrations/multiqlti.md |
| Python (direct MCP client) | docs/integrations/python-client.md |
| LangGraph agents | docs/integrations/langgraph.md |
| CrewAI agents | docs/integrations/crewai.md |
| PydanticAI agents | docs/integrations/pydantic-ai.md |
- Vision — what Omniscience is and isn't
- Architecture — system overview
- Roadmap — milestones M0 → M6
- MCP API — tool contracts (primary interface)
- REST API — secondary interface
- Connector framework — how to add a source
- Database schema
- Freshness & lineage — trust model for AI clients
- Retrieval strategy (ADR 0004) — hybrid → structural → GraphRAG-if-needed
- Architecture decisions
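The hybrid stage named in ADR 0004 implies fusing lexical and vector rankings into one list. A generic reciprocal-rank-fusion sketch; this is the standard technique, not necessarily the exact fusion Omniscience implements:

```python
def rrf_fuse(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Reciprocal rank fusion: each ranking votes 1/(k + rank) per document."""
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    # Highest combined score first.
    return sorted(scores, key=scores.get, reverse=True)

# Lexical (BM25-style) and vector rankings of chunk ids (illustrative).
lexical = ["auth.md#1", "tokens.md#2", "setup.md#3"]
vector = ["tokens.md#2", "mcp.md#4", "auth.md#1"]
print(rrf_fuse([lexical, vector]))
```

Documents ranked well by both retrievers rise to the top, while either retriever alone can still surface a result the other missed.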
Apache 2.0. See LICENSE.