Open-source multi-agent intelligence platform — Web UI + CLI for research, project management, and software automation.
Kendr is a Python runtime that combines specialized AI agents, a web-based chat and project UI, multi-source research, durable memory, and structured run artifacts. Use it from the browser or the terminal.
Quickstart · CLI Reference · Configuration · Integrations · Examples
- Python 3.10 or newer — python.org/downloads
- Git — git-scm.com
- An OpenAI API key (or Anthropic / Google / local Ollama — see LLM Providers below)
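Before installing, you can confirm the Python prerequisite with a short check (a minimal sketch; run it with whichever interpreter `python3` resolves to):

```python
import sys

def meets_minimum(required=(3, 10)):
    """True if the running interpreter satisfies the minimum Python version."""
    return sys.version_info[:2] >= required

if __name__ == "__main__":
    print("Python OK" if meets_minimum() else "Python 3.10+ required")
```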
```bash
git clone https://github.com/kendr-ai/kendr.git
cd kendr
./scripts/install.sh
```

After it finishes, reload your shell and run `kendr --help` to confirm the install:
```bash
source ~/.zshrc    # zsh (macOS default)
# or
source ~/.bashrc   # bash (Linux default)
kendr --help
```

Open PowerShell (not CMD) and run:
```powershell
git clone https://github.com/kendr-ai/kendr.git
cd kendr
powershell -ExecutionPolicy Bypass -File .\scripts\install.ps1
```

Open a new terminal after the script finishes, then verify:
```powershell
kendr --help
```

If you prefer to control the environment yourself:
```bash
git clone https://github.com/kendr-ai/kendr.git
cd kendr

# Create and activate a virtual environment
python3 -m venv .venv
source .venv/bin/activate    # macOS/Linux
# .venv\Scripts\activate     # Windows PowerShell

# Install kendr
pip install -e .

# Verify
kendr --help
```

If you just want the package without cloning the repo:
```bash
pip install kendr-runtime
kendr --help
```

Note: running `kendr ui` from a pip install requires the repo to be present for the HTML assets. Clone + install is recommended for the full experience.
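If `kendr --help` fails after a pip install, the usual cause is that the entry point is not on your PATH. A generic way to check (illustrative, not part of Kendr):

```python
import shutil

def cli_on_path(name: str) -> bool:
    """True if an executable with this name is discoverable on PATH."""
    return shutil.which(name) is not None

# cli_on_path("kendr") returning False usually means the environment that
# pip installed into is not activated in this shell.
```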
Kendr ships with OpenAI by default. Install optional packages to add more providers:
| Provider | Install command | Models |
|---|---|---|
| OpenAI (default) | included | GPT-4o, GPT-4o-mini, o1, o3 |
| Anthropic Claude | `pip install 'kendr-runtime[anthropic]'` | claude-3-5-sonnet, claude-opus |
| Google Gemini | `pip install 'kendr-runtime[google]'` | gemini-2.0-flash, gemini-1.5-pro |
| Local Ollama | `pip install 'kendr-runtime[ollama]'` | llama3, mistral, phi3, … |
| All of the above | `pip install 'kendr-runtime[full]'` | everything |
Or use the install script with `--full`:

```bash
./scripts/install.sh --full   # macOS/Linux
.\scripts\install.ps1 -Full   # Windows
```

After installing, set two required values:
```bash
# Your LLM API key
kendr setup set openai OPENAI_API_KEY sk-...

# Where kendr writes output files
kendr setup set core_runtime KENDR_WORKING_DIR ~/kendr-work

# Check everything is configured
kendr setup status
```

Or copy `.env.example` to `.env` and fill in the values manually.
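Whichever route you take, the runtime needs both values at startup. The fail-fast check involved can be sketched like this (the variable names come from this section; the function itself is illustrative, not Kendr's code):

```python
import os

REQUIRED = ("OPENAI_API_KEY", "KENDR_WORKING_DIR")

def missing_settings(env=os.environ):
    """Return the names of required settings that are absent or empty."""
    return [name for name in REQUIRED if not env.get(name)]

# missing_settings({"OPENAI_API_KEY": "sk-..."}) reports KENDR_WORKING_DIR
# as still unset.
```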
```bash
kendr ui
# or the shorter alias:
kendr web
```

Opens the web interface at http://localhost:5000 with:
- Chat — multi-agent chat with streaming output and plan cards
- Projects — open any local code project, chat with an AI that understands your codebase, manage files, run terminal commands, view git status
- Setup & Config — configure API keys and LLM providers in the browser
- Run History — browse every past run and its output artifacts
- LLM Models — view and switch between available models
```bash
# Research
kendr run "Analyse the AI chip market: key players, supply chain, investment outlook."
kendr research --sources arxiv,web --pages 15 "Advances in LLM reasoning 2024"

# Software project generation
kendr generate --stack fastapi_postgres "Task management API with auth and tests."
kendr generate --stack nextjs_prisma_postgres "Blog platform with markdown and auth."

# SuperRAG knowledge sessions
kendr run --superrag-mode build --superrag-new-session --superrag-session-title "docs" \
  --superrag-path ./docs "Index my documentation."
kendr run --superrag-mode chat --superrag-session docs "What are the install requirements?"

# Shell command execution (with approval gate)
kendr run --os-command "df -h" --os-shell bash --privileged-approved "Check disk usage."

# Gateway (required for communication integrations and REST API)
kendr gateway start
kendr gateway status
kendr gateway stop
```

| # | Capability | Entry point | Status |
|---|---|---|---|
| 1 | Web UI — chat, projects, config, history | `kendr ui` | Stable |
| 2 | Deep research + document generation | `kendr run` / `kendr research` | Stable |
| 3 | Multi-agent project generation | `kendr generate` | Beta |
| 4 | SuperRAG knowledge engine | `kendr run --superrag-mode` | Stable |
| 5 | Local command execution | `kendr run --os-command` | Beta |
| 6 | Unified communication suite | `kendr run --communication-authorized` | Beta |

| Status | Areas |
|---|---|
| Stable | Web UI, core CLI, setup-aware routing, SuperRAG sessions, research synthesis, local-drive intelligence |
| Beta | Project generation, long-document pipeline, gateway HTTP surface, communication suite, AWS workflows |
| Experimental | Dynamic agent factory, generated agents, voice/audio workflows |
```bash
kendr ui
# → http://localhost:5000
```

The web interface gives you a full project workspace:
- Open any local Git repository and chat with the AI about your code
- Auto-generates a `kendr.md` project context file if one doesn't exist
- Model selector and live context-window usage bar in the chat input
- File browser, integrated terminal, and git status panel
- Recent chat history per project
Multi-source research that synthesises web, academic, patent, and local evidence into structured reports.
```bash
kendr run --current-folder \
  "Create an intelligence brief on Stripe: business model, competitors, strategy, risks."

kendr run --long-document --long-document-pages 50 \
  "Produce an exhaustive global gold market investment report."
```

Blueprint → scaffold → build → test → verify → zip export, fully automated.
```bash
kendr generate --stack fastapi_react_postgres \
  "SaaS starter with billing, admin dashboard, and CI/CD."
```

Available stacks: `fastapi_postgres`, `fastapi_react_postgres`, `nextjs_prisma_postgres`, `express_prisma_postgres`, `mern_microservices_mongodb`, `pern_postgres`, `nextjs_static_site`, `django_react_postgres`, `custom_freeform`.
Zero-config vector search over local files, URLs, databases, and OneDrive content.
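Conceptually, "zero-config vector search" boils down to embedding chunks and ranking them by similarity at query time. A toy sketch of that idea (bag-of-words vectors stand in for real embeddings; this is not SuperRAG's implementation):

```python
from collections import Counter
from math import sqrt

def embed(text: str) -> Counter:
    """Toy 'embedding': a bag-of-words term-frequency vector."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def search(query: str, chunks: list[str]) -> str:
    """Return the chunk most similar to the query."""
    q = embed(query)
    return max(chunks, key=lambda c: cosine(q, embed(c)))
```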
```bash
kendr run --superrag-mode build --superrag-new-session \
  --superrag-session-title "product_docs" --superrag-path ./docs \
  "Index our documentation."

kendr run --superrag-mode chat --superrag-session product_docs \
  --superrag-chat "What are the installation requirements?"
```

- Quickstart — install, configure, first run
- CLI Reference — every subcommand and flag
- Configuration — every environment variable
- Integrations — vector backends, communication providers, security tools
- Agents — workflow families and the full agent inventory
- Architecture — runtime flow, discovery, persistence
- Security — safety boundaries and privileged controls
- Examples — copy-paste CLI examples for every workflow
- Troubleshooting — common first-run issues
Contribution baseline:
- keep changes grounded in real code and verified workflows
- preserve setup-aware gating and runtime behaviour unless fixing a bug
- update tests and docs with user-facing changes
- run `python scripts/verify.py` before opening a PR
- Join the Kendr Discord community for live chat, collaboration, Q&A, and announcements: https://discord.gg/GgU8UEdn
- Track ongoing work, open issues/feedback, and CI results on the GitHub project view
- The badges above pull live GitHub metrics (stars, forks, watchers, issues, workflow status, and more) so you can see how the project is performing in real time.
Names and avatars shown above are generated automatically from GitHub contributions, so the list reflects the latest community participation without manual updates.
- multi-agent orchestration runtime in `kendr/runtime.py`
- web UI server (chat + project workspace) in `kendr/ui_server.py`
- CLI entrypoint in `kendr/cli.py`
- dynamic agent registry and discovery in `kendr/discovery.py`
- multi-provider LLM routing in `kendr/llm_router.py`
- project context management in `kendr/project_context.py`
- rich terminal output in `kendr/cli_output.py`
- setup and integration catalog in `kendr/setup/`
- durable SQLite persistence in `kendr/persistence/`
- multi-source research infrastructure in `tasks/research_infra.py`
- optional HTTP gateway in `kendr/gateway_server.py`
- MCP server endpoints in `mcp_servers/`