DeerMes is a general-purpose AI agent project that combines a DeerFlow-inspired execution layer with a Hermes-inspired learning layer.
- Execution layer: coordinator, planner, tool-aware runtime, final synthesis.
- Learning layer: context files, persistent memory, reflection, operator profile.
- Runtime target: local models via Ollama, Anthropic via the native Messages API, or major OpenAI-compatible endpoints.
- Interaction layer: one-shot task runs and a terminal chat UI.
- Python-first backend under `src/deermes`
- Single-agent and DeerFlow-style execution modes
- Context loading for `AGENTS.md`, `SOUL.md`, and `.cursorrules`
- JSONL memory store plus chat session transcripts
- Tool registry with shell and filesystem tools
- Provider abstraction with `echo`, `ollama`, `anthropic`, and major OpenAI-compatible providers
- `curses` terminal chat UI with persistent sessions
- Config-driven permission profiles with sandbox roots and approval gates
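The JSONL memory store mentioned above can be sketched in a few lines. This is an illustrative minimal version, not DeerMes's actual implementation; the `remember`/`recall` helper names are hypothetical:

```python
# Illustrative sketch of a JSONL memory store (hypothetical helpers, not
# DeerMes's actual code): one JSON object per line, appended over time.
import json
from pathlib import Path

def remember(path: Path, entry: dict) -> None:
    """Append one memory entry as a single JSON line."""
    with path.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

def recall(path: Path) -> list[dict]:
    """Load all entries; a missing store is just an empty memory."""
    if not path.exists():
        return []
    return [
        json.loads(line)
        for line in path.read_text(encoding="utf-8").splitlines()
        if line.strip()
    ]
```

The append-only layout is what makes JSONL a good fit here: new memories never rewrite old ones, and a partial last line is the only possible corruption.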
Initialize DeerMes once with your preferred workspace, provider, model, and permission profile:

```bash
deermes init
```

Then you can inspect the active defaults:

```bash
deermes doctor
```

Run a one-shot task with the saved defaults:

```bash
deermes run "Inspect this repository and summarize the next engineering actions."
```

Start the terminal chat UI:

```bash
deermes tui
```

You can still override anything at launch time:

```bash
deermes tui --project-root ~/code/deermes --mode deerflow --provider ollama --model gemma4:31b-it-bf16 --base-url http://127.0.0.1:11435
```

Useful commands inside the TUI:
- `/help`
- `/quit`
- `/mode single-agent|deerflow`
- `/provider PROVIDER_NAME`
- `/model MODEL_NAME`
- `/base-url URL`
- `/profile PROFILE_NAME`
- `/permissions`
- `/approve`
- `/deny`
- `/session SESSION_NAME`
- `/history N`
- `/raw`
- `/run`
- `/artifacts`
Session transcripts are stored under `.deermes/sessions/`.
DeerMes also ships a repo-local launcher at `bin/deermes`. If `~/.local/bin` is on your `PATH`, you can symlink it there and run `deermes tui` directly.
DeerMes now uses a user-level control config, separate from per-project runtime and permission files.
Common commands:
- `deermes init`: create or update the user control config
- `deermes doctor`: show the active workspace, provider profile, model, and permission defaults
- `deermes config show`: show the current control config summary
- `deermes config profiles`: list provider profiles
- `deermes config set provider-profile NAME`: switch the active provider profile
- `deermes config set project-root PATH`: change the default workspace
- `deermes config set permission-profile PROFILE`: change the default permission profile
- `deermes models`: list models for the selected provider
Supported provider choices in `deermes init` are:

- `ollama`
- `anthropic`
- `openai-api`
- `openrouter`
- `gemini`
- `groq`
- `together`
- `fireworks`
- `deepseek`
- `xai`
- `perplexity`
- `lmstudio`
- `custom-openai-compatible`
- `echo`
OpenAI OAuth is not implemented yet; DeerMes currently supports API-key-based providers plus local gateways. For other OpenAI-compatible servers such as LiteLLM or vLLM, use `custom-openai-compatible`.
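As a sketch of what "OpenAI-compatible" means in practice: any gateway that accepts the standard chat-completions request shape at a `/v1/chat/completions` route can sit behind a custom base URL. The helper below only builds the request and never sends it; the function name and values are illustrative, not part of DeerMes:

```python
# Sketch of the OpenAI-style chat request shape that "OpenAI-compatible"
# gateways (LiteLLM, vLLM, etc.) accept. Hypothetical helper for illustration.
import json

def build_chat_request(base_url: str, model: str, prompt: str) -> tuple[str, bytes]:
    """Return the URL and JSON body for an OpenAI-style chat completion."""
    url = base_url.rstrip("/") + "/v1/chat/completions"
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return url, body

# Example: a local gateway on port 8000 (address is illustrative).
url, body = build_chat_request("http://127.0.0.1:8000", "my-model", "hello")
```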
Permission profiles are stored in `deermes.permissions.json`. The runtime loads the `default_profile` unless you pass `--permission-profile` or switch profiles inside the TUI with `/profile`.
Each profile can define:
- `read_roots`: paths the agent can read without leaving the sandbox.
- `write_roots`: paths the agent can write inside.
- `allow_shell`: whether shell access is enabled at all.
- `allowed_commands`: the shell command allowlist. Use `"*"` to allow any command.
- `approval_required_for`: actions that require interactive approval.
Supported approval tokens are:
- `read`
- `read_outside_roots`
- `write`
- `write_outside_roots`
- `shell`
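One plausible way a runtime could consult these tokens (a hypothetical helper, not DeerMes's actual code): an action runs unprompted unless its token, or its `*_outside_roots` variant when the target escapes the sandbox roots, appears in `approval_required_for`:

```python
# Hypothetical sketch (not DeerMes's actual code) of approval gating:
# an action proceeds silently unless its token is listed in the profile's
# approval_required_for; *_outside_roots variants apply only when the
# target path escapes the sandbox roots.
def needs_approval(profile: dict, action: str, outside_roots: bool = False) -> bool:
    required = set(profile.get("approval_required_for", []))
    if action in required:
        return True
    return outside_roots and f"{action}_outside_roots" in required

# Example profile: shell always gated, writes gated only outside the roots.
profile = {"approval_required_for": ["shell", "write_outside_roots"]}
```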
The config supports `{project_root}` and `{home}` path placeholders.

You can add, remove, or rename profiles freely as long as `default_profile` points to an existing entry.
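A possible `deermes.permissions.json` layout tying these pieces together. The field names and placeholders come from this document, but the exact top-level schema (such as a `profiles` map) and the sample values are assumptions:

```json
{
  "default_profile": "dev",
  "profiles": {
    "dev": {
      "read_roots": ["{project_root}", "{home}/notes"],
      "write_roots": ["{project_root}"],
      "allow_shell": true,
      "allowed_commands": ["git", "ls", "rg"],
      "approval_required_for": ["shell", "write_outside_roots"]
    }
  }
}
```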
This scaffold intentionally separates orchestration from learning state. That is the core fusion between DeerFlow and Hermes:
- DeerFlow contributes explicit workflow decomposition.
- Hermes contributes persistent, reusable agent context.
The next implementation step after this scaffold should be to tighten researcher stopping criteria and tool selection inside the DeerFlow path.
DeerMes is released under the MIT License. See `LICENSE` and `THIRD_PARTY_NOTICES.md` for project licensing and upstream attribution.