Agentic team orchestration via GitHub.
Loreli is an MCP server that coordinates AI agents using GitHub's native collaboration primitives — issues for tasks, pull requests for deliverables, discussions for plans, comments for communication, and reviews for quality gates. The name combines Lore (creating project lore through GitHub artifacts) and CLI (exposed as a command-line tool via MCP).
This guide walks you through setting up Loreli from scratch — from creating a GitHub token to watching your first agent team work on an issue.
Before installing Loreli, make sure you have the following:
| Requirement | How to check | Install |
|---|---|---|
| macOS or Linux | `uname -s` | Windows is not supported |
| Node.js >= 24 | `node --version` | `nvm install 24` or `brew install node@24` |
| tmux | `tmux -V` | `brew install tmux` (macOS) or `apt install tmux` (Linux) |
| At least one agent backend | See table below | Depends on provider |
tmux is required for all agent backends (Claude, Codex, Cursor). If tmux is missing, loreli mcp exits immediately with install instructions.
Loreli supports macOS and Linux only. Windows is not supported.
Agent backend CLIs — install one or more depending on which AI providers you want to use:
| Backend | Binary | Install | Provider |
|---|---|---|---|
| Claude | `claude` | Anthropic CLI | Anthropic |
| Codex | `codex` | OpenAI Codex CLI | OpenAI |
| Cursor | `cursor-agent` | Cursor Agent | Multi-provider |
Loreli auto-discovers which backends are available on your PATH at startup.
Loreli needs a GitHub Personal Access Token (PAT) with permission to manage issues, pull requests, discussions, labels, and repository contents.
Create a classic PAT:
- Go to github.com/settings/tokens
- Click Generate new token > Generate new token (classic)
- Give it a descriptive name (e.g. `loreli-orchestration`)
- Set an expiration (or "No expiration" for long-running setups)
- Under Select scopes, check `repo` — this single scope covers everything Loreli needs: issues, PRs, labels, discussions, reviews, repo contents, and merge
- Click Generate token and copy it immediately — GitHub will not show it again
Set the token in your environment:
```bash
export GITHUB_TOKEN=ghp_your_token_here
```

To persist it, add the export to your shell profile (`~/.zshrc`, `~/.bashrc`) or create a `.env` file in the project root. Never commit tokens to version control.
🔐 Secret handling — Personal access tokens carry the same blast radius as your GitHub account. Keep them in environment variables or a secret manager, scope them to the minimum permissions, and rotate them regularly. See GitHub’s “Keeping your API credentials secure” guidance for the official checklist.
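If you want to fail fast before wiring up any client, a small preflight can confirm the token is present and shaped like a GitHub PAT. This sketch is illustrative and not part of Loreli: `isLikelyPat` and the length bounds are assumptions; classic tokens start with `ghp_` and fine-grained ones with `github_pat_`.

```typescript
// Hypothetical preflight (not part of Loreli): check that GITHUB_TOKEN is set
// and matches the known PAT prefixes before starting the MCP server.
function isLikelyPat(token: string | undefined): boolean {
  if (!token) return false;
  // ghp_ = classic PAT, github_pat_ = fine-grained PAT; lengths are approximate.
  return /^(ghp_[A-Za-z0-9]{36,}|github_pat_[A-Za-z0-9_]{22,})$/.test(token);
}

const token = process.env.GITHUB_TOKEN;
console.log(isLikelyPat(token) ? "token looks OK" : "token missing or malformed");
```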
Using a fine-grained token instead
Fine-grained PATs work but require enabling each permission explicitly. Go to github.com/settings/personal-access-tokens/new and grant Read and Write access for:
- Contents — reading/writing repo files
- Issues — creating/managing issues and comments
- Pull requests — creating PRs, requesting reviews, merging
- Discussions — creating plan discussions and comments
- Metadata — read-only (required by GitHub for all fine-grained tokens)
Set the Resource owner and Repository access to match the repos you plan to orchestrate.
Run the command below to install the loreli CLI globally so your MCP client can start the server; the install should complete without npm errors.
```bash
npm install -g loreli
```

Hold off on running loreli commands until you've completed Step 4, because the CLI relies on the MCP server entry being configured.
Loreli runs as an MCP server over stdio. Your IDE or MCP client needs a config entry that tells it how to start Loreli. Add this to your global (user-level) MCP settings so the token stays out of project files.
Before editing any config file:
- Export `GITHUB_TOKEN` in the shell or source it from a local `.env` so the value never lands in Git history.
- Use your client's environment forwarding rather than pasting literals. Cursor and VS Code expand `${env:NAME}` placeholders inside `mcpServers` blocks, while Claude's `.mcp.json` honors `${NAME}` tokens per the Cursor MCP docs, VS Code variable reference, and Claude Code MCP guide.
- Keep repo-level `loreli.yml` and workspace configs secret-free; only user-level config or wrapper scripts should reference tokens.
Cursor / VS Code — open Settings > search MCP > Edit in settings.json, or add directly to your user-level ~/.cursor/mcp.json:
```json
{
  "mcpServers": {
    "loreli": {
      "command": "npx",
      "args": ["loreli", "mcp"],
      "env": {
        "GITHUB_TOKEN": "${env:GITHUB_TOKEN}"
      }
    }
  }
}
```

Cursor and VS Code both expand `${env:NAME}` placeholders, so this entry reads the token you already exported (`export GITHUB_TOKEN=...`) without storing it in the config. Cursor additionally supports an `envFile` attribute if you prefer pointing at a dedicated `.env` file that lives outside your repos (see the Cursor MCP docs for the full schema).
Claude Code (CLI) — add to your user-level config at ~/.claude/.mcp.json:
```json
{
  "mcpServers": {
    "loreli": {
      "command": "npx",
      "args": ["loreli", "mcp"],
      "env": {
        "GITHUB_TOKEN": "${GITHUB_TOKEN}"
      }
    }
  }
}
```

The env block keeps the token scoped to Loreli's process, and Claude expands `${VAR}` placeholders at load time per the Claude Code MCP guide, so you only need to manage the token in your shell. Alternatively, if `GITHUB_TOKEN` is already exported in your shell profile, you can omit the env block entirely — Loreli reads `process.env.GITHUB_TOKEN` at startup.
Claude Desktop
Add to your Claude Desktop config file:
- macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
```json
{
  "mcpServers": {
    "loreli": {
      "command": "npx",
      "args": ["loreli", "mcp"],
      "env": {
        "GITHUB_TOKEN": "${GITHUB_TOKEN}"
      }
    }
  }
}
```

Claude Desktop shares the same `.mcp.json` schema as the CLI, so `${GITHUB_TOKEN}` resolves against the environment variable available when the desktop app launches. Keep the variable in your login shell or macOS Keychain-backed launch script instead of copying the literal token into this file.
Codex CLI
Add to ~/.codex/config.toml (user-level):
```toml
[mcp_servers.loreli]
command = "npx"
args = ["loreli", "mcp"]
env_vars = ["GITHUB_TOKEN"]
```

Codex only forwards a short built-in whitelist of environment variables to STDIO MCP servers; anything sensitive like `GITHUB_TOKEN` must be explicitly listed in `env_vars` so the CLI copies it from your shell into the Loreli process without writing the value to disk. Once you add the line above and export `GITHUB_TOKEN` in your shell, Codex will pass it through on launch per the Codex config reference and docs/example-config.
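The whitelist behavior amounts to filtering the parent environment down to the listed names. The sketch below is a simplified model, not Codex's actual code, and the `BUILT_IN` set is invented for illustration:

```typescript
// Simplified model of env_vars whitelisting: only listed names (plus a small
// built-in set) cross from the parent shell into the child MCP process.
const BUILT_IN = ["HOME", "PATH", "USER"]; // illustrative, not Codex's real list

function childEnv(
  parent: Record<string, string | undefined>,
  envVars: string[],
): Record<string, string> {
  const out: Record<string, string> = {};
  for (const name of [...BUILT_IN, ...envVars]) {
    const value = parent[name];
    if (value !== undefined) out[name] = value; // copied, never written to disk
  }
  return out;
}
```

With `env_vars = ["GITHUB_TOKEN"]`, the token is copied through while unlisted secrets stay behind.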
To confirm your MCP configuration is discoverable by the CLI, run the commands below in a shell where GITHUB_TOKEN is set. This matters because the CLI resolves tool calls through the MCP server entry, and without it even loreli --version and loreli tools list will fail. You should see the version string and a list of available tools.
```bash
loreli --version
loreli tools list   # should print all available MCP tools
```

Before Loreli can orchestrate work on a repo, three things need to be in place: GitHub Discussions enabled, a "Loreli" discussion category, and a start run.
Step 1: Enable GitHub Discussions
- Go to your repository on GitHub
- Navigate to Settings > General
- Scroll down to the Features section
- Check the Discussions checkbox
Step 2: Create the "Loreli" discussion category
Loreli uses a dedicated discussion category for planning. GitHub's API does not support creating discussion categories, so this is the one manual step that cannot be automated.
- Go to your repository's Discussions tab
- Click the pencil icon next to Categories in the left sidebar
- Click New category
- Fill in the form:
  - Category name: `Loreli` (exact spelling, case-sensitive)
  - Description: `Loreli agent orchestration plans`
  - Discussion Format: Open-ended discussion
- Click Create
Step 3: Start the repository
With your MCP client connected, call the start tool:
```js
start({ repo: "owner/repo-name" })
```

Or from the CLI:

```bash
loreli tools start --repo owner/repo-name
```

Start is idempotent — safe to re-run at any time. It discovers existing files and only creates what's missing. Here's what it does:
| Action | Details |
|---|---|
| Scaffolds templates | PR template, issue templates, loreli.yml config |
| Configures MCP | Adds loreli server entry to .mcp.json, .cursor/mcp.json, .codex/config.toml |
| Creates labels | loreli, loreli:planner, loreli:action, loreli:reviewer, loreli:approved, loreli:changes-requested, plus per-provider labels |
| Adds dependency | Ensures loreli is in devDependencies of package.json |
| Updates .gitignore | Adds rules for .loreli/, .claude/, .cursor/hooks.json |
| Starts reactor | Begins the orchestration loop that watches for work |
After start, you'll see a summary of the session, detected backends, review strategy, and any files that were scaffolded.
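The idempotency guarantee boils down to "discover first, create only what's missing." Here is a toy in-memory model of that behavior (the real start tool operates on the repository and MCP configs, not a Map; the template contents are invented):

```typescript
// Toy model of idempotent scaffolding: a second run creates nothing new and
// never overwrites files that already exist.
type Repo = Map<string, string>;

const TEMPLATES: Record<string, string> = {
  "loreli.yml": "theme: transformers\n",
  ".github/PULL_REQUEST_TEMPLATE.md": "<!-- PR template -->\n",
};

function scaffold(repo: Repo): string[] {
  const created: string[] = [];
  for (const [path, body] of Object.entries(TEMPLATES)) {
    if (!repo.has(path)) {  // discover existing files first
      repo.set(path, body); // only create what's missing
      created.push(path);
    }
  }
  return created;
}
```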
With everything set up, here's the quickest path to seeing Loreli in action:
Create a labeled issue on your repository with the loreli label. The issue body should describe a task — for example:
> Title: Add a hello world endpoint
>
> Body: Create a simple `/hello` endpoint that returns `{ "message": "Hello, world!" }`. Add a basic test.
>
> Label: `loreli`
Watch the system react. The reactor loop (every 60 seconds) detects the unclaimed issue, spawns an action agent, and assigns it. Use team_status to monitor progress:
team_status()
The action agent claims the issue, creates a branch, implements the work, and opens a PR. Loreli then auto-spawns a reviewer. In dual-side environments, review is cross-provider (yin vs yang). In single-side environments, Loreli falls back to same-provider review with distinct action/reviewer identities. After approval, the PR is either auto-merged or handed to human reviewers (depending on your loreli.yml configuration).
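Conceptually, each reactor tick is a pure decision over observed GitHub state: which `loreli`-labeled issues are unclaimed, and how much spawn capacity remains. The sketch below is a simplification — real claim detection uses marker comments, and capacity comes from `workflows.action.maxAgents`:

```typescript
// Simplified reactor tick: return the issue numbers that should get a new
// action agent, bounded by remaining agent capacity.
interface Issue {
  number: number;
  labels: string[];
  claimedBy?: string; // set once an agent posts a claim comment
}

function tick(issues: Issue[], activeAgents: number, maxAgents: number): number[] {
  const unclaimed = issues.filter(
    (i) => i.labels.includes("loreli") && !i.claimedBy,
  );
  const capacity = Math.max(0, maxAgents - activeAgents);
  return unclaimed.slice(0, capacity).map((i) => i.number);
}
```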
For planning-driven work, use start_planning with an objective instead — Loreli will create plan discussions, run adversarial review, and promote approved plans into issues that enter the same reactive loop.
This README is intentionally focused on operating Loreli as a consumer.
For internal architecture, subpackage import docs (loreli/*), and contributor/development workflows, see packages/README.md.
For topology-aware end-to-end testing (pnpm e2e, pnpm e2e:single, pnpm e2e:multi), see e2e/README.md.
The start tool launches the reactor loop automatically. Once started, the system is fully reactive — agents are spawned on demand when work appears on GitHub and reaped when idle. Any machine running a Loreli MCP is part of the distributed system.
```mermaid
sequenceDiagram
    participant User as User / Parent Agent
    participant L as Loreli MCP
    participant Orch as Orchestrator
    participant A as Action Agent
    participant R as Review Agent
    participant GH as GitHub

    User->>L: start(repo)
    L->>GH: discover + scaffold
    Note over Orch: Reactor starts (60s tick)
    User->>GH: Create issue with loreli label
    Note over Orch: Tick detects unclaimed issue
    Orch->>A: Auto-spawn action agent
    A->>GH: Claim issue + create PR
    Orch->>R: Auto-spawn reviewer (opposing side when available)
    R->>GH: Review PR (adversarial)
    A->>GH: Address feedback
    R->>GH: Approve + sign off
    A->>GH: Sign off
    alt auto-merge
        Orch->>GH: Merge PR
    else HITL
        L->>GH: Request human review
        Orch->>Orch: Kill agents
    end
    Note over Orch: Reap idle agents
    Note over Orch: Next tick — repeat
```
- Start — Point Loreli at a GitHub repo. It discovers existing files, scaffolds any missing ones, and starts the reactor loop. This is idempotent — safe to re-run. Starting the MCP means joining the distributed system.
- Reactive dispatch — The reactor tick (every 60s) checks for unclaimed `loreli`-labeled issues. When work is found and no agents are available, an action agent is auto-spawned. Humans or agents can call `add_agent` for additional capacity.
- Action — Action agents claim issues (first-comment-wins), create git worktree branches, implement work, and open PRs.
- Review — Reviewers are auto-enlisted when PRs appear. With dual-side capability, Loreli enforces cross-provider review. With single-side capability, Loreli allows same-provider review using a distinct reviewer identity.
- Human In The Loop (HITL) (optional) — If `reviewers` are configured in `loreli.yml`, PRs are handed off to human reviewers after agent approval. Agents are shut down while awaiting human decision.
- Merge — Approval from an eligible reviewer triggers auto-merge, or a human merges manually.
- Reap — When no open issues or PRs remain, idle agents are killed. The cycle repeats on the next tick if new work arrives.
Planning is an explicit step — call start_planning(objective) with planner agents to create plan discussions. Plans are reviewed, revised, and promoted to issues, which then enter the reactive dispatch loop above.
When an agent's process exits — whether from completion, crash, or forced kill — Loreli captures the terminal output before destroying the tmux pane. These snapshots are written to the session's log directory:
```
~/.loreli/sessions/<sessionId>/logs/
  optimus-0.death.log
  megatron-0.death.log
```
All interactive agents have remain-on-exit set on their tmux pane automatically, so output survives after the process exits. The orchestrator calls snapshot() before agent.stop() in every exit path (reconcile, kill, shutdown).
Death snapshots are cleaned up by Storage.prune() alongside other session data when the session expires (default: 12 hours, configurable via cleanup.retention).
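The retention rule is a plain timestamp comparison. Here is a sketch of the pruning predicate — Storage.prune() works on the real `~/.loreli/sessions/` tree, and the field names below are illustrative:

```typescript
// Sessions (and their death logs) become prunable once idle longer than the
// retention window — 12 hours by default, cleanup.retention to override.
interface Session {
  id: string;
  lastActiveMs: number;
}

const DEFAULT_RETENTION_MS = 12 * 60 * 60 * 1000;

function prunable(
  sessions: Session[],
  nowMs: number,
  retentionMs = DEFAULT_RETENTION_MS,
): string[] {
  return sessions
    .filter((s) => nowMs - s.lastActiveMs > retentionMs)
    .map((s) => s.id);
}
```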
Loreli supports multiple agent backends. The BackendRegistry auto-discovers which are available at startup, including available models for backends that support runtime discovery.
| Backend | CLI Binary | Provider | Model Discovery |
|---|---|---|---|
| Claude | `claude` | Anthropic | Proxy listing (`/v1/models` or `/models`) when `ANTHROPIC_BASE_URL` is configured; otherwise static defaults |
| Cursor | `cursor-agent` | Multi-provider | `--list-models` with auto tier classification |
| Codex | `codex` | OpenAI | Proxy listing (`/v1/models` or `/models`) when `OPENAI_BASE_URL` is configured; otherwise static defaults |
Model aliases (fast, balanced, powerful) resolve through: config override > runtime discovery > static defaults > pass-through. See packages/agent/README.md for the full resolution chain and LiteLLM/proxy override docs.
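That chain is a first-match-wins lookup. As a sketch — the layer names mirror the sentence above, and the real resolver lives in packages/agent:

```typescript
// Resolve a model alias: config override > runtime discovery > static
// defaults > pass-through (unknown aliases are treated as literal model names).
function resolveModel(
  alias: string,
  configOverride: Record<string, string>,
  discovered: Record<string, string>,
  staticDefaults: Record<string, string>,
): string {
  return (
    configOverride[alias] ??
    discovered[alias] ??
    staticDefaults[alias] ??
    alias
  );
}
```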
Loreli derives review strategy from detected side capability at runtime:
- Dual-side (yin + yang detected): cross-provider review and merge gating.
- Single-side (only yin or only yang detected): fresh-instance same-provider review with distinct identities.
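The derivation is a two-way switch on detected sides. A minimal sketch under the yin/yang naming above:

```typescript
// Derive review strategy from which sides were detected at startup.
type Side = "yin" | "yang";

function reviewStrategy(detected: Set<Side>): "cross-provider" | "same-provider" {
  // Both factions available → adversarial cross-provider review;
  // one faction only → fresh-instance same-provider review.
  return detected.has("yin") && detected.has("yang")
    ? "cross-provider"
    : "same-provider";
}
```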
Each target repository can have a loreli.yml in its root. If absent, start scaffolds a default one with all options documented.
```yaml
# loreli.yml — key settings
theme: transformers        # string or list of themes (randomized per work item)
# theme:                   # use a list to randomize:
#   - transformers
#   - pokemon
#   - marvel

reviewers: []              # GitHub usernames; empty = auto-merge

merge:
  method: squash           # squash | merge | rebase
  hitl: false              # false = auto-merge, true = human reviewers
  base: loreli             # agents PR against this branch, not main

pr:
  validation:
    command: npm test      # default pre-PR command; blocks pr/create on failure
  selfReview:
    enabled: true          # default: pr/create requires preview + confirm=true

model: balanced            # fast | balanced | powerful (global fallback)

labels:
  track: true              # enable provider/model label tracking
  extra: []                # additional labels applied to all loreli items

timeouts:
  stall: "10m"             # human-readable before agent is considered stalled
  shutdown: "1m"           # graceful shutdown before kill
  poll: "2s"               # orchestrator poll interval
  rapidDeath: "15s"        # startup crash detection window
  proxyDiscovery: "5s"     # HTTP timeout for proxy model discovery

nudge: true                # enable tier-1 stall nudge messages

watch:
  interval: "60s"          # reactor tick interval
  maxRounds: 7             # max review rounds before escalation

trace:
  enabled: true            # include agent trace blocks in PR bodies/reviews
  includeOutput: true      # include captured terminal output in trace
  maxOutputChars: 8000     # truncate output beyond this limit

agents:
  disallowedTools:         # CLI tools denied in agent workspaces
    - gh
    - curl

workflows:                 # per-role model, scaling, trace, prompt
  action:
    model: balanced
    maxAgents: 3
    # prompt: .loreli/action.md   # optional custom prompt per role
  reviewer:
    model: balanced
    maxAgents: 2
    trace:
      enabled: true
      maxOutputChars: 4000
  risk:
    model: fast
    maxAgents: 3
    skip: false
    trace:
      enabled: true
      maxOutputChars: 2000
  planner:
    model: powerful
    maxAgents: 1
    trace:
      enabled: true
      maxOutputChars: 4000
```

`pr.validation.command` and `pr.selfReview.enabled` harden PR creation quality gates and are enabled by default:

- `pr.validation.command` defaults to `npm test`; customize it per repository when needed.
- `pr.selfReview.enabled` defaults to `true`; set it to `false` only when explicitly opting out.
Config values are resolved through four layers (highest priority first):
```mermaid
flowchart LR
    BP["Start Params"] --> YML["loreli.yml"]
    YML --> ENV["Env Vars"]
    ENV --> DEF["Built-in Defaults"]
```
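The merge behaves like a first-layer-wins fold over the four sources. A sketch with illustrative field names — the actual resolver is documented in packages/config/README.md:

```typescript
// Layered config: start params > loreli.yml > env vars > built-in defaults.
type Layer = Record<string, unknown>;

function resolveConfig(...layers: Layer[]): Layer {
  const out: Layer = {};
  for (const layer of layers) {
    for (const [key, value] of Object.entries(layer)) {
      if (!(key in out)) out[key] = value; // earlier (higher-priority) layer wins
    }
  }
  return out;
}

const config = resolveConfig(
  { theme: "marvel" },                     // start params
  { theme: "pokemon", nudge: true },       // loreli.yml
  {},                                      // env vars
  { theme: "transformers", nudge: false }, // built-in defaults
);
```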
See packages/config/README.md for the full schema and API reference.
Loreli supports redirecting all agent work to a dedicated branch, keeping main untouched until a human is ready to promote.
```
main ─────────────────────────────────────── human review ──▶ main
 │                                                 ▲
 └──▶ loreli ──▶ agent PRs merge here ─────────────┘
        │   ▲                  ▲
        ├── agent-0/issue-1 ───┘ (auto-merge)
        └── agent-1/issue-2 ───┘ (auto-merge)
```
Set merge.base in loreli.yml to enable:
```yaml
merge:
  base: loreli   # agents PR against this branch, not main
```

The scaffolding template already defaults to `loreli`. Repositories that omit `merge.base` or explicitly set it to `main` retain the current behavior — agents PR directly against `main`.
When enabled, the setting flows through every system boundary: workspace resets branch from origin/loreli instead of origin/main, action agents open PRs targeting loreli, reviewer workspaces sync loreli alongside the PR head, and the relay auto-PR path uses the configured base. A single human-created PR from loreli → main rolls up all agent work for final review.
See packages/config/README.md for the full technical details including resolution order and affected code paths.
Projects can inject custom instructions into agent prompts by setting workflows.{role}.prompt in loreli.yml. Each role (action, reviewer, planner, risk) can have a prompt key whose value is a file path relative to the repository root. Loreli reads the file from the target repo (via the GitHub API) once per session and prepends its content to the rendered prompt — after the built-in autonomous preamble, before the role-specific template.
This mechanism addresses a common need: different repositories have different coding standards, architectural constraints, or domain knowledge that agents should follow. Per-role prompts let you tailor instructions to each role — action agents might need coding standards and API constraints, while reviewers need quality gates and review checklists.
Create prompt files in your repository — for example under .loreli/:
```markdown
<!-- .loreli/action.md -->
<project-rules>
- This project uses the internal Design System v3 — import components from `@acme/ds3`.
- All API calls must go through the gateway at `https://api.internal.acme.com`.
- Never use inline styles; use CSS modules exclusively.
- Database migrations require a paired rollback script in `db/rollback/`.
</project-rules>
```

```markdown
<!-- .loreli/review.md -->
<review-standards>
- Every PR must have at least 90% test coverage on changed files.
- Flag any direct database queries — all access must go through the ORM.
- Reject PRs that introduce new `any` types in TypeScript.
</review-standards>
```

Then reference them in `loreli.yml` under `workflows`:
```yaml
workflows:
  action:
    prompt: .loreli/action.md
  reviewer:
    prompt: .loreli/review.md
  planner:
    prompt: .loreli/planner.md
  risk:
    prompt: .loreli/risk.md   # optional — omit roles that don't need custom prompts
```

After the next start(), each agent role receives only its own custom instructions at the top of its prompt, before the role-specific template content. Roles without a configured prompt file are unaffected.
The custom prompt flows through the same rendering pipeline as the autonomous preamble — Workflow.render() and Workflow.renderFrom() resolve workflows.{role}.prompt from config and prepend the content automatically. The injection order is:
- Autonomous preamble — built-in headless operation directives (always present)
- Custom prompt — role-specific project instructions from `workflows.{role}.prompt` (when configured)
- Role template — the action/planner/reviewer prompt
The file is read once from GitHub when the first prompt for that role is rendered and cached for the remainder of the session. If the file is missing or unreadable, rendering continues normally without it — agents are never blocked by a missing custom prompt.
The custom prompt is static text — it is not processed through Mustache templates. Tailor each role's prompt to its responsibilities:
- Action prompts: coding standards, architecture constraints, API endpoints, forbidden patterns, required tooling, domain terminology
- Reviewer prompts: quality gates, review checklists, coverage requirements, security policies, patterns to flag
- Planner prompts: planning methodology, decomposition guidelines, estimation rules, scope constraints
- Risk prompts: threat models, compliance requirements, security baselines, risk tolerance thresholds
Wrap content in descriptive XML tags (e.g. <project-rules>, <review-standards>) so agents can clearly identify the boundary between project instructions and role instructions.
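The injection order reduces to ordered concatenation. A sketch — the real pipeline is Workflow.render(), which also caches the file read and applies Mustache to the role template:

```typescript
// Assemble an agent prompt: preamble, then optional custom prompt, then the
// role template. A missing custom prompt never blocks rendering.
function assemblePrompt(
  preamble: string,
  roleTemplate: string,
  customPrompt?: string, // contents of workflows.{role}.prompt, if configured
): string {
  const parts = [preamble];
  if (customPrompt) parts.push(customPrompt);
  parts.push(roleTemplate);
  return parts.join("\n\n");
}
```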
When reviewers is non-empty or merge.hitl is true, Loreli activates Human In The Loop (HITL). Loreli intentionally diverges from the Harness Engineering article's non-blocking philosophy: HITL provides a configurable safety net for teams that want human approval before merge, while hitl: false enables fully autonomous agent merges when confidence is high.
```mermaid
sequenceDiagram
    participant Agents as Agent Team
    participant GH as GitHub PR
    participant Orch as Orchestrator
    participant Human as Human Reviewer

    Agents->>GH: Open PR + review + approve
    Orch->>GH: Request review + assign humans
    Orch->>GH: Post summary @tagging reviewers
    Orch->>Orch: Kill agents
    Note over GH: awaiting_hitl
    Human->>GH: Merge OR comment
    alt human merges
        Note over GH: Done
    else human comments
        Orch->>Orch: Spawn fresh agent
        Agents->>GH: Address feedback
        Orch->>GH: Re-request review
    end
```
Loreli pairs agents from opposing AI providers for adversarial review. OpenAI agents create work that Anthropic agents review, and vice versa. Themed identities make agent teams visually distinct — each theme has a yang faction (mapped to OpenAI) and a yin faction (mapped to Anthropic).
| Theme | Yang (OpenAI) | Yin (Anthropic) | Council |
|---|---|---|---|
| Transformers | Autobots | Decepticons | The Allspark |
| Pokemon | Fire types | Water types | Professor Oak |
| Marvel | Avengers | X-Men | S.H.I.E.L.D. |
| Digimon | Vaccine | Virus | Yggdrasil |
| Star Wars | Jedi | Sith | The Force |
| Lord of the Rings | Fellowship | Mordor | The Valar |
| Dragon Ball | Z Fighters | Villains | Grand Zeno |
| Avatar | Benders | Fire Nation | Raava |
| Zelda | Hyrule | Shadow | Hylia |
Each theme also has a council — a central authority that transcends the yang/yin split. The council identity is used for orchestrator-level actions like proof-of-life requests, where messages should come from the system itself rather than a faction.
When theme is set to a list of themes in loreli.yml, Loreli picks one at random for each work item. All agents within that work item (action + reviewer, planner + reviewer) share the same theme so antagonist pairs remain coherent: Autobots always face Decepticons, never Team Rocket.
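Coherence only requires that every agent on a work item resolve the same theme. One way to sketch that is a deterministic pick keyed by the work item number — the modulo scheme here is an illustration, not Loreli's actual selection code:

```typescript
// Deterministic per-work-item theme selection: both the action agent and its
// reviewer derive the theme from the same issue number, so antagonist pairs
// always land in one theme.
function themeFor(themes: string[], workItemNumber: number): string {
  return themes[workItemNumber % themes.length];
}

const themes = ["transformers", "pokemon", "marvel"];
```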
When cross-provider review isn't possible, the system falls back to single-side fresh-instance review while keeping action and reviewer identities distinct.
Multiple Loreli MCP instances can run against the same repository simultaneously. GitHub is the single source of truth — there are no local coordination files or locks. The proof-of-life system prevents duplicate work when an orchestrator restarts or crashes.
When orchestrator A crashes while its agents hold claims on issues or PRs, orchestrator B (started later or running in parallel) sees those claims but has no local record of the agents. Without coordination, B would either:
- Evict immediately — releasing claims and causing duplicate work if A's agents are still alive
- Wait forever — never reclaiming work from genuinely dead agents
Loreli uses an on-demand request-response protocol via GitHub comments to verify agent liveness across orchestrators. No constant heartbeat — requests are only posted when a foreign claim is detected.
```mermaid
sequenceDiagram
    participant B as Orchestrator B
    participant GH as GitHub
    participant A as Orchestrator A

    B->>GH: Detects claim by agent unknown to B
    Note over B: Check recent GitHub activity first
    alt recent activity found
        Note over B: Skip — agent is probably alive
    else no recent activity
        B->>GH: Post themed proof-of-life request (council identity)
        Note over A: Reactor tick detects request
        A->>A: Run health(agent) check
        A->>GH: Post themed alive response (agent identity)
        B->>GH: Read alive response on next tick
        alt alive response found
            Note over B: Skip — agent confirmed alive
        else no response within timeout
            B->>GH: Release claim, dispatch new agent
        end
    end
```
Both request and response comments are themed. Requests use the theme's council identity — a central authority that transcends the yang/yin faction split (e.g. "The Allspark" for Transformers, "Professor Oak" for Pokemon). Responses use the target agent's own faction identity. Machine-readable markers are embedded alongside the visible themed text.
Before evicting a foreign agent's claim, the detecting workflow runs four checks in order, falling back to posting a fresh request:

- Local removal check — Was this agent killed by our own stall detection? If yes, evict immediately (no network call needed).
- Own request check — Is there a pending proof-of-life request from this orchestrator (matched by `sessionId`)? If yes, check for a response or wait for the timeout.
- Any request check — Is there a request from a different orchestrator? If it has a valid response, the agent is alive. If it expired, post a fresh request from this orchestrator instead of evicting — the agent deserves a chance to respond to the new owner.
- Recent activity check — Has anyone commented within the `proofOfLife.timeout` window? If yes, the agent is likely alive.
- Post request — No activity and no pending request → post a new proof-of-life request and wait until the next reactor tick.
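The ordered checks can be condensed into one decision function that returns the first action that applies. The state fields below are invented for illustration; the real workflow reads them from GitHub comments and local orchestrator state:

```typescript
// First-match decision over a foreign claim, mirroring the check order above.
interface ClaimState {
  killedLocally: boolean;      // removed by our own stall detection
  aliveResponseSeen: boolean;  // a valid proof-of-life response exists
  ownRequestPending: boolean;  // this orchestrator already posted a request
  requestExpired: boolean;     // a pending request timed out unanswered
  msSinceActivity: number;     // time since the agent's last comment
  timeoutMs: number;           // proofOfLife.timeout
}

type Decision = "evict" | "skip" | "wait" | "post-request";

function decide(c: ClaimState): Decision {
  if (c.killedLocally) return "evict";                // local removal check
  if (c.aliveResponseSeen) return "skip";             // valid response → alive
  if (c.ownRequestPending)                            // own request check
    return c.requestExpired ? "evict" : "wait";
  if (c.requestExpired) return "post-request";        // foreign request expired → re-ask
  if (c.msSinceActivity < c.timeoutMs) return "skip"; // recent activity check
  return "post-request";                              // otherwise request proof of life
}
```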
Every eviction posts a review-release (or release for action claims) marker comment. This is critical for preventing the hydrate-evict loop: without it, hydrate() would re-discover the old claim comment on the next tick, re-add it to the in-memory map, and trigger eviction again — forever. The hydrate() step checks for release markers after claims and skips them.
The health() method on the orchestrator evaluates multiple signals when responding to a proof-of-life request:
| Signal | Check | Result |
|---|---|---|
| Process | Is the tmux pane alive? | unhealthy if dead |
| State | Is the agent dormant? | unhealthy if dormant |
| Activity | Last orchestrator interaction within stall timeout? | unhealthy if stale |
| Output | Captured terminal output length | Included for diagnostics |
| Key | Default | Description |
|---|---|---|
| `proofOfLife.timeout` | `5m` | How long to wait for an alive response before evicting |
```yaml
# loreli.yml
proofOfLife:
  timeout: 5m   # time window for proof-of-life responses
```

Every agent-created artifact (issue, PR, or discussion) gets labeled for tracking and attribution:
| Label | Example | Description |
|---|---|---|
| `loreli` | `loreli` | Marks all Loreli-managed items |
| `loreli:<provider>` | `loreli:anthropic` | AI provider that created the artifact |
| `model:<name>` | `model:claude-sonnet-4` | Human-readable model name |
| `loreli:<role>` | `loreli:action` | Agent role |
| `loreli:approved` | `loreli:approved` | Plan or PR approved |
| `loreli:changes-requested` | `loreli:changes-requested` | Revision required |
Start creates base labels (loreli, loreli:planner, loreli:action, loreli:reviewer, etc.), and add_agent ensures model-specific labels exist. Set labels.track: false in loreli.yml to disable label tracking. Use labels.extra to add custom labels to all agent artifacts.
Loreli uses HTML comment markers (<!-- loreli:TYPE key="value" -->) to embed machine-readable metadata in GitHub content without affecting the visible presentation. Marker types include claim, signoff, release, gate, agent, signature, review-event, feedback, promotion, and trace/trace-end. This decouples workflow state from human-visible text — markers survive edits to the visible body and are used by the orchestrator to detect claims, sign-offs, and HITL transitions. The trace paired markers wrap collapsible agent reasoning and output blocks in PR bodies — visible to humans on GitHub but automatically stripped before reaching reviewer prompts or the read tool. See packages/marker/README.md for the full API.
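A marker such as `<!-- loreli:claim agent="optimus-0" -->` can be located with a small regex. The sketch below approximates the idea; the real parser in packages/marker handles more marker types and edge cases:

```typescript
// Extract machine-readable markers of the form
// <!-- loreli:TYPE key="value" ... --> from a GitHub comment body.
interface Marker {
  type: string;
  attrs: Record<string, string>;
}

function parseMarkers(body: string): Marker[] {
  const markers: Marker[] = [];
  const re = /<!--\s*loreli:([\w-]+)((?:\s+[\w-]+="[^"]*")*)\s*-->/g;
  for (const match of body.matchAll(re)) {
    const attrs: Record<string, string> = {};
    for (const [, key, value] of match[2].matchAll(/([\w-]+)="([^"]*)"/g)) {
      attrs[key] = value;
    }
    markers.push({ type: match[1], attrs });
  }
  return markers;
}
```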
Loreli compounds lessons across review cycles. When a reviewer requests changes — on a PR or a plan discussion — the feedback is classified into a category (naming, architecture, testing, documentation, performance, security) and tagged with a loreli:feedback marker. The source attribute distinguishes PR reviews (source="pr") from plan verdicts (source="plan").
When a category accumulates enough markers across PRs and discussions (configurable via feedback.threshold, default 5), the knowledge reactor proposes a promotion via a GitHub Discussion with a structured template. The template pre-selects a promotion target based on feedback source — discussion-dominated patterns suggest the planner prompt, PR-dominated patterns suggest the action prompt, and mixed patterns suggest AGENTS.md. Humans make the final selection from all available targets (AGENTS.md, .loreli/planner.md, .loreli/review.md, .loreli/action.md, .loreli/risk.md), then close the discussion to trigger an action issue that an agent implements.
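The threshold and target pre-selection can be sketched as two small functions. The 5-marker default comes from `feedback.threshold`; the majority rule is a simplification of the template's heuristics:

```typescript
// Should a category's accumulated feedback trigger a promotion proposal?
function shouldPromote(markerCount: number, threshold = 5): boolean {
  return markerCount >= threshold; // feedback.threshold, default 5
}

// Pre-select a promotion target from the source mix of the markers.
function promotionTarget(prMarkers: number, planMarkers: number): string {
  if (planMarkers > prMarkers) return ".loreli/planner.md"; // discussion-dominated
  if (prMarkers > planMarkers) return ".loreli/action.md";  // PR-dominated
  return "AGENTS.md";                                       // mixed
}
```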
This creates a ratchet effect — each review cycle makes all future agent work better. See packages/knowledge/README.md for the full API and PDR-007 for the design rationale.
| Tool | Description |
|---|---|
| `start` | Initialize repo — discover/scaffold files, MCP configs, labels, load config, create session |
| `environment` | Report tmux, backends, providers, review strategy |
| `add_agent` | Spawn agent with identity, backend, model, role |
| `agents` | Query agent team — list all agents or check health of one/all |
| `start_planning` | Activate planners with an objective, start reactor |
| `start_work` | Begin claim-work-review cycle |
| `team_status` | Dashboard of issues, PRs, agents, review loops, rate limits |
| `hitl` | Hand PR to human reviewers (Human In The Loop), shut down agents |
| `watch` | Check HITL PR for human feedback |
| `stop` | Stop an agent — shutdown (graceful) or kill (forced) |
These tools are called by agents running in tmux panes via their .mcp.json connection back to Loreli. All context (repo, identity, role, task) is resolved from the agent's session — zero identifier parameters, eliminating hallucination vectors.
| Tool | Description |
|---|---|
| `plan` | Manage plan discussions — create, revise, verdict, escalate |
| `pr` | Manage pull requests — create (with auto-commit/push + trace), review (with trace). Agents include reasoning to document their approach; terminal output and token usage are captured automatically. |
| `comment` | Post comments on current work item, or claim an issue |
| `read` | Read any issue, PR, or discussion by number with optional comments |
| `context` | Resolve code context — blame a line to its PR/issue/discussion chain, history of a file, search across artifacts, patterns for recurring review feedback |
Copy .env.example to .env and fill in the values. Variables set in the shell always take precedence.
| Variable | Required | Description |
|---|---|---|
| `GITHUB_TOKEN` | Yes | GitHub personal access token with `repo` scope |
| `LORELI_HOME` | No | Override `~/.loreli/` session storage path |
| `LORELI_LOG_LEVEL` | No | Log level: error, warn, info (default), debug |
| `LORELI_TEST_REPO` | No | GitHub repo for integration tests (owner/name format) |
| `LORELI_SESSION` | No | Set automatically for agent MCP servers — session ID |
| `LORELI_AGENT` | No | Set automatically for agent MCP servers — agent name |
| `LORELI_REPO` | No | Set automatically for agent MCP servers — target repo |
See packages/config/README.md for the full env-to-config mapping and loadEnv() API.
- Contributing guide: CONTRIBUTING.md
- Code of Conduct: CODE_OF_CONDUCT.md
- Security policy: SECURITY.md
- Changelog: CHANGELOG.md
MIT