Agent Forest is an orchestration framework for parallel agent investigation. It leverages the power of many specialized LLM agents to provide multi-perspective research, architecture reviews, risk discovery, and product strategy analysis.
Instead of a single-pass answer, Agent Forest coordinates a "forest" of 4 to 32 agents to explore a problem space from diverse angles, with a central "synthesizer" model (your current conversation) integrating the results.
- Parallel Investigation: Run up to 32 agents concurrently to slice through complex research tasks.
- Diverse Perspectives: Use a library of personas (Evidence Hunter, Risk Auditor, Systems Thinker, Contrarian, etc.) to ensure no stone is left unturned.
- Flexible Orchestration: Choose between dynamic inline agent definitions or persistent, reusable presets.
- Strict Synthesis: Maintains a clear boundary between agent reports (external) and final synthesis (local), preventing "hallucinated consensus."
- OpenAI-Compatible: Works with any API provider following the OpenAI chat completion standard.
- Live Progress Logs: Optional `--progress` output shows completed, running, pending, and failed agents in real time while preserving JSON output on `stdout`.
- Persona Visibility: Before launch, the skill reports which personas or agent viewpoints were selected for the run.
- Large-Output Safety: In `auto` stdout mode, oversized JSON results are saved to a temp file and replaced with a compact stdout summary plus file path.
- Agent-Led Research: When the external runtime supports web search or retrieval, the forest is expected to find its own evidence rather than only reading a source pack pre-collected by the launcher.
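On "OpenAI-Compatible": any endpoint that accepts the standard chat-completions request shape should work. As a minimal sketch of that shape (the URL, model name, and API key below are placeholders, not Agent Forest defaults):

```python
import json
import urllib.request

# Standard OpenAI-style chat-completions request body. The endpoint,
# model, and key are placeholders -- point them at your own provider.
body = {
    "model": "your-model-name",
    "messages": [{"role": "user", "content": "ping"}],
}
req = urllib.request.Request(
    "https://your-provider.example/v1/chat/completions",
    data=json.dumps(body).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer your-api-key",
    },
)
# urllib.request.urlopen(req) would send the request; any provider that
# answers with the usual {"choices": [{"message": ...}]} shape is compatible.
```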
Compare the depth and structure of a standard single-pass response against an orchestrated multi-agent "forest" investigation.
- `agents/`: Logic for agent behavior and persona management.
- `scripts/`: CLI tools for running and validating the forest.
- `assets/`: Configuration examples and agent presets.
- `references/`: Detailed documentation on configuration and payload schemas.
- `tests/`: Suite for validating framework logic.
- Python 3.8+
- An API key for an OpenAI-compatible provider (e.g., OpenAI, Anthropic via proxy, Grok, etc.)
- Clone the repository:

  ```bash
  git clone https://github.com/parallized/agent-forest.git
  cd agent-forest
  ```

- Install for Codex, Claude Code, or both:

  ```bash
  ./install.sh
  ```

  Target a single platform if you want:

  ```bash
  ./install.sh --target codex
  ./install.sh --target claude
  ```

  On Windows PowerShell:

  ```powershell
  .\install.ps1
  .\install.ps1 --target codex
  .\install.ps1 --target claude
  ```
- Set up your environment:

  ```bash
  export AGENT_FOREST_API_KEY="your-api-key-here"
  ```
- Update the installed config:
  - Codex: `~/.codex/skills/agent-forest/assets/agent-forest.config.json`
  - Claude Code: `~/.claude/skills/agent-forest/assets/agent-forest.config.json`

  If you reinstall over an existing copy, use `--force`.
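The authoritative config schema lives in `references/`. As a rough, hypothetical sketch only — assuming the fields simply mirror the `configure` flags (`--api-base`, `--model`, `--api-key`), which this README does not confirm — it might look like:

```json
{
  "api_base": "https://your-provider.example/v1/chat/completions",
  "model": "your-model-name",
  "api_key": "your-api-key"
}
```

Check the schema documentation in `references/` before editing by hand.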
- Invoke the skill:
  - Codex: let it auto-discover the installed skill, or reference `$agent-forest`
  - Claude Code: invoke `/agent-forest` or let Claude load it when relevant
- Configure the provider from conversation or CLI:

  ```bash
  python ~/.codex/skills/agent-forest/scripts/agent_forest.py configure \
    --config ~/.codex/skills/agent-forest/assets/agent-forest.config.json \
    --api-base https://ai.huan666.de/v1/chat/completions \
    --model grok-4.20-expert \
    --api-key your-api-key
  ```
Start with a real run. The executor will read agent-forest.config.json when present and fall back to the sibling example config when it is not, so you can try the workflow before doing extra setup work:
```bash
python ~/.codex/skills/agent-forest/scripts/agent_forest.py run \
  --config ~/.codex/skills/agent-forest/assets/agent-forest.config.json \
  --payload-stdin \
  --stdout-mode auto \
  --preset research-squad-4 \
  --progress \
  --pretty <<'JSON'
{"task":"Review the decision from our default research squad."}
JSON
```

If something is actually missing, use the concrete error to decide the next step. For chat-driven runs, prefer `--payload-stdin` or `--payload-json` so you do not create throwaway payload files. Leave `--stdout-mode` on `auto` unless you explicitly need full JSON on stdout. When your agent runtime supports web search or retrieval, prefer a lean payload and let the forest gather distinct evidence on its own. `validate-config` is mainly for troubleshooting:
```bash
python ~/.codex/skills/agent-forest/scripts/agent_forest.py validate-config \
  --config ~/.codex/skills/agent-forest/assets/agent-forest.config.json
```

Use the `research-squad-4` preset for a balanced investigation:
```bash
python ~/.codex/skills/agent-forest/scripts/agent_forest.py run \
  --config ~/.codex/skills/agent-forest/assets/agent-forest.config.json \
  --payload-stdin \
  --stdout-mode auto \
  --preset research-squad-4 \
  --progress \
  --pretty <<'JSON'
{"task":"Review the decision from our default research squad."}
JSON
```

Verify the agent prompts without calling the API:
```bash
python ~/.codex/skills/agent-forest/scripts/agent_forest.py run \
  --config ~/.codex/skills/agent-forest/assets/agent-forest.config.json \
  --payload-json '{"task":"Review the decision from our default research squad."}' \
  --stdout-mode full \
  --dry-run \
  --pretty
```

`--progress` writes live status logs to stderr. In `--stdout-mode auto`, the executor keeps small JSON on stdout, but automatically saves oversized results to a temp file and prints a compact summary with the saved file path instead of letting the transport truncate the full report.
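Under `auto` mode a caller therefore cannot assume stdout is always the full JSON report. A small, hypothetical handling sketch — it assumes only that the oversized-result summary is not itself valid JSON, so adapt the fallback to what your executor version actually prints:

```python
import json

def load_forest_result(stdout_text: str):
    """Parse `run` stdout produced under --stdout-mode auto.

    Small results arrive inline as JSON; oversized results are replaced
    by a compact summary naming a saved temp file. The fallback branch
    below is an assumption, not a documented format.
    """
    try:
        return json.loads(stdout_text)  # inline JSON result
    except json.JSONDecodeError:
        # Summary text: surface it so the caller can open the saved file.
        return {"summary": stdout_text.strip()}
```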
A typical payload defines the task and the desired report structure:
```json
{
  "task": "Should we migrate our core database from PostgreSQL to a distributed NoSQL solution?",
  "research_mode": "agent-led",
  "context": "Internal facts only: we are currently handling 10k RPS with a 2TB dataset growing at 10% monthly.",
  "report_sections": ["Executive Summary", "Technical Feasibility", "Operational Risks", "Cost Analysis"]
}
```

For more advanced topics, see the detailed documentation in `references/`.
Built with 🌲 by the Agent Forest team.



