What is this? • Install • Why RLM? • Benchmarks • Development • Contributing
HALO (Hierarchical Agent Loop Optimization) is a methodology for building recursively self-improving agent harnesses using RLMs. This repository contains:
- Information on HALO methodology.
- A Python package that implements the core HALO-RLM engine. View on PyPI
- A demo project that shows how to build HALO loops for your agents using the Python package. View demo
- Benchmarking examples applying HALO to popular agent benchmarks. View AppWorld
The core HALO loop is surprisingly simple:
- Collect execution traces from your agent harness. HALO uses OpenTelemetry-compatible tracing (see the instrumentation sketch after this list).
- Feed the traces into the HALO-RLM engine.
- The engine decomposes the traces to identify common failure modes across harness executions and produces a report with its findings.
- This report is fed into a coding agent like Cursor or Claude Code to generate and apply a set of changes to your harness.
- The harness is then re-deployed, more traces are gathered, and the cycle repeats.
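As a minimal sketch of the first step, the snippet below instruments a toy harness with the `opentelemetry-sdk` Python package. The span names and attributes (`agent.run`, `agent.task`, `tool.name`) are illustrative assumptions, not a schema HALO requires:

```python
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

# Wire up a tracer provider; swap ConsoleSpanExporter for an OTLP exporter
# to ship spans to your collector.
provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)
tracer = trace.get_tracer("my-agent-harness")

def run_task(task: str) -> str:
    # One span per harness execution; each tool call gets a child span.
    with tracer.start_as_current_span("agent.run") as span:
        span.set_attribute("agent.task", task)
        with tracer.start_as_current_span("agent.tool_call") as tool_span:
            tool_span.set_attribute("tool.name", "search")  # your tool here
            result = "stub result"
        return result

print(run_task("find a flight to SFO"))
```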
HALO is great at finding issues in production agent deployments. We find that high-traffic environments tend to generate more data with higher variance across executions, creating exactly the kind of issues HALO excels at identifying.
A general-purpose harness like Claude Code is the wrong tool for trace analysis. This isn’t because the model isn’t smart, but because traces can get extremely long, and you need a specialized toolkit to make observations about systemic agentic behavior. In our testing, harnesses like Claude Code would often overfit to an error present in a single trace (or a handful of traces) rather than generalize to harness-level problems. This led us to create a specialized form of RLM.
Install the HALO engine + CLI from PyPI:

```bash
pip install halo-engine

# Verify installation
halo --help
```

Then:

- Integrate tracing into your agent harness
- Collect traces by running your agent
- Run the HALO engine (see the CLI docs for more info)

```bash
export OPENAI_API_KEY=...
halo path_to_your_traces.jsonl -p "Diagnose errors you find and suggest fixes"
```

We have provided a simple demo and an AppWorld demo.
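The CLI above consumes traces as a JSONL file. If your harness does not already emit one, the hypothetical sketch below writes spans in that shape; the field names (`trace_id`, `span_name`, `status`, `attributes`) are assumptions for illustration, not a documented HALO schema, so consult the CLI docs for the actual format:

```python
import json

# Hypothetical example spans; the keys below are assumed, NOT a
# documented HALO schema.
spans = [
    {
        "trace_id": "t1",
        "span_name": "agent.tool_call",
        "status": "error",
        "attributes": {"tool.name": "searchh", "error.type": "ToolNotFound"},
    },
]

# One JSON object per line, matching the file passed to `halo` above.
with open("path_to_your_traces.jsonl", "w") as f:
    for span in spans:
        f.write(json.dumps(span) + "\n")
```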
HALO consistently drives improvements on benchmarks solely by optimizing the harness.
We applied HALO to the AppWorld benchmark, a set of agentic tasks that assess the LLM’s ability to use multi-app services like Spotify, Venmo, file systems, and phone contacts. We tested HALO’s ability to improve harnesses for both Gemini 3 Flash and Sonnet 4.6. We iterated on the harness using the dev split, and then used the test_normal split as a proxy to verify that improvements did not come from overfitting.
The feedback from the HALO Engine surfaced harness failures such as hallucinated tool calls, redundant tool arguments, refusal loops, and semantic correctness issues. Each issue mapped cleanly to a direct prompt edit. We independently verified HALO’s claims against the source trace files, and the findings held up under scrutiny.
The peak improvements over baseline were substantial for both models:

| Model | Split | Baseline SGC | With HALO | Δ |
|---|---|---|---|---|
| Gemini 3 Flash | dev | 36.8% | 52.6% | +15.8 pts |
| Gemini 3 Flash | test_normal | 37.5% | 48.2% | +10.7 pts |
| Sonnet 4.6 | dev | 73.7% | 89.5% | +15.8 pts |
| Sonnet 4.6 | test_normal | 62.5% | 73.2% | +10.7 pts |

Local development against this repo uses uv for dependency management and go-task as the task runner.
```bash
git clone https://github.com/context-labs/HALO
cd HALO
task env:setup
```

`task env:setup` installs uv (if missing), syncs the venv from `uv.lock`, and configures the repo's git hooks. After that, the `halo` CLI is available via `uv run halo ...` (or activate `.venv/`).
Run `task --list` for the full list. The ones you'll use most:

| Task | What it does |
|---|---|
| `task check` | Run all pre-commit checks: pinned-versions, lint, format, typecheck, unit tests |
| `task check:fix` | Same, but auto-fix lint/format issues |
| `task test:unit` | Unit tests under `tests/unit/` |
| `task test:integration` | Integration tests under `tests/integration/` |
Contributions are welcome! Please feel free to submit a pull request.

