
Feature: Research skill file-based agent result handoff for Extensive/Deep tiers #25

@virtualian

Description

Type: Investigation — Feature or Bug?

Problem Statement

Claude Code agents operate in isolated context windows. When they complete, results flow back to the parent via `<task-notification>` messages. This raises fundamental architecture questions about how agents should communicate:

  1. Agent → Parent: Currently the full result text is injected verbatim. No truncation. No file-based handoff.
  2. Agent → Agent: No direct communication path exists. Agents can't coordinate or share findings.
  3. Parent → Agent: Only at spawn time via the prompt parameter. No mid-execution messaging (except via Agent Teams/Swarm which uses SendMessage).

Core question: What are the designed communication patterns for Claude Code agents, and where do PAI's assumptions diverge from the intended design?
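A minimal sketch of what a file-based handoff could look like, for concreteness. Everything here is hypothetical: the `hand_off_result` helper, the results directory, and the 500-char summary limit are illustrative choices, not part of any Claude Code API. The idea is that only a small pointer payload reaches the parent's context, while the full result lives on disk.

```python
import json
import tempfile
from pathlib import Path

# Hypothetical location and inline-summary budget; not Claude Code settings.
RESULTS_DIR = Path(tempfile.gettempdir()) / "agent-results"
SUMMARY_LIMIT = 500  # chars returned inline to the parent

def hand_off_result(agent_id: str, full_result: str) -> str:
    """Write the full result to a file and return a short pointer payload.

    Only the pointer (path + truncated summary + size) is injected into
    the parent's context; the parent reads the file on demand.
    """
    RESULTS_DIR.mkdir(parents=True, exist_ok=True)
    path = RESULTS_DIR / f"{agent_id}.md"
    path.write_text(full_result, encoding="utf-8")
    return json.dumps({
        "agent_id": agent_id,
        "result_file": str(path),
        "chars": len(full_result),
        "summary": full_result[:SUMMARY_LIMIT],
    })
```

Under this scheme a 15K-char result costs the parent a few hundred chars instead of the full dump, and the parent can choose which result files to actually read.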

Specific Observations

Agent Result Injection

From session 10625351 forensics:

```
L432  task-notification from agent a55fc65: 15,104 chars injected into parent
L206  task-notification from agent a4d43ba: 20,618 chars injected into parent
L194  task-notification from agent a45a5c5: 12,644 chars injected into parent
```

Each of these is the full agent output — not a summary. When 7+ agents complete in quick succession, the parent accumulates 100K+ chars of task-notifications alone.
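The arithmetic behind the 100K+ figure, treating the three observed notification sizes as a per-agent average:

```python
# Observed task-notification sizes from session 10625351 (chars)
observed = {"a55fc65": 15_104, "a4d43ba": 20_618, "a45a5c5": 12_644}

avg_per_agent = sum(observed.values()) / len(observed)  # ≈ 16,122 chars
projected_7_agents = 7 * avg_per_agent                  # ≈ 113K chars
```

At roughly 16K chars per notification, seven agents alone account for over 110K chars of context, before any of the parent's own conversation is counted.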

Auto-Compact Cascade Failure

When context overflows, Claude Code spawns auto-compact agents. But:

  • The compact agent receives the full conversation (including all bloated task-notifications) as its input
  • If the conversation is already too large, the compact agent's prompt is also too large
  • Result: 14 compact agents spawned, all failed with "Prompt is too long"
  • This is a feedback loop with no exit — the only escape was /exit
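A sketch of the circuit breaker the loop above is missing: cap consecutive compact failures instead of respawning indefinitely. The class and threshold are illustrative only; nothing like this is confirmed to exist in Claude Code (the observed behavior, 14 failed spawns, suggests it does not).

```python
class CompactCircuitBreaker:
    """Hypothetical guard: stop the auto-compact feedback loop after
    a fixed number of consecutive failures, surfacing an error to the
    user instead of respawning compact agents forever."""

    def __init__(self, max_failures: int = 3):
        self.max_failures = max_failures
        self.failures = 0

    def should_spawn(self) -> bool:
        # Refuse to spawn another compact agent once the cap is hit.
        return self.failures < self.max_failures

    def record_failure(self) -> None:
        self.failures += 1
```

With a cap of 3, the session above would have aborted after three "Prompt is too long" failures rather than fourteen, leaving an actionable error instead of requiring /exit.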

Agent Teams vs. Background Agents

PAI has two agent coordination mechanisms:

  • Task tool (run_in_background: true) — isolated agents, results via task-notification
  • Agent Teams (TeamCreate + SendMessage) — coordinated agents with shared task lists and direct messaging

Question: Should research workflows use Agent Teams instead of background Task agents to manage context better? Are Agent Teams designed for this use case?

Investigation Questions

  1. Is there a max_result_length or truncation parameter for Task tool results?
  2. Can agents write to shared files as their output mechanism instead of returning text?
  3. What is the designed flow for 5+ parallel agents that each produce 10K+ char results?
  4. Should PAI's Research skill use Agent Teams (SendMessage for incremental results) instead of background agents (full result dump)?
  5. Is the auto-compact cascade failure a known issue? Is there a circuit breaker?
  6. Does Claude Code have context usage metrics accessible to the model (e.g., "you're at 80% context")?
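For question 6, a trivial sketch of the kind of metric that would help. The 200K-token window size is an assumption for illustration, not a confirmed Claude Code limit, and no such metric is known to be exposed to the model today.

```python
CONTEXT_LIMIT_TOKENS = 200_000  # assumed window size; illustrative only

def context_usage(used_tokens: int, limit: int = CONTEXT_LIMIT_TOKENS) -> float:
    """Fraction of the context window consumed, clamped to [0.0, 1.0]."""
    return min(used_tokens / limit, 1.0)

# e.g. a spawner could stop launching new agents past 80% usage:
# if context_usage(used) > 0.8: defer_remaining_agents()
```

If such a metric were model-accessible, the parent could throttle agent spawning or switch to file-based handoff before overflow, rather than discovering the problem via a failed compact.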

Related

  • Depends on findings from: "Investigate: Agent result reporting overflows parent context"

Affected PAI Components

  • Research skill architecture
  • Algorithm's agent spawning in CAPABILITIES SELECTION
  • Delegation system (SYSTEM/THEDELEGATIONSYSTEM.md)
  • Agent system (SYSTEM/PAIAGENTSYSTEM.md)

Metadata

Labels

  • agents: Related to agent spawning, delegation, and communication
  • enhancement: New feature or request
