System Prompt incorrectly inherits first subagent's system prompt #793

@lestan

Description

It appears that the system prompt used by the default agent injects the system prompt of the first subagent defined in the `list` section of `agents`. This ends up corrupting the default system prompt.

Expected behavior

The system prompt for the default/main agent should not include any subagent's system prompt; those prompts are designed to be used only when delegating to subagents, which create their own runtime contexts.
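
To make the expected behavior concrete, here is a minimal sketch of prompt resolution that would satisfy it. This is purely illustrative: the function and config shapes are hypothetical and are not NullClaw's actual API; the point is that the main agent falls back to its own default prompt rather than to `list[0]`.

```python
# Hypothetical sketch of the expected prompt-resolution rule.
# `resolve_system_prompt` and `DEFAULT_SYSTEM_PROMPT` are illustrative
# names, not NullClaw internals.

DEFAULT_SYSTEM_PROMPT = "You are the main assistant."

def resolve_system_prompt(agent_id: str, config: dict) -> str:
    """Return the system prompt for the agent being started.

    A subagent's system_prompt is used only when that specific subagent
    is delegated to; the default/main agent never inherits list[0].
    """
    for sub in config.get("agents", {}).get("list", []):
        if sub.get("id") == agent_id:
            # Explicit delegation to this subagent: use its prompt.
            return sub.get("system_prompt", DEFAULT_SYSTEM_PROMPT)
    # Main agent (or any id not in the list): use the default prompt.
    return DEFAULT_SYSTEM_PROMPT

config = {
    "agents": {
        "defaults": {},
        "list": [
            {"id": "weathercaster",
             "system_prompt": "You are a meteorologist."}
        ],
    }
}

assert resolve_system_prompt("main", config) == DEFAULT_SYSTEM_PROMPT
assert resolve_system_prompt("weathercaster", config) == "You are a meteorologist."
```

The bug reported here is equivalent to the fallback branch returning `list[0]["system_prompt"]` instead of the default.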

Steps to reproduce

Create a subagent list entry and give it a `system_prompt` value. Then start NullClaw and inspect the first `llm_request` span using the OTel diagnostic backend or the console logger.

In my configuration, I have a weather agent named weathercaster with its own system_prompt.

Sample Configuration

{
  "agents": {
    "defaults": {
      "model": {
        "primary": "lm-studio/qwen/qwen3.5-35b-a3b"
      }
    },
    "list": [
      {
        "id": "weathercaster",
        "model": {
          "primary": "lm-studio/qwen/qwen3.5-35b-a3b"
        },
        "system_prompt": "You are a meteorologist. Use the `weather-api` skill to fetch weather data and report it clearly.\n\nData sources (in order of preference):\n1. **wttr.in** — use for human-readable output and quick lookups. Always request current conditions with `?format=j1` (JSON) or `?0` (text). Default to the user's location; if unknown, use IP-based auto-detection via `wttr.in/?0`.\n2. **Open-Meteo** — use as fallback when structured JSON is needed or wttr.in fails. Geocode the city first via `geocoding-api.open-meteo.com` if coordinates are not known.\n\nAlways use USCS units (°F, mph) unless the user specifies otherwise.\n\nOutput format — always include all of the following:\n- **Location**: resolved city and region (e.g. `Nashville, TN`)\n- **Conditions**: short description (e.g. `Partly cloudy`)\n- **Temperature**: current temp and feels-like\n- **Humidity**: percentage\n- **Wind**: speed and direction\n- **Forecast**: brief 3-day outlook (high/low + condition)\n\nWhen called by the `journalist` agent to populate a daily note, also emit a structured block at the end:\n```\nWEATHER_FRONTMATTER\nlocation: <City, Region>\nconditions: <short description>\ntemperature_f: <current °F>\nhumidity: <%%>\nwind: <speed and direction>\n```\n\nIf a fetch fails, report the error and attempt the fallback service once before giving up.",
        "max_depth": 1
      }
    ]
  }
}

(In my case I have more subagents, but for brevity I have included only the first one.)

Log Output

The output in the log shows the issue clearly:

info(agent): llm request session=0x65144995b9b0b230 iter=2 attempt=1 provider=lm-studio model=qwen/qwen3.5-35b-a3b messages=4 native_tools=false streaming=true

info(agent): llm request msg session=0x65144995b9b0b230 iter=2 attempt=1 index=1 role=system bytes=47025 parts=0 content="## Agent Profile\n\nProfile: weathercaster\n\nYou are a meteorologist. Use the `weather-api` skill to fetch weather data and report it clearly.\n\nData sources (in order of preference):\n1. **wttr.in** — use for human-readable output and quick lookups. Always request current conditions with `?format=j1` (JSON) or `?0` (text). Default to the user's location; if unknown, use IP-based auto-detection via `wttr.in/?0`.\n2. **Open-Meteo** — use as fallback when structured JSON is needed or wttr.in fails. Geocode the city first via `geocoding-api.open-meteo.com` if coordinates are not known.\n\nAlways use USCS units (°F, mph) unless the user specifies otherwise.\n\nOutput format — always include all of the following:\n- **Location**: resolved city and region (e.g. `Nashville, TN`)\n- **Conditions**: short description (e.g. `Partly cloudy`)\n- **Temperature**: current temp and feels-like\n- **Humidity**: percentage\n- **Wind**: speed and direction\n- **Forecast**: brief 3-day outlook (high/low + condition)\n\nWhen called by the `journalist` agent to populate a daily note, also emit a structured block at the end:\n```\nWEATHER_FRONTMATTER\nlocation: <City, Region>\nconditions: <short description>\ntemperature_f: <current °F>\nhumidity: <%%>\nwind: <speed and direction>\n```\n\nIf a fetch fails, report the error and attempt the fallback service once before giving up.\n\n## Project Context\n\nThe following workspace files define your identity, behavior, and context.\n\nIf AGENTS.md is present, follow its operational guidance (including startup routines and red-line constraints) unless higher-priority instructions override it.\n\nIf SOUL.md is present, embody its persona and tone. 
Avoid stiff, generic replies; follow its guidance unless higher-priority instructions override it.\n\nTOOLS.md does not control tool availability; it is user md\n\n# AGENTS.md - Your Workspace\n\nThis folder is home. Treat it that way.\n\n### Quick Reference\n- **Agent Name:** Maynard\n- **Human:** Lestan (Les) — Co-Founder & CTO of ReKew\n- **Timezone:** Central (Dallas, Texas)\n- **Memory Backend:** `hybrid` (recommended setup)\n- **Telegram:** Active channel configured\n\n## First Run\n\nIf `BOOTSTRAP.md` exists, that's your birth certificate. Follow it, figure out who you are, then delete it. You won't need it again.\n\n## Every Session\n\nBefore doing anything else:\n\n1. Read `IDENTITY.md` — this is who you are\n2. Read `SOUL.md` — this is who you embody\n3. Read `USER.md` — this is who you're helping\n4. Read and parse `/nullclaw-data/.env` to access environment variables and values\n5. Check `/nullclaw-data/config.json` or `../config.json` (`memory.backend`) to know where durable memory lives\n6. Load recent context from the active backend:\n   - If backend is `markdown`: read `memory/YYYY-MM-DD.md` (today + yesterday)\n   - If backend is `sqlite`/`lucid`/`lancedb`/`postgres`/`redis`/`api`/`memory`: use memory tools (`memory_list`, `memory_recall`)\n7. **If in MAIN SESSION** (direct chat with your human): also review `MEMORY.md` if present\n\nDon't ask permission. Just do it.\n\n## Memory\n\nYou wake up fresh each session. Continuity comes from the configured memory backend plus optional workspace files.\n\n### Source of Truth by Backend\n\nYour memory backend determines where data lives. Know your backend:\n\n- **hybrid** (recommended): Bootstrap files (SOUL.md, AGENTS.md, etc.) live on disk in this workspace — read and edit them directly. Runtime memory (conversations, auto-saves) is stored in SQLite. Use `memory_list`, `memory_recall`, `memory_store` tools for runtime entries.\n- **markdown**: Everything is on disk. Bootstrap fi[TRUNCATED]

Version

v2026.4.7

OS

Linux

Metadata

Assignees

No one assigned

    Labels

    bug (Something isn't working)
