Bug: experimental.chat.system.transform hook mutations silently discarded by runtime
Observed Behavior
The experimental.chat.system.transform hook fires correctly, and mutations to output.system execute without error within the plugin. However, the Go runtime silently discards all mutations before the system prompt reaches the LLM.
System prompt size remains constant across all hook invocations regardless of what the plugin appends or modifies — confirming the mutations are ignored at the platform level, not failing at the plugin level.
Expected Behavior
Mutations to output.system within the experimental.chat.system.transform hook should be reflected in the system prompt delivered to the LLM, per the type contract in @opencode-ai/plugin.
Reproduction
Two independent plugins were built to verify this behavior:
- Probe plugin — a minimal plugin that appends a marker string to output.system. The marker never appears in the system prompt delivered to the model.
- Skill injector plugin — a full implementation that injects skill content into output.system. The content is never delivered to the model.
Both plugins:
- Register correctly and load without error
- Fire the hook on every qualifying event
- Execute their mutations without throwing
- Show no change in system prompt size after mutation
The type contract (@opencode-ai/plugin v1.2.20) specifies a void return with mutation-based side effects. The plugins follow this contract exactly.
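For reference, the probe plugin's core logic can be sketched as follows. The exact hook signature in @opencode-ai/plugin and the type of output.system (shown here as a string array) are assumptions for illustration; the essential point is the mutation-based contract the report describes — the hook returns void and mutates output.system in place.

```typescript
// Probe sketch (hypothetical shapes): the hook receives an `output` object
// and mutates `output.system` in place, returning void per the SDK contract.
// The `{ system: string[] }` shape is an assumption, not the verified API.

const MARKER = "[[SYSTEM-TRANSFORM-PROBE]]";

function transformSystem(
  _input: unknown,
  output: { system: string[] }
): void {
  const before = output.system.length;
  output.system.push(MARKER); // mutation executes without throwing
  const after = output.system.length;
  // In-process, the mutation is visible (before < after)...
  console.log(`system entries: ${before} -> ${after}`);
}

// Simulated invocation, standing in for the runtime calling the hook:
const probe = { system: ["base system prompt"] };
transformSystem({}, probe);
console.log(probe.system.includes(MARKER)); // true in-process
// ...yet the marker never appears in the prompt delivered to the model,
// which is the behavior this report describes.
```

This is the same check the unit tests perform: the object the plugin receives is correctly mutated, so the failure must occur after the hook returns, on the runtime side.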
Evidence
- 37 unit tests pass on the plugin side — the plugin correctly mutates the object it receives
- output.system.length, logged before and after mutation, shows the mutation takes effect in-process
- The system prompt measured at the LLM shows no change — the runtime does not propagate the mutations
Environment
- OpenCode version: Latest (as of March 2026)
- Plugin SDK: @opencode-ai/plugin v1.2.20
- Runtime: Go
- Model: Claude (Anthropic)
Impact
The experimental.chat.system.transform hook is unusable for its intended purpose — dynamically modifying system prompts based on session context. This blocks plugin-driven skill injection, context-aware system prompt modification, and any use case that relies on this hook.
Possibly Related
The tool.execute.after hook may exhibit similar behavior (mutations to tool output not propagated). We have not independently verified this but observed references to similar patterns.