Problem
When AI agents (like Claude Code) need to process or transform the output of a swamp model method, they default to piping stdout through complex inline shell scripts (python -c, deno eval, etc.) rather than creating a proper extension model or adding a method to an existing one.
This leads to:
- Fragile shell-escaped code that breaks on special characters
- Logic that is untestable and unreusable
- Violations of the "extend, don't be clever" principle from CLAUDE.md
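The fragility is easy to reproduce: even a single apostrophe in the data breaks naive single-quote wrapping. A minimal, self-contained sketch (the data value is hypothetical):

```python
import subprocess

data = "Bob's repo"  # an apostrophe: common in real repo descriptions

# Anti-pattern: interpolating data into a single-quoted inline script.
# The apostrophe in the data terminates the shell's quoting early,
# leaving an unbalanced quote, so the command never even reaches Python.
cmd = f"python3 -c 'print(len(\"{data}\"))'"
result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
print(result.returncode)  # non-zero: shell syntax error
```

Escaping can be patched case by case, but every new payload shape reopens the problem, which is why the logic belongs in a real, testable method instead.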
Example
After running listUserRepos, the AI attempted to pipe JSON output through an inline python3 -c script with complex string processing to build a security report. This failed repeatedly due to shell escaping issues. The correct approach was to create a dedicated @bixu/github-security extension model with its own helpers and tests.
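For contrast, the kind of logic that kept failing inline is trivial once it lives in a plain function. A sketch of what a model helper might look like (the function name and input shape are hypothetical, not the actual @bixu/github-security API):

```python
import json

# A plain function over parsed JSON: unit-testable and reusable,
# with no shell escaping anywhere in the path.
def summarize_repos(repos):
    public = [r["name"] for r in repos if not r.get("private", False)]
    return {"total": len(repos), "public_repos": public}

repos = json.loads('[{"name": "api", "private": true}, {"name": "docs"}]')
report = summarize_repos(repos)
print(report)  # {'total': 2, 'public_repos': ['docs']}
```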
Proposed Solution
The swamp-extension-model skill and/or swamp-model skill should include stronger guidance for AI agents:
- When an agent needs to transform or aggregate data from a model method, it should create a new extension model (or add a method to an existing one) rather than processing stdout inline
- The skill should explicitly call out anti-patterns: inline python3 -c, deno eval, and jq pipelines for anything beyond trivial formatting
- The decision tree in the skill should include: "Need to process model output? -> Create an extension model method for it"
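For the cases the guidance would still allow (genuinely trivial formatting), it could also point at the escape-free way to stay inline: pass the script as a single argv element and feed the data over stdin, so no shell quoting is involved at all. A sketch, assuming the model method emits JSON on stdout:

```python
import json
import subprocess
import sys

payload = json.dumps({"description": "Bob's \"quoted\" repo"})

# Escape-proof even for awkward payloads: the script is one argv
# element (no shell parsing of it), and the data travels over stdin
# rather than being spliced into the command line.
script = "import json, sys; print(json.load(sys.stdin)['description'])"
result = subprocess.run(
    [sys.executable, "-c", script],
    input=payload, capture_output=True, text=True,
)
print(result.stdout.strip())  # Bob's "quoted" repo
```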
Scope
Changes would be needed in the swamp skill documentation (swamp-extension-model and swamp-model skills) to add guidance about when to create models vs. when inline processing is acceptable.