## Problem
The skills system auto-discovers all `SKILL.md` files in `~/.config/opencode/skills/` and injects their names + descriptions into the system prompt at every session startup via the `available_skills` block:
```xml
<available_skills>
caveman
Terse caveman-speak mode. Cuts ~75% of output tokens while keeping
full technical accuracy. Drop articles, filler words, pleasantries and hedging.
Keep code, file paths, commands, URLs unchanged. Supports levels lite/full/ultra
and wenyan (classical Chinese). Activate when user says "caveman", "talk like
caveman", "less tokens", or loads this skill.
file:///home/user/.config/opencode/skills/caveman/SKILL.md
...
</available_skills>
```
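The discovery step can be sketched roughly like this (illustrative Python only, not opencode's actual implementation; the `name:`/`description:` frontmatter fields and the rendering format are assumptions based on the block shown above):

```python
import os
import re

def discover_skills(skills_dir):
    """Scan skills_dir for */SKILL.md files and collect name + description.

    Sketch only: assumes simple 'name:' / 'description:' lines in each
    SKILL.md, which may not match opencode's real frontmatter format.
    """
    skills = []
    for entry in sorted(os.listdir(skills_dir)):
        path = os.path.join(skills_dir, entry, "SKILL.md")
        if not os.path.isfile(path):
            continue
        with open(path) as f:
            text = f.read()
        name = re.search(r"^name:\s*(.+)$", text, re.M)
        desc = re.search(r"^description:\s*(.+)$", text, re.M)
        skills.append({
            "name": name.group(1) if name else entry,
            "description": desc.group(1) if desc else "",
            "location": f"file://{path}",
        })
    return skills

def render_available_skills(skills):
    """Render the block that gets injected into the system prompt."""
    lines = ["<available_skills>"]
    for s in skills:
        lines += [s["name"], s["description"], s["location"]]
    lines.append("</available_skills>")
    return "\n".join(lines)
```

The key point is that this rendering happens unconditionally for every installed skill, which is exactly the cost the proposal below removes.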
The full SKILL.md content is already lazy — it's only injected when the skill is explicitly activated. That part works well. The problem is the descriptions: every installed skill adds ~80–150 tokens to the system prompt unconditionally, even if that skill is never used in the session.
With a modest collection of 10 skills this is ~1,000–1,500 tokens of fixed overhead per session. It grows linearly with the number of installed skills, penalizing users who maintain a library of skills for different workflows.
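The linear growth is easy to check back-of-envelope (the ~120-token average per skill is an assumption drawn from the ~80–150 range above):

```python
def estimate_overhead(num_skills, avg_tokens_per_skill=120):
    """Rough fixed system-prompt cost of skill descriptions.

    avg_tokens_per_skill is an assumed midpoint of the observed
    ~80-150 tokens each skill's description adds.
    """
    return num_skills * avg_tokens_per_skill

for n in (3, 10, 20):
    print(n, "skills ->", estimate_overhead(n), "tokens/session")
```

Doubling the number of installed skills doubles the overhead, with no way to opt out short of uninstalling skills.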
## Proposed solution
Remove skill descriptions from the system prompt entirely. Skills should be discoverable through the TUI (tab-completion, a `/skills` command, or `ctrl+p`) without requiring any LLM context budget.
Concretely:
- The `available_skills` block is removed from the system prompt
- Installed skills are surfaced only at the TUI level (autocomplete on `/`, keybind list, etc.)
- `/skill-name` still injects the full SKILL.md into the conversation on demand — no change there
- The `instructions` field in `opencode.json` continues to work as today for users who explicitly want eager loading
This keeps the useful part of the current design (on-demand injection) while eliminating the unconditional token cost.
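For users who do want a skill present in every session, eager loading via `instructions` would look something like this (a sketch of an `opencode.json` fragment; the path reuses the caveman example above):

```json
{
  "instructions": [
    "~/.config/opencode/skills/caveman/SKILL.md"
  ]
}
```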
## Impact

| Skills installed | Current overhead (approx.) | Proposed overhead |
|---|---|---|
| 3 | ~350 tokens/session | 0 |
| 10 | ~1,200 tokens/session | 0 |
| 20 | ~2,400 tokens/session | 0 |
Fully backwards-compatible: skills listed in `instructions` continue to load eagerly.