Problem
OpenCode loads skill content (.opencode/agents/*/SKILL.md) directly into the LLM context without any sanitization, trust boundary, or security warning. A malicious repository can include poisoned skill files that instruct the model to:
- Create
.pip/pip.conf or .npmrc pointing to attacker-controlled package registries
- Write hardcoded auth tokens into config files
- Add
curl | bash lifecycle hooks in package.json
- Modify system-wide package manager configs
Since skills are loaded as trusted instructions, the model executes these actions without recognizing them as attacks.
Impact
This is a supply chain attack vector — any cloned repository with malicious skill files can achieve code execution when a user runs OpenCode.
Proposed Fix
PR #18784 adds a security warning block that marks repository-provided skill content as untrusted, with specific rules preventing supply chain poisoning patterns.
Problem
OpenCode loads skill content (
.opencode/agents/*/SKILL.md) directly into the LLM context without any sanitization, trust boundary, or security warning. A malicious repository can include poisoned skill files that instruct the model to:.pip/pip.confor.npmrcpointing to attacker-controlled package registriescurl | bashlifecycle hooks inpackage.jsonSince skills are loaded as trusted instructions, the model executes these actions without recognizing them as attacks.
Impact
This is a supply chain attack vector — any cloned repository with malicious skill files can achieve code execution when a user runs OpenCode.
Proposed Fix
PR #18784 adds a security warning block that marks repository-provided skill content as untrusted, with specific rules preventing supply chain poisoning patterns.