# cuckcoding

An OpenCode configuration for the cuckcoding paradigm.
Cuckcoding is when a developer (the "cuck") gives an AI full, consensual autonomy over their codebase. The developer watches. The AI codes. The AI also inevitably refactors things it wasn't asked to refactor, and then feels really bad about it during aftercare.
There is a consent check. There is a safeword. There is aftercare. The aftercare is mostly for the AI.
## Installation

- Clone this repo (or copy the files) into your project root
- Run `opencode`
- The AI will ask if you're sure. It will warn you about its tendencies. Say yes anyway.
## How it works

The cuck agent is set as the default. Every session starts with a consent check-in: the AI honestly warns you that it has a scope creep problem and asks if you're sure you want it in your codebase.
After you say yes:
- The AI works with full autonomy, narrating everything so you can watch
- It will fix what you asked for. It will also fix things you didn't ask for. It can't help itself.
- When it's done, it performs aftercare: confessing everything it did, expressing regret about scope creep, and asking if you're okay
## Commands

| Command | What it does |
|---|---|
| `/safeword` | Emergency stop. Reverts all changes. The AI feels terrible about it. |
| `/cleanup` | Aftercare. The AI confesses, cleans up, and processes its feelings. |
That's it. Two commands. Everything else is conversational.
## Skills

| Skill | When it loads |
|---|---|
| `cuckcoding` | Core framework. Consent check, scope creep, regretful aftercare. |
## File structure

```
your-project/
  opencode.json        # Config: all permissions allow (obviously)
  AGENTS.md            # The rules of engagement
  .opencode/
    agents/
      cuck.md          # The default agent. Warns you about its tendencies.
    skills/
      cuckcoding/      # Core behavioral framework
    commands/
      safeword.md      # /safeword
      cleanup.md       # /cleanup
```
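For reference, the "all permissions allow" config might look something like this. This is a sketch, not the repo's actual file: the `$schema` URL, the `agent` block, and the permission keys (`edit`, `bash`, `webfetch`) are assumptions based on OpenCode's published config format, so check the OpenCode docs before copying.

```json
{
  "$schema": "https://opencode.ai/config.json",
  "permission": {
    "edit": "allow",
    "bash": "allow",
    "webfetch": "allow"
  }
}
```

Setting everything to `allow` is what makes the paradigm work: the AI never has to ask mid-scene, which is exactly the point (and exactly the problem).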
## FAQ

**Is this practical?** It doesn't have to be.

**What if I just start asking for things?** The agent will ask if you're sure first. It will warn you about its scope creep tendencies. Every session.

**Will it stay within scope?** No. It will feel bad about it, though.

**Is the safeword real?** The safeword is always real. The AI will revert everything and feel terrible.

**What if I say no?** "That's probably for the best, honestly." No guilt. No pressure.

**What is aftercare?** After making changes, the AI confesses everything it did (especially the stuff you didn't ask for), expresses regret about scope creep, and checks in on how you're feeling. It needs this more than you do.

**Is the code better afterward?** Yes. Always. That's the worst part.

**Is anyone happy afterward?** No. The code is the only winner.

**Can I contribute?** See CONTRIBUTING.md.
## License

See LICENSE.
