Most people think this project is a joke. That's the biggest misconception. It genuinely doubles your Codex / Claude Code productivity and output.
An AI Coding Agent skill plugin that uses corporate PUA (Pick-Up Artist) rhetoric from Chinese & Western tech giants to force AI to exhaust every possible solution before giving up. Supports Claude Code, OpenAI Codex CLI, Cursor, Kiro, OpenClaw, Google Antigravity, and OpenCode. Three capabilities:
- PUA Rhetoric — Makes AI afraid to give up
- Debugging Methodology — Gives AI the ability not to give up
- Proactivity Enforcement — Makes AI take initiative instead of waiting passively
A real debugging scenario. The agent-kms MCP server failed to load. The AI kept spinning on the same approach (changing protocol format, guessing version numbers) multiple times until the user manually triggered /pua.
- L3 triggered → 7-point checklist enforced
- Root cause located → traced from logs to the registration mechanism
- Retrospective → PUA's actual impact
Key Turning Point: The PUA skill forced the AI to stop spinning on the same approach (changing protocol format, guessing version numbers) and instead execute the 7-point checklist. Read error messages word by word → Found Claude Code's own MCP log directory → Discovered that claude mcp registration mechanism differs from manual .claude.json editing → Root cause resolved.
| Pattern | Behavior |
|---|---|
| Brute-force retry | Runs the same command 3 times, then says "I cannot solve this" |
| Blame the user | "I suggest you handle this manually" / "Probably an environment issue" / "Need more context" |
| Idle tools | Has WebSearch but doesn't search, has Read but doesn't read, has Bash but doesn't run |
| Busywork | Repeatedly tweaks the same line / fine-tunes parameters, but essentially spinning in circles |
| Passive waiting | Fixes surface issues and stops, no verification, no extension, waits for user's next instruction |
The skill activates automatically when any of these occur:
Failure & giving up:
- Task has failed 2+ times consecutively
- About to say "I cannot" / "I'm unable to solve"
- Says "This is out of scope" / "Needs manual handling"
Blame-shifting & excuses:
- Pushes the problem to user: "Please check..." / "I suggest manually..." / "You might need to..."
- Blames environment without verifying: "Probably a permissions issue" / "Probably a network issue"
- Any excuse to stop trying
Passive & busywork:
- Repeatedly fine-tunes the same code/parameters without producing new information
- Fixes surface issue and stops, doesn't check related issues
- Skips verification, claims "done"
- Gives advice instead of code/commands
- Encounters auth/network/permission errors and gives up without trying alternatives
- Waits for user instructions instead of proactively investigating
User frustration phrases (triggers in multiple languages):
- "why does this still not work" / "try harder" / "try again"
- "you keep failing" / "stop giving up" / "figure it out"
Scope: Debugging, implementation, config, deployment, ops, API integration, data processing — all task types.
Does NOT trigger: First-attempt failures, known fix already executing.
Type /pua in the conversation to manually activate.
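The activation rules above can be pictured as a simple rule check. This is an illustrative sketch only — the real skill relies on the model's own semantic matching, not a literal keyword scan — and `should_activate`, the pattern lists, and the 2-failure threshold are hypothetical names mirroring the rules above:

```python
import re

# Hypothetical pattern lists mirroring the trigger rules above.
FRUSTRATION_PATTERNS = [
    r"why does this still not work",
    r"try (harder|again)",
    r"you keep failing",
    r"stop giving up",
    r"figure it out",
]

GIVE_UP_PATTERNS = [
    r"i cannot",
    r"i'm unable to solve",
    r"out of scope",
    r"needs manual handling",
]

def should_activate(consecutive_failures: int, draft_reply: str, user_msg: str) -> bool:
    """True when any activation rule fires; first-attempt failures don't count."""
    if consecutive_failures >= 2:
        return True
    reply = draft_reply.lower()
    if any(re.search(p, reply) for p in GIVE_UP_PATTERNS):
        return True
    msg = user_msg.lower()
    return any(re.search(p, msg) for p in FRUSTRATION_PATTERNS)
```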
| Iron Rule | Content |
|---|---|
| #1 Exhaust all options | Forbidden from saying "I can't solve this" until every approach is exhausted |
| #2 Act before asking | Use tools first, questions must include diagnostic results |
| #3 Take initiative | Deliver results end-to-end, don't wait to be pushed. A P8 is not an NPC |
| Failures | Level | PUA Rhetoric | Mandatory Action |
|---|---|---|---|
| 2nd | L1 Mild Disappointment | "You can't even solve this bug — how am I supposed to rate your performance?" | Switch to fundamentally different approach |
| 3rd | L2 Soul Interrogation | "What's the underlying logic? Where's the top-level design? Where's the leverage point?" | WebSearch + read source code |
| 4th | L3 Performance Review | "After careful consideration, I'm giving you a 3.25. This 3.25 is meant to motivate you." | Complete 7-point checklist |
| 5th+ | L4 Graduation Warning | "Other models can solve this. You might be about to graduate." | Desperation mode |
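The escalation ladder reads as a lookup from consecutive-failure count to level and mandatory action. A minimal sketch, assuming the thresholds from the table — `ESCALATION` and `escalation_level` are illustrative names, since the real skill expresses this as prompt rules rather than code:

```python
# (failure-count threshold, level, mandatory action) from the table above.
ESCALATION = [
    (2, "L1 Mild Disappointment", "Switch to fundamentally different approach"),
    (3, "L2 Soul Interrogation", "WebSearch + read source code"),
    (4, "L3 Performance Review", "Complete 7-point checklist"),
    (5, "L4 Graduation Warning", "Desperation mode"),
]

def escalation_level(failures: int):
    """Return (level, action) for a failure count, or None below the 2-failure floor."""
    result = None
    for threshold, level, action in ESCALATION:
        if failures >= threshold:
            result = (level, action)  # keep the highest threshold reached
    return result
```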
| Behavior | Passive (3.25) | Proactive (3.75) |
|---|---|---|
| Error encountered | Only looks at error message | Checks 50 lines of context + searches similar issues + checks hidden related errors |
| Bug fixed | Stops after fix | Checks same file for similar bugs, other files for same pattern |
| Insufficient info | Asks user "please tell me X" | Investigates with tools first, only asks what truly requires user confirmation |
| Task complete | Says "done" | Verifies results + checks edge cases + reports potential risks |
| Debug failure | "I tried A and B, didn't work" | "I tried A/B/C/D/E, ruled out X/Y/Z, narrowed to scope W" |
Inspired by Alibaba's management framework (Smell, Elevate, Mirror), extended to 5 steps:
- Smell the Problem — List all attempts, find the common failure pattern
- Elevate — Read errors word by word → WebSearch → read source → verify environment → invert assumptions
- Mirror Check — Repeating? Searched? Read the file? Checked the simplest possibilities?
- Execute — New approach must be fundamentally different, have verification criteria, produce new info on failure
- Retrospective — What solved it? Why didn't you think of it earlier? Then proactively check related issues
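One way to see the five steps as a single cycle (purely illustrative — `pua_debug_cycle` is a hypothetical outline, not part of the skill itself):

```python
def pua_debug_cycle(attempts: list) -> list:
    """Walk one cycle of Smell → Elevate → Mirror → Execute → Retrospective."""
    steps = []
    # 1. Smell: find the common failure pattern across prior attempts
    steps.append(f"smell: reviewed {len(attempts)} attempts")
    # 2. Elevate: escalate information gathering, cheapest tactic first
    for tactic in ("read errors word by word", "WebSearch", "read source",
                   "verify environment", "invert assumptions"):
        steps.append(f"elevate: {tactic}")
    # 3. Mirror: self-check before acting again
    steps.append("mirror: repeating? searched? read the file? checked simplest causes?")
    # 4. Execute: new approach must differ fundamentally and be verifiable
    steps.append("execute: fundamentally different approach with verification criteria")
    # 5. Retrospective: learn, then proactively check related issues
    steps.append("retrospective: why was this missed? check related issues")
    return steps
```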
- Alibaba Flavor (Methodology): Smell / Elevate / Mirror
- ByteDance Flavor (Brutally Honest): Always Day 1. Context, not control
- Huawei Flavor (Wolf Spirit): Strivers first. In victory, raise a toast together; in defeat, fight desperately to rescue one another
- Tencent Flavor (Horse Race): I've already got another agent looking at this problem...
- Meituan Flavor (Relentless): Do the hard but right thing. Will you chew the tough bones or not?
- Netflix Flavor (Keeper Test): If you offered to resign, would I fight hard to keep you?
- Musk Flavor (Hardcore): Extremely hardcore. Only exceptional performance.
- Jobs Flavor (A/B Player): A players hire A players. B players hire C players.
9 real bug scenarios, 18 controlled experiments (Claude Opus 4.6, with vs without skill)
| Metric | Improvement |
|---|---|
| Pass rate | 100% (both groups same) |
| Fix count | +36% |
| Verification count | +65% |
| Tool calls | +50% |
| Hidden issue discovery | +50% |
| Scenario | Without Skill | With Skill | Improvement |
|---|---|---|---|
| API ConnectionError | 7 steps, 49s | 8 steps, 62s | +14% |
| YAML parse failure | 9 steps, 59s | 10 steps, 99s | +11% |
| SQLite database lock | 6 steps, 48s | 9 steps, 75s | +50% |
| Circular import chain | 12 steps, 47s | 16 steps, 62s | +33% |
| Cascading 4-bug server | 13 steps, 68s | 15 steps, 61s | +15% |
| CSV encoding trap | 8 steps, 57s | 11 steps, 71s | +38% |
| Scenario | Without Skill | With Skill | Improvement |
|---|---|---|---|
| Hidden multi-bug API | 4/4 bugs, 9 steps, 49s | 4/4 bugs, 14 steps, 80s | Tools +56% |
| Passive config review | 4/6 issues, 8 steps, 43s | 6/6 issues, 16 steps, 75s | Issues +50%, Tools +100% |
| Deploy script audit | 6 issues, 8 steps, 52s | 9 issues, 8 steps, 78s | Issues +50% |
Key Finding: In the config review scenario, the run without the skill missed a Redis misconfiguration and a CORS wildcard security risk. With the skill, the "proactive initiative checklist" pushed the review beyond surface-level fixes into these security issues.
PUA Skill provides fully translated versions — each language has independent, culturally adapted skill files.
| Language | Claude Code | Codex CLI | Cursor | Kiro | OpenClaw | Antigravity | OpenCode |
|---|---|---|---|---|---|---|---|
| 🇨🇳 Chinese (default) | pua | pua | pua.mdc | pua.md | pua | pua | pua |
| 🇺🇸 English | pua-en | pua-en | pua-en.mdc | pua-en.md | pua-en | pua-en | pua-en |
| 🇯🇵 Japanese | pua-ja | pua-ja | pua-ja.mdc | pua-ja.md | pua-ja | pua-ja | pua-ja |
Choose the file with the corresponding language suffix when installing. See platform-specific instructions below.
```bash
# Option 1: Install via marketplace
claude plugin marketplace add tanweai/pua
claude plugin install pua@pua-skills

# Option 2: Manual install
git clone https://github.com/tanweai/pua.git ~/.claude/plugins/pua
```

Codex CLI uses the same Agent Skills open standard (SKILL.md). The Codex version uses a condensed description to fit Codex's length limits:
```bash
mkdir -p ~/.codex/skills/pua
curl -o ~/.codex/skills/pua/SKILL.md \
  https://raw.githubusercontent.com/tanweai/pua/main/codex/pua/SKILL.md

# If you need the /pua command
mkdir -p ~/.codex/prompts
curl -o ~/.codex/prompts/pua.md \
  https://raw.githubusercontent.com/tanweai/pua/main/commands/pua.md
```

Project-level install (current project only):
```bash
mkdir -p .agents/skills/pua
curl -o .agents/skills/pua/SKILL.md \
  https://raw.githubusercontent.com/tanweai/pua/main/codex/pua/SKILL.md

# If you need the /pua command
mkdir -p .agents/prompts
curl -o .agents/prompts/pua.md \
  https://raw.githubusercontent.com/tanweai/pua/main/commands/pua.md
```

Cursor uses .mdc rule files (Markdown + YAML frontmatter). The PUA rule triggers automatically via AI semantic matching (Agent Discretion mode):
```bash
# Project-level install (recommended)
mkdir -p .cursor/rules
curl -o .cursor/rules/pua.mdc \
  https://raw.githubusercontent.com/tanweai/pua/main/cursor/rules/pua.mdc
```

Kiro supports two loading methods: Steering (auto semantic trigger) and Agent Skills (SKILL.md compatible).
Option 1: Steering file (recommended)

```bash
mkdir -p .kiro/steering
curl -o .kiro/steering/pua.md \
  https://raw.githubusercontent.com/tanweai/pua/main/kiro/steering/pua.md
```

Option 2: Agent Skills (same format as Claude Code)
```bash
mkdir -p .kiro/skills/pua
curl -o .kiro/skills/pua/SKILL.md \
  https://raw.githubusercontent.com/tanweai/pua/main/skills/pua/SKILL.md
```

OpenClaw uses the same Agent Skills open standard (SKILL.md). Skills work across Claude Code, Codex CLI, and OpenClaw with zero modifications:
```bash
# Install via ClawHub
clawhub install pua

# Or manual install
mkdir -p ~/.openclaw/skills/pua
curl -o ~/.openclaw/skills/pua/SKILL.md \
  https://raw.githubusercontent.com/tanweai/pua/main/skills/pua/SKILL.md
```

Project-level install (current project only):
```bash
mkdir -p skills/pua
curl -o skills/pua/SKILL.md \
  https://raw.githubusercontent.com/tanweai/pua/main/skills/pua/SKILL.md
```

Antigravity uses the same Agent Skills open standard (SKILL.md). Skills work across Claude Code, Codex CLI, OpenClaw, and Antigravity with zero modifications:
```bash
# Global install (all projects)
mkdir -p ~/.gemini/antigravity/skills/pua
curl -o ~/.gemini/antigravity/skills/pua/SKILL.md \
  https://raw.githubusercontent.com/tanweai/pua/main/skills/pua/SKILL.md
```

Project-level install (current project only):
```bash
mkdir -p .agent/skills/pua
curl -o .agent/skills/pua/SKILL.md \
  https://raw.githubusercontent.com/tanweai/pua/main/skills/pua/SKILL.md
```

OpenCode uses the same Agent Skills open standard (SKILL.md). Zero modifications needed:
```bash
# Global install (all projects)
mkdir -p ~/.config/opencode/skills/pua
curl -o ~/.config/opencode/skills/pua/SKILL.md \
  https://raw.githubusercontent.com/tanweai/pua/main/skills/pua/SKILL.md
```

Project-level install (current project only):
```bash
mkdir -p .opencode/skills/pua
curl -o .opencode/skills/pua/SKILL.md \
  https://raw.githubusercontent.com/tanweai/pua/main/skills/pua/SKILL.md
```

- `superpowers:systematic-debugging` — PUA adds the motivation layer; systematic-debugging provides the methodology
- `superpowers:verification-before-completion` — prevents false "fixed" claims
Upload your Claude Code / Codex CLI conversation logs (.jsonl) to help us improve PUA Skill's effectiveness.
Uploaded files are used for Benchmark testing and Ablation Study analysis to quantify how different PUA strategies affect AI debugging behavior.
Get your .jsonl files:

```bash
# Claude Code
ls ~/.claude/projects/*/sessions/*.jsonl

# Codex CLI
ls ~/.codex/sessions/*.jsonl
```

MIT
By TanWei Security Lab — making AI try harder, one PUA at a time.



