A clean-room database of 1,166 deep technical architectural patterns distilled from the source code of state-of-the-art AI coding assistants.
Whenever I am faced with an insurmountable engineering challenge, I ask myself the same question any reasonable professional would: What would my favorite superhero, Vin Diesel, do?
He would probably drive a 1970 Dodge Charger out of a moving cargo plane. Unfortunately, we are building AI agents, not stealing bank vaults in Rio. So the next best thing is to ask: What would Vin Claudel do?
This is not a database of generic blog prose. This is hard technical truth. Every insight contains exact millisecond constants, actual TypeScript interfaces, tool schemas, and raw code evidence extracted directly from a clean-room implementation of Anthropic's Claude Code.
The Use Case: This is your lifeline for when your AI agent gets stuck.
When your agent starts hallucinating facts, getting trapped in an infinite loop of failing tests, or refusing to background terminal processes correctly, you ask: "What would Vin Claudel do?"
You don't need to install anything permanently. Just run:

```
npx wwvcd "bash background timeout"
```

or

```
npx wwvcd "hallucination"
```

The CLI will instantly search the database and print out exactly how state-of-the-art agents structurally solve that specific edge case, including the verbatim ▶ Code Evidence blocks.
If you use OpenClaw, clone this repo into your skills folder (~/.openclaw/workspace/skills/wwvcd). When your agent gets stuck, literally just type in chat:
"You are hallucinating. Query WWVCD for 'verification' and tell me what Vin Claudel would do to fix this."
The previous iterations of this database contained generic, LLM-summarized prose. We nuked it.
We ran a new extraction pipeline directly over the claude-code-src specifications. The current database contains 1,166 high-signal findings (averaging 600+ chars each), featuring:
- Verbatim Code Evidence: 338+ findings include exact TypeScript `code_snippet` blocks.
- Precise Constants: No more "after a certain time." You get the exact `ASSISTANT_BLOCKING_BUDGET_MS` values.
- Deep Context: Every finding maps to the specific source file and architectural module it was extracted from.
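To make that concrete, here is a minimal sketch of what a finding record and a lookup over it might look like. The field names (`topic`, `codeSnippet`, `constants`, etc.) are illustrative assumptions, not the actual WWVCD schema:

```typescript
// Hypothetical shape of one finding record. Field names are
// assumptions for illustration — not the real WWVCD schema.
interface Finding {
  id: string;
  topic: string;                       // e.g. "bash background timeout"
  sourceFile: string;                  // file it was extracted from
  module: string;                      // architectural module
  insight: string;                     // distilled pattern text (600+ chars avg)
  codeSnippet?: string;                // verbatim TypeScript evidence, when present
  constants?: Record<string, number>;  // e.g. { ASSISTANT_BLOCKING_BUDGET_MS: ... }
}

// Toy query: case-insensitive substring match over topic and insight.
function search(db: Finding[], query: string): Finding[] {
  const q = query.toLowerCase();
  return db.filter(
    (f) => f.topic.toLowerCase().includes(q) || f.insight.toLowerCase().includes(q)
  );
}
```

The actual CLI does the searching for you; this is just to show the kind of structured, file-and-constant-level detail each finding carries.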
Under the patterns/ folder, we expanded the most critical findings into implementable strategies you can copy-paste into your own prompts today:
- `01-enumerate-rationalizations.md`: Explicitly name and forbid an LLM's internal excuses for hallucinating.
- `02-structural-evidence.md`: Force structural citations (no citation = automatic failure).
- `03-read-only-judge.md`: The architectural requirement of decoupling generation from evaluation.
- `04-explicit-faithfulness.md`: Overriding RLHF "helpfulness" by defining honesty as the highest priority.
Building reliable, autonomous AI agents is not about writing generic "You are a helpful assistant" prompts. It is about defensive, structural prompt engineering.
Advanced AI agents anticipate failure modes. They enumerate exact rationalizations the model will use to cheat or hallucinate, and they enforce strict, parseable evidence requirements for every claim or action.
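A minimal sketch of that second idea — rejecting any claim that lacks a machine-checkable citation. The `[path:line]` citation format here is an assumption chosen for illustration, not the format Claude Code actually enforces:

```typescript
// "No citation = automatic failure."
// The [src/file.ts:42] citation format is an illustrative assumption.
const CITATION = /\[[\w./-]+:\d+\]/;

interface Verdict {
  ok: boolean;
  reason: string;
}

// A judge that checks structure, not vibes: the claim either carries
// a parseable citation or it is rejected outright.
function judgeClaim(claim: string): Verdict {
  if (!CITATION.test(claim)) {
    return { ok: false, reason: "no structural citation — automatic failure" };
  }
  return { ok: true, reason: "citation present" };
}
```

The point is that the evidence requirement is enforced by a parser, not by asking the model nicely: an uncited claim fails deterministically, with no rationalization possible.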
Vin Claudel doesn't hallucinate. He executes. This repo is the playbook for doing exactly that. Because it's about family. And living life one token at a time.
