# stackguard

Pre-prompt policy enforcement for AI coding assistants.
Your team agreed last quarter that all new auth flows go through your shared auth wrapper. Then a developer opens their AI assistant and asks it to "implement token signing and refresh from scratch." The AI happily produces 200 lines of custom security code. By the time code review catches it three days later, the developer has shipped two more features on top and is deep in a different branch.
Same story when a prompt asks for a database driver that isn't on the approved list, or pulls in a utility library the team migrated off of last year. The rules exist. The AI doesn't know them. The developer forgot — or never read the doc in the first place.
stackguard catches the violation before the prompt reaches the AI. It compares each prompt to your engineering policy doc, flags explicit conflicts, and offers a compliant rewrite — all in about 1.5 seconds, with no infrastructure to set up.
```
              ┌────────────────────────┐
Developer ──▶ │ "add a DB connection"  │
              └───────────┬────────────┘
                          ▼
              ┌────────────────────────┐
              │       stackguard       │
              │  + policy.md + Haiku   │
              └───────────┬────────────┘
                          │
             ┌────────────┴────────────┐
             ▼                         ▼
       ✓ no conflict              ⚠ conflict
             │                         │
             │              ┌──────────┴──────────┐
             │              │ [P]roceed  [R]evise │
             │              │ [S]how     [C]ancel │
             │              └──────────┬──────────┘
             ▼                         ▼
     ┌───────────────┐        ┌────────────────┐
     │ Claude/Cursor │        │ revised prompt │
     │   /Copilot    │◀───────│  or override   │
     └───────────────┘        └────────────────┘
```
A clean prompt is invisible:
```
$ stackguard check "build a settings page where users can change their display name"
✓ stackguard: ok
```
A prompt that conflicts with your policy stops and shows you why:
```
$ stackguard check "add a MongoDB connection for user sessions"

⚠ stackguard: guideline conflict detected
──────────────────────────────────────────────────────────────────────

  "add a MongoDB connection for user sessions"

  Rule:  NEVER use MongoDB, DynamoDB, or any NoSQL store for primary data.
  Why:   The prompt names a database that the policy excludes.
  Level: HIGH confidence

  Suggested revision:
  ┌──────────────────────────────────────────────────────────┐
  │ Add a Postgres-backed user_sessions table accessed via   │
  │ the team's approved DB wrapper, with a parameterized     │
  │ query for lookups                                        │
  └──────────────────────────────────────────────────────────┘

[P]roceed anyway   [R]evise   [S]how policy   [C]ancel
> r
Use suggested revision? [Y]es / [N]o, type my own: y
✓ stackguard: ok
```
A possible-but-uncertain conflict surfaces as a soft note and lets the prompt through (see ADR-002):
```
$ stackguard check "add a date picker that handles timezones nicely"

ℹ stackguard: possible conflict (low confidence — passing through)
  "date picker that handles timezones" may conflict with "moment is prohibited…"
```
For more examples — including Python, Go, fintech, and a deliberately tiny two-person policy — see examples/.
## Quick start

```
npm install -g stackguard
cd your-project
stackguard init
export ANTHROPIC_API_KEY=sk-ant-...

# Direct check (good for CI / one-offs)
stackguard check "implement token signing from scratch"

# Wrap an AI CLI (catches one-shot invocations)
stackguard wrap -- claude "add a database connection"
```

## Claude Code hook

If your team uses Claude Code, install stackguard as a UserPromptSubmit
hook. Hooks fire on every prompt — including inside the interactive REPL —
so stackguard sees prompts that a shell alias would miss.
```
cd your-project
stackguard install-hook        # writes .claude/settings.json
git add .claude/settings.json  # commit so the rest of the team gets it
```

Now every prompt you submit to Claude Code in this project is checked first. A clean prompt passes through invisibly. A blocked prompt shows the violation in the Claude Code UI and waits for you to revise:
```
> add a MongoDB connection for user sessions

✗ stackguard: prompt blocked by engineering policy

  • "add a MongoDB connection for user sessions"
    Rule:  NEVER use MongoDB, DynamoDB, or any NoSQL store for primary data
    Why:   The prompt names a database that the policy excludes.
    Level: HIGH confidence

    Suggested revision:
      Add a Postgres-backed user_sessions table accessed via the team's
      approved DB wrapper, with a parameterized query for lookups

  Edit your prompt and submit again, or run `stackguard policy show`
  to read the full policy.
```
**Block mode vs warn mode:** in block mode the hook exits with code 2 and Claude Code refuses to send the prompt. In warn mode the violation is injected as a system reminder so the model can see it and respond accordingly, but the prompt still goes through. Set this in `stackguard.json`.
**Per-project vs global:** by default `install-hook` writes to the project-local `.claude/settings.json` so it's committable and team-wide. Pass `--global` to write to `~/.claude/settings.json` if you want stackguard running for every project on your machine. Projects without their own `stackguard.json` pass through silently in either case.
**Removal:** `stackguard install-hook --uninstall` (add `--global` to match).
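For reference, the UserPromptSubmit entry that lands in `.claude/settings.json` follows Claude Code's hooks schema and might look roughly like this; the exact command stackguard registers is an assumption here, since `install-hook` writes it for you:

```json
{
  "hooks": {
    "UserPromptSubmit": [
      {
        "hooks": [
          { "type": "command", "command": "stackguard hook" }
        ]
      }
    ]
  }
}
```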
| Tool | Use the hook | Use the alias |
|---|---|---|
| Claude Code (interactive REPL) | ✅ | ❌ — REPL prompts bypass the alias |
| Claude Code (`claude "one-shot"`) | ✅ | ✅ |
| Cursor / other CLIs without a hook system | ❌ | ✅ |
| CI scripts running `claude --print` | ✅ | ✅ |
The hook is the right answer when it's available. The alias is the fallback for AI CLIs that don't expose a hook protocol yet.
It checks:
- Prompts that name a library your policy excludes
- Prompts that name a database, framework, or service outside your approved stack
- Prompts that ask for security primitives your team has agreed to delegate to a shared wrapper
It does not check:
- Vague prompts ("build a login page") — there's nothing to flag yet
- Code quality, formatting, or naming — that's your linter's job
- Code the AI actually generates — that's code review's job
stackguard is a first line of defense at the prompt layer. It complements, not replaces, linters, CI checks, and code review.
- **Write your policy.** Start from `examples/policy.example.md`. The more explicit your rules are, the better stackguard performs. Vague guidelines produce vague checks.
- **Lock the policy hash.** Run `stackguard policy hash` and paste the output into `stackguard.json` as `policyHash`. This prevents developers from silently editing the policy to bypass rules.
- **Add to onboarding.** New developers should run `stackguard init` on day one. Set `ANTHROPIC_API_KEY` in their shell profile.
- **Make it the default.** Have developers add a shell alias: `alias claude='stackguard wrap -- claude'`. Now every prompt is checked by default. Opting out is explicit, not accidental.
- **Review the audit log weekly.** `stackguard audit --days 7` shows what was overridden and why. Patterns — the same rule getting overridden by everyone — tell you whether the policy needs updating or whether the rule needs to be enforced harder.
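Since the audit log is plain JSONL, standard text tools work on it too. For instance, a quick way to surface the most-overridden rules (the `action` and `rule` field names below are assumptions about the log schema, not documented):

```shell
# Sample entries standing in for ~/.stackguard/audit.jsonl
# (field names are assumptions, not stackguard's documented schema).
cat > /tmp/audit-sample.jsonl <<'EOF'
{"action":"override","rule":"no-nosql-primary","user":"alice"}
{"action":"override","rule":"no-nosql-primary","user":"bob"}
{"action":"revise","rule":"no-moment","user":"alice"}
EOF

# Count overrides per rule; a rule everyone overrides is a policy smell.
grep '"action":"override"' /tmp/audit-sample.jsonl \
  | grep -o '"rule":"[^"]*"' | sort | uniq -c | sort -rn
```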
Without policyHash, a developer could edit ENGINEERING_GUIDELINES.md
locally, delete the rule they don't like, and stackguard would
silently accept the modified policy. With policyHash set, any
modification produces a hash mismatch and stackguard refuses to run
until the team's official hash is updated.
This makes policy updates a deliberate, reviewable act:
- SME or engineering lead edits the policy
- Runs `stackguard policy hash`
- Updates `policyHash` in `stackguard.json`
- Both changes go through PR review together
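The pinned hash lives in `stackguard.json`. A sketch of the relevant fields (the `policyFile` field name is illustrative, and the hash value is whatever `stackguard policy hash` printed):

```json
{
  "policyFile": "ENGINEERING_GUIDELINES.md",
  "policyHash": "<output of stackguard policy hash>"
}
```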
- Prompts are sent to the Anthropic API, the same place your AI assistant already sends them. stackguard adds no new third party.
- Your policy document stays local (or on a URL you control).
- There are no stackguard servers. The tool is a CLI; the only network call it makes is to the Anthropic API.
- The audit log is local at `~/.stackguard/audit.jsonl`. Sharing it with your team is opt-in.
## Contributing

See CONTRIBUTING.md. PRs welcome — please open an issue first for non-trivial changes.

## License

MIT