Note: This issue was originally filed with a narrow focus on the UserPromptSubmit lexical fallback. After a deeper architectural investigation I've rewritten it to describe the root cause, which is broader than any single pathway. Keeping the issue number for continuity.
TL;DR
Project context in this plugin is advisory, not authoritative. The profiler detects whether the current working directory is a Vercel project and writes VERCEL_PLUGIN_LIKELY_SKILLS, but that signal is wired as an additive score boost (+3/+5), never as a veto gate. As a result, every skill's trigger rules get evaluated on every prompt and every tool call in every repo, regardless of whether Vercel is in scope — and because individual skills define very permissive trigger rules (common English words, generic bash commands, generic file patterns), injection fires constantly in sessions that have nothing to do with Vercel.
This manifests as five independent injection pathways all firing on false positives, not one. Narrow pathway-specific fixes (tightening one skill's patterns, adding a stopword to the lexical index) will not fix the underlying problem because the other pathways will still fire.
Plugin version: 0.25.2 (commit f7814a911ef2)
Related: #19 (overly generic path/bash patterns in the nextjs skill — same class of bug, different symptom).
The five injection pathways
| # | Event | Hook file | Matcher | Gated on project context? |
|---|-------|-----------|---------|---------------------------|
| 1 | UserPromptSubmit | hooks/user-prompt-submit-skill-inject.mjs | promptSignals (phrases/allOf/anyOf/noneOf) + minisearch lexical fallback via lexical-index.mjs | No (likelySkills is additive boost only) |
| 2 | PreToolUse (Read/Edit/Write/Bash) | hooks/pretooluse-skill-inject.mjs | pathPatterns/bashPatterns/importPatterns per skill | No |
| 3 | SessionStart profiler | hooks/session-start-profiler.mjs | File-marker + package.json scan, writes VERCEL_PLUGIN_LIKELY_SKILLS | n/a (produces the advisory signal) |
| 4 | SessionStart CLAUDE.md inject | hooks/inject-claude-md.mjs | None — unconditional | No |
| 5 | SubagentStart | hooks/subagent-start-bootstrap.mjs | Re-runs matchPromptWithReason on the subagent launch prompt + reads likelySkills cache | No |
All five are evaluated independently. A user in a non-Vercel repo triggers all of them.
Pathway 4 specifically
inject-claude-md.mjs:55-65 dumps the entire vercel.md file (~46 KB) into the session context on every SessionStart via additionalContext, with no matcher, no project detection, and no opt-out env var. There is no gate on whether the current directory has a vercel.json, a @vercel/* dependency, a next.config.*, or any other Vercel marker. Every session start in every repo pays ~12k tokens for this dump before the user has typed anything.
```js
// hooks/inject-claude-md.mjs:55-65
function main() {
  const input = parseInjectClaudeMdInput(readFileSync(0, "utf8"));
  const platform = detectInjectClaudeMdPlatform(input);
  const knowledgeUpdateRaw = safeReadFile(join(pluginRoot(), "skills", "knowledge-update", "SKILL.md"));
  const knowledgeUpdate = knowledgeUpdateRaw !== null ? stripFrontmatter(knowledgeUpdateRaw) : null;
  const parts = buildInjectClaudeMdParts(safeReadFile(join(pluginRoot(), "vercel.md")), process.env, knowledgeUpdate);
  if (parts.length === 0) {
    return;
  }
  process.stdout.write(formatInjectClaudeMdOutput(platform, parts.join("\n\n")));
}
```
The shared chokepoint
loadSkills() in pretooluse-skill-inject.mjs:143 is the single function that pathways 1, 2, and 5 all flow through to get the skill map from disk. Because every pathway calls this same function, a project-context filter at this one site would flow transparently to three of the five pathways.
Consumers of loadSkills:
- user-prompt-submit-skill-inject.mjs:16,541
- subagent-start-bootstrap.mjs:10,59,143,175
- pretooluse-skill-inject.mjs itself (pathway 2)
Root cause
Project context is advisory, not authoritative. VERCEL_PLUGIN_LIKELY_SKILLS is consumed in exactly three places, all additive:
- pretooluse-skill-inject.mjs:340-354 — +5 priority boost in deduplicateSkills. Does not filter.
- prompt-analysis.mjs:60 — requires likelySkills.has(skill) only for the narrow "exact score ≤ 0 but lexical hit in top-5" branch. The main branch at :70-77 still permits lexical boosts for non-likely skills.
- user-prompt-submit-skill-inject.mjs:220-233 — applyProjectContextBoost adds +3 to matched-skill scores.
- subagent-start-bootstrap.mjs:128-129 uses likelySkills only to print a status line. If the list is empty it literally prints "unknown stack" and still proceeds to inject every skill.
- isVercelJsonPath / resolveVercelJsonSkills (vercel-config.mjs) only re-ranks pathway 2 once the user has already opened vercel.json — so it can never prevent injection in a non-Vercel repo.
In a repo with no Vercel markers at all, the plugin still evaluates the full pattern matcher against every prompt and every tool call. The only effect of the profiler's negative result is that matched skills don't get an extra +3/+5 boost. They can still cross thresholds on their own because those thresholds are set by the skills themselves.
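The boost-versus-gate distinction can be reduced to a few lines. This is an illustration only: the function names and the fixed threshold are mine, not the plugin's API.

```javascript
// Illustration only: hypothetical names and threshold, not the plugin's API.
const THRESHOLD = 6;

// Current wiring: project context adds a bonus, so a skill whose own trigger
// rules reach the threshold fires even when the repo is not a Vercel project.
function firesWithBoost(baseScore, isLikelySkill) {
  return baseScore + (isLikelySkill ? 3 : 0) >= THRESHOLD;
}

// Proposed wiring: project context is a gate, so skills outside the likely
// set cannot fire no matter how permissive their own trigger rules are.
function firesWithGate(baseScore, isLikelySkill) {
  return isLikelySkill && baseScore >= THRESHOLD;
}
```

A single generic phrase match (+6) clears the threshold under the boost model regardless of repo state, while the gate model vetoes it: `firesWithBoost(6, false)` is `true`, `firesWithGate(6, false)` is `false`.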
Downstream consequences (with receipts)
Each of the "individual bugs" I (and others, e.g. #19) have noticed traces back to this root cause:
Overly permissive skill trigger rules
Because each skill's SKILL.md frontmatter is the sole filter and there's no upstream sanity check, skills have drifted into patterns that match far too aggressively:
- workflow (skills/workflow/SKILL.md): priority: 9, minScore: 4 (below default 6), importPatterns include bare 'workflow' — matches any from 'workflow' import in any codebase — and '*workflow*' glob. phrases include generic items like "multi-step pipeline", "processing pipeline", "content pipeline", "state machine", "wait for webhook", "human in the loop", "workflow stuck". A single phrase match scores 6, which is already well above the minScore of 4.
- verification (skills/verification/SKILL.md): priority: 7, bashPatterns match any dev-server start (\bnpm\s+run\s+dev\b, \bnext\s+dev\b, \bvite\s*(dev)?\b). promptSignals.anyOf: ["verify", "verification", "end-to-end", "full flow", "works", "working"] — the literal English word "working" contributes +1 toward a mandatory skill firing. And: classifyTroubleshootingIntent in prompt-patterns.mjs:171-203 force-routes verification on any prompt matching FLOW_VERIFICATION_RE, STUCK_INVESTIGATION_RE, BROWSER_ONLY_RE (words like "stuck", "timeout", "almost works", "blank page"), bypassing the normal scoring pipeline entirely. Called unconditionally from user-prompt-submit-skill-inject.mjs:584-611.
- vercel-sandbox (skills/vercel-sandbox/SKILL.md): promptSignals.anyOf: ["sandbox", "isolated", "isolation", "untrusted", "safely", "microvm", "ffmpeg", "playground"] with broad allOf pairs like [sandbox, code], [isolated, environment]. "Sandbox" and "isolated" are generic industry terms (Docker sandbox, Python sandbox, Figma sandbox, Whop sandbox, game dev sandbox, etc.) and will match almost any conversation touching testing or isolation concepts.
- deployments-cicd (skills/deployments-cicd/SKILL.md): pathPatterns include .github/workflows/*.yml, .gitlab-ci.yml, bitbucket-pipelines.yml, apps/*/vercel.json. Any repo with a GitHub Actions workflow file triggers this on every Read/Edit/Write of that file, with no check that the workflow actually deploys to Vercel.
Pathway 1 lexical fallback over-matching
lexical-index.mjs uses stemmed minisearch over all skill bodies. Generic developer words (workflow, verification, sandbox, testing, function, deploy, cache, edge) match their respective skills. The raw scores get capped at +4.0 but are enough to push skills over DEFAULT_PROMPT_MIN_SCORE = 6 when combined with any other match. Only disabled by the undocumented env var VERCEL_PLUGIN_LEXICAL_PROMPT=0 (user-prompt-submit-skill-inject.mjs:555).
Below-threshold matches leak into context
Even when a skill fails its minScore, the banner output still lists it as matched: below threshold: score X < Y. This diagnostic text reaches the model's context window regardless, implying a match that didn't actually happen and polluting the injection metadata comment.
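A minimal sketch of the fix: surface only above-threshold matches in the banner and route the rest to debug logging. The result shape and the debug flag are assumptions, not the plugin's actual API.

```javascript
// Sketch only: the match-result shape and debug flag are assumptions.
// Below-threshold diagnostics go to stderr, never into model context.
function formatBanner(matches, { debug = false } = {}) {
  const fired = matches.filter((m) => m.score >= m.minScore);
  if (debug) {
    for (const m of matches.filter((m) => m.score < m.minScore)) {
      console.error(`[debug] ${m.skill} below threshold: ${m.score} < ${m.minScore}`);
    }
  }
  return fired.map((m) => `- "${m.skill}" matched: score ${m.score}`).join("\n");
}
```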
Per-session dedup only
VERCEL_PLUGIN_SEEN_SKILLS (patterns.mjs + pretooluse-skill-inject.mjs:315, user-prompt-submit-skill-inject.mjs:443) is session-scoped. Every new Claude Code session re-injects the same skills against the same user in the same repo. Over days of work the same skills re-fire repeatedly on unrelated prompts. Additionally VERCEL_PLUGIN_CONTEXT_COMPACTED=true (patterns.mjs:223-248) re-injects priority ≥7 skills after context compaction — actively re-surfacing workflow (priority 9) and verification (priority 7) mid-session.
Dead code
hooks/unified-ranker.mjs (31 lines) defines a rankSkills function that additively combines pathPoints + commandPoints + importPoints + profilerPoints + promptPoints + lexicalPoints*1.35 + priorityPoints. It is not imported by any hook. Pathway 2 uses rankEntries from patterns.mjs (priority-only) and pathway 1 uses sortPromptScoreStates. The unused scorer sitting next to the real one suggests internal drift in the hook layer.
Observed symptoms (field report)
All of the following were observed in a single ~3-hour Claude Code session working on a Next.js + Supabase app (legitimately a Vercel-deployed project, so the plugin should be helpful) while discussing entirely unrelated topics — a Whop App Store submission, the Whop developer sandbox, and plugin debugging.
Turn A — user asks about Whop's developer sandbox (sandbox.whop.com):
```text
[vercel-plugin] Best practices auto-suggested based on prompt analysis:
- "verification" matched: below threshold: score 4 < 6 (allOf [test, end, end] +4);
  lexical recall (raw 8286.1, capped +4.0, source: lexical)
- "vercel-sandbox" matched: phrase "sandbox" +6; anyOf "sandbox" +1

**MANDATORY: Your training data for these libraries is OUTDATED and UNRELIABLE.** ...
You must run the Skill(verification) tool.
You must run the Skill(vercel-sandbox) tool.
```
The user was discussing Whop's sandbox, not Vercel Sandbox.
Turn B — user asks me to debug the plugin's own behavior (meta-debugging):
```text
- "workflow" matched: below threshold: score 1 < 4 (anyOf "trigger" +1);
  lexical recall (raw 275.5, capped +4.0, source: lexical)
```
The word "workflow" appeared in the context of Claude Code hook workflows, not Vercel Workflow DevKit.
Turn C — I ran gh issue create --repo vercel/vercel-plugin ... to file the original version of this bug report. The body of the issue contained the literal string vercel deploy inside a workaround example. The PreToolUse/Bash hook fired on that string:
```text
[vercel-plugin] Best practices auto-suggested based on detected patterns:
- "deployments-cicd" matched full pattern `\bvercel\s+deploy\b` on Bash:
  gh issue create --repo vercel/vercel-plugin --title "Bug: UserPromptSubmit ...
You must run the Skill(deployments-cicd) tool.
```
The plugin injected deployments-cicd while I was filing a bug report about cross-triggering, because the body of the bug report contained a word it recognized. This is the symptom that made me realize I was looking at multiple independent pathways with a shared root cause, not a single pathway problem.
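The failure mode is easy to reproduce in isolation. The PreToolUse matcher sees only a flat command string, so a pattern like the one quoted above matches a mention of a command as readily as the command itself (the issue-body string here is a hypothetical stand-in for the real one):

```javascript
// The bash matcher cannot distinguish running a command from quoting it as data.
const DEPLOY_RE = /\bvercel\s+deploy\b/;

const runsDeploy = "vercel deploy --prebuilt";
// Hypothetical issue body, standing in for the real bug report:
const mentionsDeploy =
  'gh issue create --repo vercel/vercel-plugin --body "workaround: do not run vercel deploy directly"';

DEPLOY_RE.test(runsDeploy); // true: intended match
DEPLOY_RE.test(mentionsDeploy); // true: false positive on quoted prose
```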
Fix recommendation
Must-have (single edit, fixes 3 of 5 pathways)
Convert likelySkills from an additive boost into a hard allow-list gate at the loadSkills() chokepoint. Specifically:
- In hooks/session-start-profiler.mjs profileProject() (:94-121), additionally export VERCEL_PLUGIN_PROJECT_SCOPE with value "vercel" when any Vercel marker is detected, "none" when the scan finds nothing.
- In hooks/pretooluse-skill-inject.mjs loadSkills() (:143), after building the map, if process.env.VERCEL_PLUGIN_PROJECT_SCOPE === "none" filter the returned skill map down to an always-on set (e.g. ["knowledge-update", "bootstrap"]) before returning. This automatically propagates to pathways 1, 2, and 5 because they all go through loadSkills.
- Add VERCEL_PLUGIN_FORCE_ALL_SKILLS=1 as an escape hatch for Vercel power users who want the current behavior regardless of repo state.
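A sketch of the gate itself. The Map shape, helper name, and always-on set are assumptions; only the env var names come from the proposal above.

```javascript
// Sketch of the proposed gate: loadSkills internals are assumptions.
const ALWAYS_ON = new Set(["knowledge-update", "bootstrap"]);

function applyProjectScopeGate(skillMap, env = process.env) {
  if (env.VERCEL_PLUGIN_FORCE_ALL_SKILLS === "1") return skillMap; // escape hatch
  if (env.VERCEL_PLUGIN_PROJECT_SCOPE !== "none") return skillMap; // Vercel repo, or profiler never ran
  // Non-Vercel repo: keep only the always-on skills.
  return new Map([...skillMap].filter(([name]) => ALWAYS_ON.has(name)));
}
```

Calling this on the map loadSkills() returns would propagate the gate to pathways 1, 2, and 5 without touching their call sites.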
Must-have (second edit, fixes pathway 4)
In hooks/inject-claude-md.mjs main() (:55), gate the vercel.md dump on the same VERCEL_PLUGIN_PROJECT_SCOPE check. Early-return if the scope is "none".
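The gate condition is tiny. The helper name here is mine; the real change is an early return at the top of main(). The opt-out env var is the one proposed elsewhere in this issue and would only exist if adopted.

```javascript
// Sketch: helper name is hypothetical; the real change is an early return in main().
function shouldInjectClaudeMd(env) {
  if (env.VERCEL_PLUGIN_DISABLE_CLAUDE_MD_INJECTION === "1") return false; // explicit opt-out
  return env.VERCEL_PLUGIN_PROJECT_SCOPE !== "none"; // skip the dump when no Vercel markers were found
}
```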
Should-have
- Move below-threshold diagnostic output to debug-only logging — user-prompt-submit-skill-inject.mjs:474-519 formatOutput. Below-threshold matches should never reach the user-visible banner.
- Expose DEFAULT_PROMPT_MIN_SCORE as an env var (user-prompt-submit-skill-inject.mjs:36) so users can tune without turning off entire pathways.
- Document the env var surface. VERCEL_PLUGIN_LEXICAL_PROMPT, VERCEL_PLUGIN_HOOK_DEDUP, VERCEL_PLUGIN_SEEN_SKILLS, VERCEL_PLUGIN_LIKELY_SKILLS, VERCEL_PLUGIN_DISABLE_CLAUDE_MD_INJECTION (if adopted), etc. are real knobs that users need when the defaults misfire. They should be in the README.
- Persist dedup state per-repo, not per-session. VERCEL_PLUGIN_SEEN_SKILLS should write to a file under .claude/vercel-plugin-seen.json in the repo, so a skill the user has declined once stays declined.
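Per-repo persistence could look like the following. The file path matches the proposal; the read/merge/write semantics are my assumption of how the dedup set would be maintained.

```javascript
// Sketch of per-repo dedup persistence; merge semantics are an assumption.
import { readFileSync, writeFileSync, mkdirSync } from "node:fs";
import { join, dirname } from "node:path";

const SEEN_FILE = join(".claude", "vercel-plugin-seen.json");

function loadSeenSkills(repoRoot) {
  try {
    return new Set(JSON.parse(readFileSync(join(repoRoot, SEEN_FILE), "utf8")));
  } catch {
    return new Set(); // missing or corrupt file: start fresh
  }
}

function markSkillSeen(repoRoot, skill) {
  const seen = loadSeenSkills(repoRoot);
  seen.add(skill);
  const target = join(repoRoot, SEEN_FILE);
  mkdirSync(dirname(target), { recursive: true });
  writeFileSync(target, JSON.stringify([...seen], null, 2));
}
```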
Nice-to-have — skill-level tightening
These are symptoms of the root cause but are independently worth fixing:
- workflow: raise minScore from 4 back to the default 6; drop bare 'workflow' and '*workflow*' from importPatterns / pathPatterns; trim generic phrases ("state machine", "human in the loop", "multi-step pipeline") from promptSignals.phrases.
- verification: remove "works" and "working" from anyOf; make classifyTroubleshootingIntent consult VERCEL_PLUGIN_PROJECT_SCOPE before force-routing.
- vercel-sandbox: require co-presence of a Vercel-specific signal (@vercel/sandbox import) before firing on generic "sandbox" / "isolated" mentions.
- deployments-cicd: scope the .github/workflows/*.yml path match to additionally require the workflow file to reference vercel (grep the file contents) or require a sibling vercel.json.
- Remove hooks/unified-ranker.mjs or wire it in. Dead code next to live code invites regressions.
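The deployments-cicd content check could be as small as this. The helper name is mine; a real patch would run it inside the pathPatterns match for workflow files.

```javascript
// Sketch: only treat a CI workflow file as Vercel-relevant if its contents
// actually mention vercel. Helper name is hypothetical.
import { readFileSync } from "node:fs";

function workflowReferencesVercel(workflowPath) {
  try {
    return /\bvercel\b/i.test(readFileSync(workflowPath, "utf8"));
  } catch {
    return false; // unreadable file: do not fire
  }
}
```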
Workaround for affected users
Until a fix ships, add to your shell profile (~/.zshenv on macOS/Linux):
```shell
# Partial mitigation for vercel/vercel-plugin#38
# Silences pathway 1 lexical fallback. Does NOT silence pattern matches,
# classifyTroubleshootingIntent force-routing, pathway 2, or pathway 4.
export VERCEL_PLUGIN_LEXICAL_PROMPT=0

# Pre-populates the per-session dedup list so these skills are treated as
# "already injected" in pathways 1/2/5. Add any skill you've seen misfire.
export VERCEL_PLUGIN_SEEN_SKILLS="verification,vercel-sandbox,workflow,deployments-cicd,knowledge-update,nextjs,vercel-cli,vercel-functions,ai-sdk,ai-gateway,marketplace"

# LOCAL PATCH required: pathway 4 (inject-claude-md.mjs) has no env-var gate.
# To silence the unconditional 46 KB vercel.md SessionStart dump, patch the
# installed plugin:
#
#   ~/.claude/plugins/cache/claude-plugins-official/vercel/<version>/hooks/inject-claude-md.mjs
#
# Add this at the top of main():
#
#   if (process.env.VERCEL_PLUGIN_DISABLE_CLAUDE_MD_INJECTION === "1") return;
#
# Then set:
export VERCEL_PLUGIN_DISABLE_CLAUDE_MD_INJECTION=1
```
This gets you partial relief. The full fix has to be upstream because pathway 2 (PreToolUse pattern matching) has no env-var escape hatch at all.
Willing to contribute
I have full read-through of the hooks layer and a clear mental model of the architecture. Happy to contribute a PR implementing the VERCEL_PLUGIN_PROJECT_SCOPE gate at the loadSkills chokepoint + the inject-claude-md pathway-4 gate if the team is open to that approach. Would appreciate a maintainer weighing in on:
- Is the additive-boost-instead-of-veto behavior intentional (on the theory that some plugin context is better than none even in non-Vercel repos), or is it an oversight?
- If intentional, what's the right escape hatch for users in mixed-use or non-Vercel repos?
- If unintentional, is there any objection to filtering at loadSkills() vs. at each pathway site?