Hi maintainers,
Opening an issue first to gauge appetite before any code work.
Question
Would maintainers accept a contributed AI check that scans LLM input/output against ATR (Agent Threat Rules) and gates the result as a PR status check, in the same shape as the security review example in the README?
The intended contribution:
- One self-contained markdown check (the same format as the `.continue/checks/` examples)
- Optionally shareable via the Continue Hub so other repos can pull it in
- Wraps the ATR rule set (425 MIT-licensed YAML rules) into the check's evaluation prompt and CI gate
- Does not modify Continue core; purely additive
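For concreteness, a rough sketch of what the check file might look like. The frontmatter keys and wording here are hypothetical placeholders, not the actual Continue checks schema — the real file would mirror whatever the `.continue/checks/` examples in the README use:

```markdown
---
name: agent-threat-rules
description: Scan LLM input/output touched by this PR against the ATR rule pack
---

Evaluate the changes in this PR against the ATR rule set
(https://github.com/Agent-Threat-Rule/agent-threat-rules).
Fail the check if any rule matches, and list the matched rule IDs
in the check summary so reviewers can triage them.
```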
Why this might be in scope
Agent-threat detection in PRs is a real gap for IDE-assistant deployments, and the README already shows security-flavored checks as a first-class use case. The same kind of rule pack is already running in production in the Microsoft Agent Governance Toolkit (PRs #908 and #1277, both merged) and in the Cisco AI Defense skill-scanner (PRs #79 and #99, both merged), so the detection layer is field-tested. ATR is MIT-licensed throughout, so there is no IP friction for a check distributed through Continue.
What I would draft if you say yes
- The check markdown file (and a Hub entry if that surface is the right home)
- Integration docs explaining install + CI wiring
- A small test fixture covering the check against 5-10 representative attack samples
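As a sketch of the CI wiring piece: a minimal GitHub Actions workflow that runs the check and reports it as a PR status, which a branch-protection rule can then require. The script path and check invocation are placeholders, not real Continue commands:

```yaml
# Hypothetical CI wiring for the ATR check.
# ./scripts/run-atr-check.sh is a placeholder wrapper around the
# markdown check; the real invocation depends on Continue's CI surface.
name: atr-agent-threat-check
on: [pull_request]
jobs:
  atr-check:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run ATR check
        run: ./scripts/run-atr-check.sh
```

Marking the `atr-check` job as a required status check in branch protection is what turns the scan into a merge gate.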
Repo for the rules: https://github.com/Agent-Threat-Rule/agent-threat-rules
If you say no
Totally fine. No PR will follow. I will not bump or repost.