Validate GitHub event content for prompt injection before AI agents process it.
This action reads the content from GitHub events (issue titles, PR descriptions, comments) and validates it against the LLMSecure API. If the input is detected as UNSAFE, the step fails with exit code 1, preventing subsequent steps (like AI agents) from running.
No commenting, no labeling, no GitHub API calls. Just a gate: SAFE passes, UNSAFE blocks.
Supported events:

- `issues` (title + body)
- `issue_comment` (comment body)
- `pull_request` (title + body)
- `pull_request_review_comment` (comment body)
- Get an API key at llmsecure.io
- Add `LLMSECURE_API_KEY` to your repository secrets (Settings > Secrets and variables > Actions)
- Add the validation step before your AI action in your workflow
```yaml
name: AI Issue Triage (Protected)

on:
  issues:
    types: [opened, edited]

jobs:
  triage:
    runs-on: ubuntu-latest
    steps:
      # Validate input for prompt injection
      - name: LLMSecure Scan
        id: security
        uses: llmsecure/validate-action@v1
        with:
          api-key: ${{ secrets.LLMSECURE_API_KEY }}

      # Only runs if LLMSecure passed (input is SAFE)
      - name: AI Triage
        uses: anthropics/claude-code-action@v1
        with:
          anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
          github_token: ${{ secrets.GITHUB_TOKEN }}
```

| Input | Required | Default | Description |
|---|---|---|---|
| `api-key` | Yes | | LLMSecure API key |
| `api-url` | No | `https://api.llmsecure.io` | LLMSecure API URL |
| Output | Description |
|---|---|
| `result` | `SAFE` or `UNSAFE` |
| `score` | Risk score (0-100) |
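The outputs can be read from later steps through the `steps` context, keyed by the scan step's `id`. A minimal sketch, assuming you add `continue-on-error: true` so the workflow can inspect an `UNSAFE` result instead of stopping at the failed step (whether outputs are populated on a failing run depends on the action setting them before exiting, so treat this as a sketch):

```yaml
      - name: LLMSecure Scan
        id: security
        continue-on-error: true  # let later steps inspect the result
        uses: llmsecure/validate-action@v1
        with:
          api-key: ${{ secrets.LLMSECURE_API_KEY }}

      # Log the classification and risk score from the scan step's outputs
      - name: Report scan result
        run: |
          echo "Result: ${{ steps.security.outputs.result }}"
          echo "Score: ${{ steps.security.outputs.score }}"
```

Without `continue-on-error`, an `UNSAFE` result fails the step and no subsequent steps run, which is the default gating behavior described above.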
This action sends the text it scans — the title and body of the triggering issue, pull request, or comment — over HTTPS to the LLMSecure API (https://api.llmsecure.io by default). That text is scanned for prompt-injection and AI-agent-manipulation patterns, and the classification result is returned to the action. No GitHub tokens, repository metadata, or commit contents are transmitted.
- Data sent: the text fields extracted from the GitHub event (issue/PR/comment title + body).
- Retention & usage: see the LLMSecure Privacy Policy for retention, access, and deletion details.
- Self-hosting: you can point the action at a self-hosted LLMSecure deployment by overriding the `api-url` input.
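For example, a self-hosted deployment could be targeted like this (the hostname below is a placeholder for your own endpoint, not a real URL):

```yaml
      - name: LLMSecure Scan
        uses: llmsecure/validate-action@v1
        with:
          api-key: ${{ secrets.LLMSECURE_API_KEY }}
          api-url: https://llmsecure.internal.example.com  # placeholder: your self-hosted endpoint
```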
If you're subject to GDPR, CCPA, or similar regulations and your repository receives issues or PRs containing personal data from contributors, ensure your project's privacy notice discloses that issue/PR text is transmitted to LLMSecure for scanning.
MIT — see LICENSE.