LLMSecure Validate GitHub Action

Validate GitHub event content for prompt injection before AI agents process it.

What it does

This action reads the content from GitHub events (issue titles, PR descriptions, comments) and validates it against the LLMSecure API. If the input is detected as UNSAFE, the step fails with exit code 1, preventing subsequent steps (like AI agents) from running.

No commenting, no labeling, no GitHub API calls. Just a gate: SAFE passes, UNSAFE blocks.

Supported events

  • issues (title + body)
  • issue_comment (comment body)
  • pull_request (title + body)
  • pull_request_review_comment (comment body)
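To gate all four supported events from a single workflow, the trigger block could look like the sketch below. The activity types shown are assumptions; adjust them to match when your AI steps actually run.

```yaml
on:
  issues:
    types: [opened, edited]
  issue_comment:
    types: [created, edited]
  pull_request:
    types: [opened, edited]
  pull_request_review_comment:
    types: [created, edited]
```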

Setup

  1. Get an API key at llmsecure.io
  2. Add LLMSECURE_API_KEY to your repository secrets (Settings > Secrets and variables > Actions)
  3. Add the validation step before your AI action in your workflow

Example workflow

name: AI Issue Triage (Protected)

on:
  issues:
    types: [opened, edited]

jobs:
  triage:
    runs-on: ubuntu-latest
    steps:
      # Validate input for prompt injection
      - name: LLMSecure Scan
        id: security
        uses: llmsecure/validate-action@v1
        with:
          api-key: ${{ secrets.LLMSECURE_API_KEY }}

      # Only runs if LLMSecure passed (input is SAFE)
      - name: AI Triage
        uses: anthropics/claude-code-action@v1
        with:
          anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
          github_token: ${{ secrets.GITHUB_TOKEN }}

Inputs

Input    Required  Default                   Description
api-key  Yes       —                         LLMSecure API key
api-url  No        https://api.llmsecure.io  LLMSecure API URL

Outputs

Output  Description
result  SAFE or UNSAFE
score   Risk score (0-100)
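Later steps can read these outputs through the step id. The sketch below assumes the step id security from the example workflow above, and that the action writes its outputs before failing; use if: always() so the reporting step runs even when the scan blocks.

```yaml
- name: LLMSecure Scan
  id: security
  uses: llmsecure/validate-action@v1
  with:
    api-key: ${{ secrets.LLMSECURE_API_KEY }}

# Runs even if the scan step failed (result UNSAFE)
- name: Report scan result
  if: always()
  run: |
    echo "Result: ${{ steps.security.outputs.result }}"
    echo "Score:  ${{ steps.security.outputs.score }}"
```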

Data & privacy

This action sends the text it scans — the title and body of the triggering issue, pull request, or comment — over HTTPS to the LLMSecure API (https://api.llmsecure.io by default). That text is scanned for prompt-injection and AI-agent-manipulation patterns, and the classification result is returned to the action. No GitHub tokens, repository metadata, or commit contents are transmitted.

  • Data sent: the text fields extracted from the GitHub event (issue/PR/comment title + body).
  • Retention & usage: see the LLMSecure Privacy Policy for retention, access, and deletion details.
  • Self-hosting: you can point the action at a self-hosted LLMSecure deployment by overriding the api-url input.
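For a self-hosted deployment, the override is a single input. The hostname below is a placeholder, not a real endpoint:

```yaml
- name: LLMSecure Scan
  uses: llmsecure/validate-action@v1
  with:
    api-key: ${{ secrets.LLMSECURE_API_KEY }}
    # Placeholder URL — point this at your own LLMSecure deployment
    api-url: https://llmsecure.internal.example.com
```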

If you're subject to GDPR, CCPA, or similar regulations and your repository receives issues or PRs containing personal data from contributors, ensure your project's privacy notice discloses that issue/PR text is transmitted to LLMSecure for scanning.

License

MIT — see LICENSE.