A GitHub Actions pipeline that detects, gates, and audits AI-generated code before it reaches your main branch. Policy-as-code enforcement, automated security scanning, sandboxed test execution, and risk-tiered review requirements — all in modular composite actions you can adopt incrementally.
```
PR opened → Detect AI PR → Policy Check   → Security Scan → Sandbox Tests  → Risk Assessment
                │               │               │               │                │
                ▼               ▼               ▼               ▼                ▼
           Co-author?      Allowed files?   Gitleaks        Docker build     Score 0–100
           Labels?         Blocked files?   Semgrep         npm test         LOW/MED/HIGH
           Bot author?     Scope limits?    npm audit       --network=none   PR comment
```
Each stage runs as an independent composite action — use the full pipeline or pick individual actions.
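The detection heuristics in the diagram (co-author trailers, labels, bot authors) can be sketched in a few lines of TypeScript. This is illustrative only — the trailer patterns, bot names, and function names below are assumptions, not the actual `detect-ai-pr` implementation:

```typescript
// Illustrative sketch of the detection heuristics shown above.
// Patterns and names are assumptions, not the real action's logic.
interface PrInfo {
  commitMessages: string[];
  authorLogin: string;
  labels: string[];
}

// Trailers and accounts commonly left by AI coding agents (hypothetical list).
const AI_TRAILER = /^co-authored-by:.*(claude|copilot|cursor|aider)/im;
const BOT_AUTHORS = ["devin-ai-integration[bot]", "github-actions[bot]"];
const AI_LABELS = ["ai-generated", "ai-assisted"];

function isAiPr(pr: PrInfo): boolean {
  const hasTrailer = pr.commitMessages.some((msg) => AI_TRAILER.test(msg));
  const hasLabel = pr.labels.some((l) => AI_LABELS.includes(l.toLowerCase()));
  const isBot = BOT_AUTHORS.includes(pr.authorLogin);
  return hasTrailer || hasLabel || isBot;
}
```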
- Fork this repo and enable GitHub Actions
- Create a branch and add a commit with an AI co-author trailer:

  ```bash
  git commit --allow-empty -m "test: verify pipeline" -m "Co-Authored-By: Claude <noreply@anthropic.com>"
  ```

- Open a pull request; the `ai-code-gate` workflow triggers automatically
- Observe: detection → policy check → security scan → sandbox test → risk assessment
- Check the PR comment for your risk score and tier
Create `.github/workflows/ai-code-gate.yml`:

```yaml
name: AI Code Gate

on:
  pull_request:
    types: [opened, synchronize, reopened]

permissions:
  contents: read
  pull-requests: write

jobs:
  detect:
    runs-on: ubuntu-latest
    outputs:
      is_ai_pr: ${{ steps.detect.outputs.is_ai_pr }}
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - uses: InkByteStudio/ai-code-gate/.github/actions/detect-ai-pr@main
        id: detect

  policy-check:
    needs: detect
    if: needs.detect.outputs.is_ai_pr == 'true'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: "20"
      - uses: InkByteStudio/ai-code-gate/.github/actions/policy-check@main

  security-scan:
    needs: detect
    if: needs.detect.outputs.is_ai_pr == 'true'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: InkByteStudio/ai-code-gate/.github/actions/security-scan@main
```

Then add an `.ai-code-gate.yml` to your repo root (see `examples/`).
See `docs/policy-reference.md` for the complete reference.
```yaml
policy:
  allowed_patterns:
    - "src/**/*.ts"
    - "tests/**"
  blocked_patterns:
    - "*.env*"
    - "**/auth/**"
  scope_limits:
    max_files: 20
    max_lines_added: 500
```

| Action | Purpose | Key outputs |
|---|---|---|
| `detect-ai-pr` | Identify AI-generated PRs | `is_ai_pr`, `agent_identity` |
| `policy-check` | Validate changes against policy | `policy_passed`, `violations_json` |
| `security-scan` | Run gitleaks + Semgrep + dep audit | `scan_passed`, `findings_count` |
| `sandbox-test` | Run tests in isolated Docker container | `tests_passed`, `test_output` |
| `risk-assessment` | Calculate risk score, post PR comment | `risk_score`, `risk_tier` |
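The `scope_limits` block in the policy example above is simple to reason about: it bounds the size of an AI-authored change. A minimal sketch of how such limits might be enforced — function and field names here are illustrative assumptions, not the actual `policy-check` code:

```typescript
// Illustrative enforcement of scope_limits; not the real policy-check action.
interface ScopeLimits {
  max_files: number;
  max_lines_added: number;
}

interface DiffStats {
  filesChanged: number;
  linesAdded: number;
}

// Returns human-readable violations; an empty array means the PR is in scope.
function checkScopeLimits(limits: ScopeLimits, diff: DiffStats): string[] {
  const violations: string[] = [];
  if (diff.filesChanged > limits.max_files) {
    violations.push(`too many files: ${diff.filesChanged} > ${limits.max_files}`);
  }
  if (diff.linesAdded > limits.max_lines_added) {
    violations.push(`too many lines added: ${diff.linesAdded} > ${limits.max_lines_added}`);
  }
  return violations;
}
```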
Each action can be used independently: `uses: InkByteStudio/ai-code-gate/.github/actions/<action>@main`
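For example, a repo that only wants the scanning step could run `security-scan` on every PR without the detection gate (a sketch, assuming the action needs no extra inputs):

```yaml
jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: InkByteStudio/ai-code-gate/.github/actions/security-scan@main
```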
| Tier | Score | Default behavior |
|---|---|---|
| LOW | 0–30 | Auto-merge eligible, 0 approvals |
| MEDIUM | 31–70 | 1 approval required |
| HIGH | 71–100 | 2 approvals + security team review |
See docs/risk-tiers.md for score calculation details.
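The tier boundaries in the table reduce to a simple lookup; how the 0–100 score itself is computed is in `docs/risk-tiers.md`. A sketch of the mapping — function names are illustrative:

```typescript
// Maps a 0-100 risk score onto the tiers in the table above.
type RiskTier = "LOW" | "MEDIUM" | "HIGH";

function riskTier(score: number): RiskTier {
  if (score < 0 || score > 100) throw new RangeError(`score out of range: ${score}`);
  if (score <= 30) return "LOW";
  if (score <= 70) return "MEDIUM";
  return "HIGH";
}

// Default approval counts per tier, as described in the table.
const approvalsRequired: Record<RiskTier, number> = { LOW: 0, MEDIUM: 1, HIGH: 2 };
```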
```bash
npm install
npm test         # Run all tests
npm run build    # Compile TypeScript
```

The `sample-app/` directory contains a minimal Express API used to demonstrate the pipeline.
- Fork the repo
- Create a feature branch
- Make your changes (the pipeline will gate your PR if you trigger AI detection)
- Open a pull request