# runlit Action

Evaluate AI-generated code in your CI pipeline. Wraps the `runlit` CLI as a GitHub Action.
## Usage

```yaml
# .github/workflows/runlit.yml
name: runlit eval

on:
  pull_request:
    types: [opened, synchronize]

jobs:
  eval:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Evaluate PR
        uses: runlit-dev/action@v1
        with:
          api-key: ${{ secrets.RUNLIT_API_KEY }}
          # Optional overrides:
          # block-threshold: "50"  # fail if score < 50 (default)
          # warn-threshold: "70"   # warning annotation if score < 70 (default)
```

## Inputs

| Input | Required | Default | Description |
|---|---|---|---|
| `api-key` | ✅ | — | runlit API key (store in GitHub Secrets) |
| `block-threshold` | | `50` | Score below which the action fails |
| `warn-threshold` | | `70` | Score below which a warning annotation is added |
| `api-url` | | `https://api.runlit.dev` | Override for self-hosted deployments |
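For self-hosted deployments, the `api-url` input can be pointed at your own instance. A sketch of such a step (the hostname below is a placeholder, not a real endpoint):

```yaml
- name: Evaluate PR
  uses: runlit-dev/action@v1
  with:
    api-key: ${{ secrets.RUNLIT_API_KEY }}
    api-url: https://runlit.internal.example.com  # placeholder for your deployment
    block-threshold: "40"  # optional: override the default of 50
```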
## Outputs

| Output | Description |
|---|---|
| `score` | Composite eval score (0–100) |
| `grade` | `PASS`, `WARN`, or `BLOCK` |
| `eval-id` | UUIDv7 eval identifier for audit trail |
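Outputs can be consumed by later steps in the same job. A sketch, where the step `id` and the follow-on step are illustrative (note the bracket syntax for `eval-id`, since hyphenated names cannot be dereferenced with a dot in GitHub expressions):

```yaml
- name: Evaluate PR
  id: eval  # an id is required to reference this step's outputs
  uses: runlit-dev/action@v1
  with:
    api-key: ${{ secrets.RUNLIT_API_KEY }}
- name: Record eval result
  if: always()  # run even when the eval step fails the check
  run: |
    echo "score=${{ steps.eval.outputs.score }} grade=${{ steps.eval.outputs.grade }}"
    echo "eval-id=${{ steps.eval.outputs['eval-id'] }}"
```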
## How it works

- The Action installs the runlit CLI via `go install github.com/runlit-dev/cli@latest`
- The CLI fetches the PR diff from GitHub (using `GITHUB_TOKEN`, automatically available in Actions)
- The diff is sent to `api.runlit.dev/v1/eval` with your API key
- Score and grade are posted as a PR comment by the API server
- The Action exits with code `1` if the grade is `BLOCK` (fails the check)
- A job summary is written with the full signal breakdown
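The flow above can be sketched roughly in shell. Only the install command and the endpoint path come from this README; the curl flags, file names, and response handling are assumptions, not the real CLI's behavior:

```shell
#!/bin/sh
# Rough sketch of the Action's flow; everything not stated in the README
# above (curl flags, pr.diff, response parsing) is illustrative only.

# 1. Install the CLI:
#      go install github.com/runlit-dev/cli@latest
# 2. Send the PR diff to the eval endpoint (flags are assumptions):
#      curl -sS -X POST https://api.runlit.dev/v1/eval \
#           -H "Authorization: Bearer $RUNLIT_API_KEY" \
#           --data-binary @pr.diff
# 3. Map the returned grade to an exit code; BLOCK fails the check:
grade="BLOCK"              # would come from the API response
case "$grade" in
  BLOCK) exit_code=1 ;;
  *)     exit_code=0 ;;
esac
echo "grade=$grade exit_code=$exit_code"
```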
The GitHub App (install at github.com/apps/runlit) is the recommended approach — zero config, works automatically on every PR. The Action is for teams that need CI-level control over when evals run or want to integrate the score into custom workflows.
## License

MIT — see LICENSE.