GauntletCI is a .NET CLI tool that analyzes pull request diffs, or runs as a pre-commit audit, to detect behavioral change-risk before code is merged.
It answers one question:
Did this change introduce behavior that is not properly validated?
Even experienced developers miss things in diffs.
Not because they lack skill, but because diffs are deceptive.
A small change can silently alter behavior:
- A new null check changes execution flow
- A guard clause introduces new exceptions
- A method signature changes without test updates
- A dependency call is modified without validation
- A conditional branch shifts logic in subtle ways
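For example, an added guard clause can quietly change a method's contract. The illustrative C# diff below (method and type names are hypothetical) introduces a new exception path that existing tests may never exercise:

```diff
 public decimal ApplyDiscount(Order order, decimal rate)
 {
+    // New behavior: null orders now throw ArgumentNullException
+    // instead of failing later with a NullReferenceException.
+    if (order is null)
+        throw new ArgumentNullException(nameof(order));
+
     return order.Total * (1 - rate);
 }
```

A reviewer scanning this diff sees a "harmless" null check; the actual change is a new exception type in the method's observable behavior.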
These are not syntax errors.
They are behavior changes, and they regularly slip through code review.
GauntletCI exists to catch them before they reach production.
GauntletCI is built on a clear set of principles defined in the GauntletCI Charter:
- Coverage is Not Correctness – Tests prove execution, not survival.
- Falsification Over Verification – We seek to disprove safety, not confirm compliance.
- Intent is Material Context – We cross-reference PR diffs with linked issues to detect semantic drift.
- Privacy is Absolute – All reasoning happens locally; no code ever leaves your machine.
- Determinism Anchors Intelligence – The local AI explains; deterministic Roslyn rules enforce.
What GauntletCI is:
- Diff-aware change-risk detector
- Pre-commit / pre-merge safety layer
- Focused on behavior, not style
- Intent-aware – Cross-references the implementation with linked issues (GitHub/Jira)
What GauntletCI is not:
- Not a linter
- Not a test runner
- Not a static analysis replacement
- Not a code formatter
GauntletCI complements your existing tools; it does not replace them.
- Analyzes only what changed in a diff
- Detects unvalidated behavior changes
- Flags missing or weak test coverage
- Identifies execution flow changes (guards, exceptions, branching)
- Surfaces API and contract changes
- Intent Alignment: Compares PR diff with linked GitHub Issue to detect when the implementation drifts from the stated goal.
- Outputs actionable findings with file paths and line numbers
dotnet tool install -g GauntletCI
# Analyze staged changes before committing
gauntletci analyze --staged
# Analyze a pull request diff file
gauntletci analyze --diff pr.diff
# Analyze a specific commit
gauntletci analyze --commit abc1234
# Export audit history as CSV
gauntletci audit export --format csv --output report.csv
# Expose GauntletCI to an AI assistant via MCP
gauntletci mcp serve
# Analyze staged changes
gauntletci analyze --staged
# Analyze a diff file
gauntletci analyze --diff pr.diff
# Analyze a specific commit
gauntletci analyze --commit abc1234
# Output as JSON
gauntletci analyze --staged --output json
# Emit GitHub Actions annotations
gauntletci analyze --staged --github-annotations
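The annotations flag pairs naturally with a CI step. Below is a minimal GitHub Actions sketch, not an official workflow: the job layout is illustrative, and the diff step assumes your base branch is main.

```yaml
# Illustrative workflow; adjust branch names and action versions to your repo.
name: gauntletci
on: [pull_request]
jobs:
  analyze:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0          # full history so the base branch is available for diffing
      - uses: actions/setup-dotnet@v4
      - run: dotnet tool install -g GauntletCI
      - run: git diff origin/main...HEAD > pr.diff
      - run: gauntletci analyze --diff pr.diff --github-annotations
```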
# Enrich high-confidence findings with a locally running Ollama LLM
gauntletci analyze --staged --with-llm
# Attach the closest expert knowledge citation from the local vector store
gauntletci analyze --staged --with-expert-context
# Both LLM enrichment and expert context (requires Ollama + seeded vector store)
gauntletci analyze --staged --with-llm --with-expert-context
Every scan is automatically logged to ~/.gauntletci/audit-log.ndjson.
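Because the audit log is newline-delimited JSON, it is easy to post-process with ordinary tooling. A Python sketch follows; note that the ruleId field name is an assumption for illustration, not the documented log schema.

```python
import json
from collections import Counter
from pathlib import Path

def rule_counts(path):
    """Count audit-log entries per rule. The 'ruleId' key is illustrative;
    check the actual schema of ~/.gauntletci/audit-log.ndjson."""
    counts = Counter()
    for line in Path(path).read_text().splitlines():
        if line.strip():
            entry = json.loads(line)
            counts[entry.get("ruleId", "unknown")] += 1
    return counts

# Synthetic stand-in for ~/.gauntletci/audit-log.ndjson:
sample = Path("sample-audit.ndjson")
sample.write_text('{"ruleId": "GCI0001"}\n{"ruleId": "GCI0001"}\n{"ruleId": "GCI0007"}\n')
print(rule_counts(sample))  # Counter({'GCI0001': 2, 'GCI0007': 1})
```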
# Export full audit log as JSON
gauntletci audit export
# Export as CSV
gauntletci audit export --format csv --output report.csv
# Filter to last 30 scans
gauntletci audit export --last 30
# Filter by date
gauntletci audit export --since 2025-01-01
# Quick summary stats
gauntletci audit stats
GauntletCI can also run as a Model Context Protocol (MCP) server. Any MCP-compatible AI assistant (Claude Desktop, Cursor, Copilot, Windsurf) can call GauntletCI tools mid-conversation.
gauntletci mcp serve
Claude Desktop config (~/.claude/claude_desktop_config.json):
{
"mcpServers": {
"gauntletci": {
"command": "gauntletci",
"args": ["mcp", "serve"]
}
}
}
Available MCP tools:
| Tool | Description |
|---|---|
| analyze_staged | Analyze staged git changes |
| analyze_diff | Analyze a raw diff string |
| analyze_commit | Analyze a specific commit |
| list_rules | List all 42+ analysis rules |
| audit_stats | Aggregate stats from the local audit log |
When Ollama is running locally, the MCP server can optionally use RemoteLlmEngine to provide LLM-powered explanations for high-confidence findings inline in your AI assistant conversation.
GauntletCI can build a local vector store of expert .NET facts to attach as citations alongside findings. This requires Ollama running locally with nomic-embed-text pulled.
# Install the embedding model (one time)
ollama pull nomic-embed-text
# Seed 11 hand-curated .NET expert facts into the local vector store
gauntletci llm seed
# Distill expert facts from a GitHub issues JSON file via LLM
gauntletci llm distill --input issues.json
# Limit distillation to the top 50 highest-engagement issues
gauntletci llm distill --input issues.json --max-records 50
Vector store location: ~/.gauntletci/expert-embeddings.db
Fetches merged PRs and issues from top .NET OSS contributors for use in the corpus pipeline.
# Fetch from default repos (dotnet/runtime, dotnet/roslyn, etc.)
gauntletci corpus maintainers fetch
# Limit results per label
gauntletci corpus maintainers fetch --max-per-label 20
# Write output to a specific file
gauntletci corpus maintainers fetch --output maintainers.json
# Scope to a specific repository
gauntletci corpus maintainers fetch --repo dotnet/runtime
Requires the GITHUB_TOKEN environment variable for authenticated requests (5,000 req/hr vs. 60 unauthenticated).
- gauntletci init – Initialize GauntletCI config in your repo
- gauntletci ignore – Manage the ignore list
- gauntletci postmortem – Run postmortem analysis
- gauntletci feedback – Submit feedback on a finding
- gauntletci telemetry – Manage telemetry opt-in/out
- gauntletci llm seed – Seed the local expert knowledge vector store
- gauntletci llm distill – Distill expert facts from GitHub issues via LLM
- gauntletci corpus maintainers fetch – Fetch high-signal OSS maintainer PRs
GauntletCI supports optional local LLM enrichment via Ollama. All inference runs on your machine; no data leaves your environment.
# Install Ollama: https://ollama.com
# Pull the embedding model
ollama pull nomic-embed-text
# Seed expert knowledge into the local vector store
gauntletci llm seed
# Enrich findings with LLM explanations (Ollama must be running)
gauntletci analyze --staged --with-llm
# Attach expert knowledge citations
gauntletci analyze --staged --with-expert-context
# Both together
gauntletci analyze --staged --with-llm --with-expert-context
When --with-expert-context is active, each high-confidence finding is embedded and matched against ~/.gauntletci/expert-embeddings.db. If a match scores above the similarity threshold, the expert citation is shown inline in the CLI and included in GitHub annotations and MCP tool responses.
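The threshold check described above is a standard cosine-similarity comparison between embedding vectors. Here is a minimal sketch; the vectors and the 0.7 cutoff are illustrative assumptions, not GauntletCI's actual values.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

finding_vec = [0.9, 0.1, 0.3]    # embedding of a finding (illustrative)
expert_vec = [0.8, 0.2, 0.25]    # closest expert-fact embedding (illustrative)

THRESHOLD = 0.7  # assumed cutoff; the real value is internal to GauntletCI
if cosine(finding_vec, expert_vec) >= THRESHOLD:
    print("attach expert citation")
```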
All local LLM processing is fully offline. Ollama runs locally; nothing is sent to any cloud service.
GauntletCI ships 42 built-in rules (GCI0001–GCI0042) covering:
- Behavioral change detection and goal alignment
- Security risk, PII logging, authorization coverage
- Test coverage and test quality gaps
- Async safety, resource lifecycle, disposable resource management
- Data schema compatibility, idempotency/retry safety
- Observability, structured logging, rollback safety
- Architecture layer discipline and dependency injection safety
See docs/rules.md for the full rule catalogue.
Run gauntletci init to generate a .gauntletci.json config file in your repository root. You can use it to:
- Enable or disable specific rules
- Set confidence thresholds
- Configure ignore patterns
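As a rough sketch of what such a config might look like, here is a hypothetical .gauntletci.json. Every key below is illustrative; run gauntletci init to generate the real schema for your version.

```json
{
  "rules": {
    "GCI0001": { "enabled": true },
    "GCI0007": { "enabled": false }
  },
  "confidenceThreshold": 0.8,
  "ignore": ["**/Migrations/**", "**/*.Designer.cs"]
}
```

JSON does not allow comments, so to be explicit: the key names above are not taken from GauntletCI's documentation; treat the generated file as the source of truth.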
All analysis is local. No code ever leaves your machine, including when using Ollama for local LLM enrichment.
- Telemetry is opt-in and anonymous (no code, no file paths, no content)
- All findings are stored only in ~/.gauntletci/audit-log.ndjson
- Ollama runs locally; all LLM inference stays on your machine
See the GauntletCI Charter for the full privacy commitment.
