Generate visual confidence badges and dashboards from aggregated code metrics.
confvis transforms JSON or YAML confidence reports into SVG gauge badges and HTML dashboards, making it easy to visualize code quality, test coverage, security scores, or any other metric you track.
```yaml
- uses: boinger/confvis@v1
  with:
    config: confidence.json
    output: badge.svg
```

See GitHub Action Documentation for all options.
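For context, here is a minimal workflow sketch around that step. The `uses`, `config`, and `output` lines come from the snippet above; the trigger, checkout, and artifact upload are standard Actions boilerplate, and uploading the badge is just one possible way to publish it:

```yaml
name: confidence-badge
on: [push]

jobs:
  badge:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Generate the badge from the confidence report in the repo
      - uses: boinger/confvis@v1
        with:
          config: confidence.json
          output: badge.svg
      # Publish the badge as a build artifact (one option among several)
      - uses: actions/upload-artifact@v4
        with:
          name: badge
          path: badge.svg
```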
```bash
go install github.com/boinger/confvis/cmd/confvis@latest
```

Or build from source:

```bash
git clone https://github.com/boinger/confvis.git
cd confvis
go build -o confvis ./cmd/confvis
```

confvis pulls metrics from tools you already use:
```bash
# Fetch coverage from Codecov
export CODECOV_TOKEN=your_token
confvis fetch codecov -p owner/repo -o coverage.json

# Fetch code quality from SonarQube (self-hosted or SaaS)
export SONARQUBE_URL=https://sonar.example.com
export SONARQUBE_TOKEN=squ_xxx
confvis fetch sonarqube -p myproject -o quality.json

# Aggregate with weights and generate badge + dashboard
confvis aggregate -c coverage.json:60 -c quality.json:40 -o ./output
```

Other integrations: GitHub Actions, Snyk, Trivy; see Sources.
Each fetched report contains:
```json
{
  "title": "Code Coverage",
  "score": 87,
  "threshold": 80,
  "factors": [
    {"name": "Line Coverage", "score": 89, "weight": 70},
    {"name": "Branch Coverage", "score": 82, "weight": 30}
  ]
}
```

- `score`: the metric value (0-100), auto-calculated from weighted factors
- `threshold`: minimum acceptable score; the badge shows pass/fail status
- `factors`: breakdown of contributing metrics with weights
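As a quick check on the example, the reported score is just the weighted average of the two factors:

```
(89 × 70 + 82 × 30) / (70 + 30) = 86.9 ≈ 87
```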
The aggregate command (from Step 1) combines multiple reports into a weighted overall score. See Schema Reference for the full specification.
Custom metrics? Create your own JSON/YAML for metrics confvis doesn't fetch directly. Or write a new module (and send me the PR, please)!
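For instance, a hand-written report might look like this. The field names follow the schema above; the "Docs Freshness" metric itself is made up for illustration:

```yaml
# Hypothetical hand-written metric report
title: Docs Freshness
threshold: 70
factors:
  - name: Pages updated this quarter
    score: 80
    weight: 60
  - name: Broken links fixed
    score: 65
    weight: 40
```

With `score` omitted, confvis should auto-calculate it as (80 × 60 + 65 × 40) / 100 = 74.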
Create a .confvis.yaml to set defaults and avoid repetitive flags:
```yaml
gauge:
  style: github
  fail_under: 80
  badge_type: gauge
sources:
  sonarqube:
    url: https://sonar.example.com
  snyk:
    org: my-org-id
```

Config is loaded from `.confvis.yaml` in the current directory or `~/.config/confvis/`. Precedence: config < environment < flags.
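To illustrate the precedence order, assuming the config above is in effect, an explicit flag still wins over both the file and any environment settings:

```bash
# .confvis.yaml sets fail_under: 80, but the explicit flag takes precedence
confvis gauge -c confidence.json -o badge.svg --fail-under 90
```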
See CLI Reference for full documentation.
Use --fail-under to enforce minimum scores, or --fail-on-regression to detect quality degradation:
```bash
# Fail the build if score drops below 75
confvis gauge -c confidence.json -o badge.svg --fail-under 75

# Save baseline on main branch (stored in git ref, no files needed)
confvis baseline save -c confidence.json

# Compare against stored baseline on PRs
confvis gauge -c confidence.json --compare-baseline --fail-on-regression -o badge.svg

# Or compare against a specific baseline file
confvis gauge -c confidence.json --compare baseline.json --fail-on-regression -o badge.svg

# Quiet mode for clean CI logs
confvis generate -c confidence.json -o ./output --fail-under 75 -q
```

Supports stdin/stdout for pipeline workflows:
```bash
# Pipe from another tool
metrics-tool export | confvis gauge -c - -o badge.svg

# Write directly to stdout
confvis gauge -c confidence.json -o - > badge.svg
```

confvis can fetch metrics directly from external systems:
```bash
# Fetch from SonarQube (code quality)
export SONARQUBE_URL=https://sonar.example.com
export SONARQUBE_TOKEN=squ_xxx
confvis fetch sonarqube -p myproject -o confidence.json

# Fetch from Codecov (coverage)
export CODECOV_TOKEN=xxx
confvis fetch codecov -p myorg/myrepo -o confidence.json

# Fetch from GitHub Actions (CI/CD)
export GITHUB_TOKEN=xxx
confvis fetch github-actions -p myorg/myrepo -o confidence.json

# Fetch from Snyk (security)
export SNYK_TOKEN=xxx
confvis fetch snyk --org my-org-id -p my-project-id -o confidence.json

# Fetch from Trivy (local security scan)
confvis fetch trivy -p . -o security.json

# Pipe directly to badge generation
confvis fetch sonarqube -p myproject -o - | confvis gauge -c - -o badge.svg
```

See Sources Documentation for details on available sources and their configuration.
Fetch metrics from an external source.

```bash
confvis fetch <source> -p <project> -o <output> [source-specific-flags]
```

Supported sources: `codecov`, `dependabot`, `github-actions`, `grype`, `semgrep`, `snyk`, `sonarqube`, `trivy`
Generate both an SVG badge and HTML dashboard.

```bash
confvis generate -c confidence.json -o ./output [--dark]
```

Creates:

- `output/badge.svg` - SVG gauge badge
- `output/dashboard/index.html` - interactive HTML dashboard
Generate a gauge badge in various formats.

```bash
confvis gauge -c confidence.json -o badge.svg [--format svg|json|text|markdown|github-comment] [--badge-type gauge|flat|sparkline] [--style github|minimal|corporate|high-contrast] [--dark]
```

Output formats:

- `svg` (default): SVG gauge badge image
- `json`: score metadata as JSON
- `text`: just the score number (for scripting)
- `markdown`: Markdown table for PR comments
- `github-comment`: GitHub-flavored Markdown with emoji and collapsible sections
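Since the text format prints only the number, it is handy for gating in shell scripts. A small sketch, assuming the score is written to stdout via `-o -` as with the other commands:

```bash
# Capture the bare score, then branch on it (75 is an arbitrary cutoff)
SCORE=$(confvis gauge -c confidence.json --format text -o -)
if [ "$SCORE" -lt 75 ]; then
  echo "Confidence too low: $SCORE"
  exit 1
fi
```

In practice `--fail-under` does this in one step; the sketch just shows where the text format fits.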
Badge types:

- `gauge` (default): semi-circular gauge
- `flat`: Shields.io-compatible rectangular badge (supports `--icon` for SVG path data)
- `sparkline`: trend line showing score history (use `--history-auto` to persist automatically)
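As a sketch, generating a sparkline badge while letting confvis persist the score history itself (both flags are from the list above):

```bash
# Each run appends the current score to the persisted history
confvis gauge -c confidence.json -o trend.svg --badge-type sparkline --history-auto
```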
Color styles: `github` (default), `minimal`, `corporate`, `high-contrast`
Aggregate multiple reports into a single dashboard with weighted scores.
```bash
# Aggregate multiple reports
confvis aggregate -c api/confidence.json -c web/confidence.json -o ./output

# With custom weights
confvis aggregate -c api/confidence.json:60 -c web/confidence.json:40 -o ./output

# Using glob patterns (monorepo)
confvis aggregate -c "services/*/confidence.json" -o ./output
```

Creates:

- `output/badge.svg` - aggregate SVG gauge badge
- `output/dashboard/index.html` - multi-report dashboard with all components
- `output/<report-title>.svg` - individual badges for each report
Use --fragment to generate an embeddable HTML fragment (no DOCTYPE wrapper) for Confluence or other systems.
See examples/dashboard for a working example with embedding instructions.
Manage baselines for regression detection in CI/CD.
```bash
# Save current score as baseline (stored in git ref by default)
confvis baseline save -c confidence.json

# Show current baseline
confvis baseline show

# Save to file instead of git ref
confvis baseline save -c confidence.json --file baseline.json
```

Use `--compare-baseline` with `confvis gauge` to automatically fetch and compare against the stored baseline.
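Putting those pieces together, a typical CI wiring looks roughly like this sketch. The `BRANCH` variable is a stand-in for whatever branch variable your CI exposes; the commands themselves are the ones shown above:

```bash
# Hypothetical CI step: save the baseline on main, compare everywhere else
if [ "$BRANCH" = "main" ]; then
  confvis baseline save -c confidence.json
else
  confvis gauge -c confidence.json --compare-baseline --fail-on-regression -o badge.svg
fi
```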
Create check runs on CI platforms directly from confidence reports.
```bash
# Auto-detect from GitHub Actions environment
confvis check github -c confidence.json

# Explicit options
confvis check github -c confidence.json \
  --owner myorg --repo myrepo --sha abc123 \
  --token $GITHUB_TOKEN

# Custom check name
confvis check github -c confidence.json --name "Code Quality"
```

In GitHub Actions, most options are auto-detected from environment variables. Requires the `checks: write` permission.
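In a workflow file, that permission plus the auto-detected invocation looks roughly like this sketch (the permissions block and checkout are standard Actions syntax; how you install confvis is up to you, so that step is elided):

```yaml
on: [pull_request]

jobs:
  confidence-check:
    runs-on: ubuntu-latest
    permissions:
      checks: write   # required for confvis check github
    steps:
      - uses: actions/checkout@v4
      # ...install confvis here, e.g. via go install...
      - run: confvis check github -c confidence.json
```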
| Field | Type | Required | Description |
|---|---|---|---|
| `title` | string | Yes | Report title |
| `score` | int | No* | Overall score (0-100), auto-calculated if omitted |
| `threshold` | int | Yes | Minimum passing score |
| `description` | string | No | Report description |
| `thresholds` | object | No | Custom color thresholds (`greenAbove`, `yellowAbove`) |
| `factors` | array | No | Breakdown of contributing factors |

*Score is auto-calculated as a weighted average when omitted and factors are present.
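Concretely, that weighted average is the weight-normalized sum of the factor scores (whether the result is rounded to an integer is an assumption here; the schema only calls it a weighted average):

```
score = (Σ factor.score × factor.weight) / (Σ factor.weight)
```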
Each factor:
| Field | Type | Required | Description |
|---|---|---|---|
| `name` | string | Yes | Factor name |
| `score` | int | Yes | Factor score (0-100) |
| `weight` | int | Yes | Weight in overall calculation |
| `description` | string | No | Factor description |
| `url` | string | No | Link to detailed report (clickable in dashboard) |
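For reference, here is a report that exercises every field in both tables. The values are illustrative; the field names and types are taken from the tables above:

```json
{
  "title": "Service Confidence",
  "description": "Aggregated quality signals for the API service",
  "threshold": 75,
  "thresholds": {"greenAbove": 85, "yellowAbove": 70},
  "factors": [
    {
      "name": "Line Coverage",
      "score": 88,
      "weight": 50,
      "description": "Statement coverage from unit tests",
      "url": "https://example.com/coverage-report"
    },
    {"name": "Static Analysis", "score": 92, "weight": 50}
  ]
}
```

With `score` omitted, it would be auto-calculated as (88 × 50 + 92 × 50) / 100 = 90.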
- GitHub Action
- Installation Guide
- CLI Reference
- Schema Reference
- Integration Guide
- External Sources
- Architecture
See the examples/ directory for:
- GitHub Actions workflow
- Makefile integration
- Multi-source score aggregation
MIT - see LICENSE