kapture is a Kubernetes cluster inspection CLI that collects runtime data and evaluates it against defined policy checks.
The functionality of the CLI is customizable via the Collector model.
Collectors are short-lived Kubernetes Jobs that kapture deploys to gather node- or cluster-level data that can't be obtained through the standard Kubernetes API alone, such as hugepage info, host-level metrics, or storage state.
Collectors are declared by a Policy Bundle consisting of a metadata.json file that describes the collector deployment (e.g., container image, commands to run, permissions) and one or more Rego policy files that define checks. Bundles can be hosted locally or over HTTP. Read more in docs/collectors.md.
```shell
make tidy
make build
./bin/kapture version
./bin/kapture checks
./bin/kapture scan --output table
./bin/kapture scan --output json
./bin/kapture scan --kubeconfig ~/.kube/config --context my-context
./bin/kapture scan --category production-readiness --severity warning
./bin/kapture scan --check kubevirt-api-availability --exclude-check bootstrap-placeholder
./bin/kapture scan --engine rego
./bin/kapture scan --engine rego --policy-file ./policy/custom.rego
./bin/kapture scan --engine rego --policy-bundle ./policy/bundle
./bin/kapture scan --engine rego --policy-bundle ./policy/baseline
./bin/kapture scan --namespace tenant-a --exclude-namespace tenant-a-shared
./bin/kapture scan --exclude-namespace "openshift-*" --exclude-namespace "cattle-*"
./bin/kapture scan --show-runbook --output table
./bin/kapture runbook
./bin/kapture runbook --id RUNBOOK-SEC-RBAC-001

# Collector workflow — gather node/cluster data then scan with it
./bin/kapture collect --bundle ./policy/baseline --output collector-data.json
./bin/kapture collect --collector-config ./my-collectors.json --output collector-data.json
./bin/kapture scan --engine rego --policy-bundle ./policy/baseline --collector-data collector-data.json

# Remote bundle (HTTPS tarball)
./bin/kapture collect --bundle https://github.com/myorg/policies/archive/refs/tags/v1.2.0.tar.gz --output collector-data.json
./bin/kapture scan --engine rego --policy-bundle https://github.com/myorg/policies/archive/refs/tags/v1.2.0.tar.gz --collector-data collector-data.json

# Remote monorepo (bundle lives under a subdirectory)
./bin/kapture scan --engine rego \
  --policy-bundle https://github.com/myorg/policies/archive/refs/tags/v1.2.0.tar.gz \
  --bundle-subdir policy/kubevirt --collector-data collector-data.json
```

Install from the project tap:
```shell
brew tap phenixblue/tap
brew install kapture
```

Homebrew formula publishing is handled by GoReleaser on version tags (`v*`) via `.github/workflows/release.yml`.
See docs/homebrew.md for upgrade, uninstall, version pinning, integrity verification, and tap maintenance details.
Tap/release prerequisites:
- Tap repository exists and is writable (default target: `phenixblue/homebrew-tap`)
- GitHub Actions secret `HOMEBREW_TAP_GITHUB_TOKEN` is configured with repo write access to the tap repository
- Optional override environment variables for GoReleaser: `HOMEBREW_TAP_OWNER`, `HOMEBREW_TAP_NAME`
To test release packaging without publishing:
```shell
make release-snapshot
```

For local dry runs that include SBOM generation and Homebrew formula output but skip signing:

```shell
make release-local
```

Release mode comparison:
| Mode | Command/Trigger | Publish GitHub Release | Publish Homebrew Tap | Generate SBOM | Sign Artifacts |
|---|---|---|---|---|---|
| Local snapshot | `make release-snapshot` | No | No | Yes | Yes (requires local cosign auth) |
| Local packaging dry run | `make release-local` | No | No | Yes | No |
| CI release | Push tag `v*` | Yes | Yes | Yes | Yes (OIDC in Actions) |
Environment variables use the KAPTURE_ prefix.
- `KAPTURE_OUTPUT` (`table` or `json`, default: `table`)
- `KAPTURE_TIMEOUT` (Go duration string, default: `30s`)
- `KAPTURE_CONCURRENCY` (default: `4`)
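For example, a CI job could pin these once via the environment instead of repeating flags. A small sketch (the values below are illustrative, not defaults, except where noted):

```shell
# Illustrative: set kapture configuration once for a session/CI job.
export KAPTURE_OUTPUT=json       # instead of the default "table"
export KAPTURE_TIMEOUT=60s       # Go duration string, default is 30s
export KAPTURE_CONCURRENCY=8     # default is 4
env | grep '^KAPTURE_' | sort
```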
Scan command supports:
- `--kubeconfig` to set a kubeconfig path
- `--context` to override kube context
- `--check` and `--exclude-check` to include/exclude by check ID
- `--namespace` and `--exclude-namespace` to scope namespace-based coverage controls (supports glob patterns like `tenant-*`)
- `--category` and `--severity` to filter findings
- `--engine` to select evaluator backend (`go` and `rego`)
- `--policy-file` to provide a custom Rego policy file with `data.kapture.findings` output
- `--policy-bundle` to provide a local directory or HTTPS `.tar.gz` URL of `.rego` files with optional `metadata.json`
- `--bundle-subdir` to point at a subdirectory within a remote archive (for monorepo layouts)
- `--show-runbook` to append compact runbook hints for failing findings
- `--collector-data` to inject pre-collected node/cluster data into `input.cluster.collectors` for Rego policies
Namespace scoping precedence for namespace-based coverage controls:
- System namespaces are always excluded first
- `--namespace` include filters are applied next (if provided)
- `--exclude-namespace` filters are applied last and win on conflicts
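The three steps can be illustrated with plain shell glob matching. This is a sketch only: the namespace names and the system-namespace patterns are made up, and kapture's real matching happens inside the scanner. The comments assume `--namespace "tenant-*"` and `--exclude-namespace "tenant-a-shared"`:

```shell
# Sketch of the precedence rules using shell glob matching.
selected=""
for ns in kube-system openshift-ingress tenant-a tenant-a-shared tenant-b; do
  case "$ns" in kube-*|openshift-*) continue ;; esac  # 1) system namespaces excluded first
  case "$ns" in tenant-*) ;; *) continue ;; esac      # 2) --namespace include filter applied next
  case "$ns" in tenant-a-shared) continue ;; esac     # 3) --exclude-namespace wins on conflict
  selected="${selected:+$selected }$ns"
done
echo "$selected"   # prints "tenant-a tenant-b"
```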
Rego finding contract is validated strictly. Each finding must include:
- `checkId`
- `title`
- `category`
- `severity` (`info`, `warning`/`warn`, or `error`)
- `message`
Baseline control findings also include:
- `reasonCode` for machine-parseable outcome classification
- `evidence` map with preflight signal states used in the decision
- `remediationId` for stable runbook lookup
- `remediation` guidance when action is required
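Putting the two lists together, a baseline finding would look something like the object below. Only the field names come from the contract above; every value is invented for illustration:

```shell
# Write an illustrative finding; field names follow the contract above,
# all values are made up.
cat > /tmp/example-finding.json <<'EOF'
{
  "checkId": "example-check",
  "title": "Example check title",
  "category": "production-readiness",
  "severity": "warning",
  "message": "Example failure message",
  "reasonCode": "EXAMPLE_REASON",
  "evidence": { "signalA": true, "signalB": false },
  "remediationId": "RUNBOOK-EXAMPLE-001",
  "remediation": "Example remediation guidance"
}
EOF
grep -c '"severity"' /tmp/example-finding.json
```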
Top-level JSON report metadata includes scan execution context:
- `metadata.engine`: evaluator backend used (`go` or `rego`)
- `metadata.namespaceInclude` and `metadata.namespaceExclude`: filters in effect
- `metadata.clusterContextHash`: deterministic hash for cluster context correlation
- `metadata.clusterContextHashVersion`: hash algorithm/input contract version (currently `v1`)
- `metadata.durationMillis`: scan runtime in milliseconds
- `metadata.policyFile` and `metadata.policyBundle` (when provided)
- `metadata.kubeContext` and `metadata.kubeconfigProvided`
Runbook mappings are documented in docs/runbooks.md.
Additional documentation:
- docs/check-catalog.md
- docs/policy-authoring.md
- docs/collectors.md
- docs/operations.md
- docs/workflows.md
Policy bundle metadata (optional metadata.json):
- `schemaVersion`: currently `v1alpha1`
- `policyVersion`: informational version for your bundle
- `minBinaryVersion`: optional minimum CLI version (for example `1.2.0`)
- `collectors`: optional array of `CollectorConfig` objects that `kapture collect` will run automatically when `--bundle` is provided (see docs/collectors.md)
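A minimal metadata.json using those fields might look like the following. The version values are examples only, and the `collectors` array is left empty because the `CollectorConfig` shape is documented in docs/collectors.md, not here:

```shell
# Create an illustrative bundle skeleton; field values are examples only.
mkdir -p /tmp/example-bundle
cat > /tmp/example-bundle/metadata.json <<'EOF'
{
  "schemaVersion": "v1alpha1",
  "policyVersion": "0.1.0",
  "minBinaryVersion": "1.2.0",
  "collectors": []
}
EOF
ls /tmp/example-bundle
```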
Checked-in baseline Rego bundle:
- `policy/baseline/baseline.rego`
- `policy/baseline/metadata.json`
If cluster connectivity is unavailable, the command emits degraded-mode findings instead of crashing.
Exit codes:
- `0`: no failing findings
- `2`: policy/check violations detected
- `3`: partial/degraded scan (for example, cluster connectivity/discovery limitations)
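Because violations and degraded scans get distinct codes, a CI wrapper can treat them differently. A sketch, where the `run_scan` stub stands in for `./bin/kapture scan` and simulates a degraded scan:

```shell
# CI-style gate over kapture's exit codes. run_scan is a stub standing in
# for "./bin/kapture scan"; here it simulates a degraded scan (code 3).
run_scan() { return 3; }

rc=0
run_scan || rc=$?
case "$rc" in
  0) verdict="clean" ;;
  2) verdict="violations" ;;            # fail the pipeline here in real CI
  3) verdict="degraded (continuing)" ;; # tolerate partial scans
  *) verdict="unexpected ($rc)" ;;
esac
echo "scan verdict: $verdict"
```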
```shell
make fmt
make test
make build
make e2e-kind-pass
make e2e-kind-fail
```

The `e2e-kind-pass` and `e2e-kind-fail` targets call `scripts/e2e_kind_scan.sh` to:
- create a kind cluster profile
- install/configure KubeVirt
- run `kapture scan` against that cluster
Behavior:
- `make e2e-kind-pass`: expects scan exit code `0`
- `make e2e-kind-fail`: expects scan exit code to be non-zero
Useful environment variables:
- `KUBEVIRT_VERSION` (default `v1.2.2`)
- `SCAN_ENGINE` (default `go`)
- `VM_COUNT` (default `3`)
- `WAIT_FOR_VMIS` (`true` by default)
- `VMI_WAIT_TIMEOUT_SECONDS` (default `180`)
- `VMI_WAIT_INTERVAL_SECONDS` (default `5`)
- `CLUSTER_NAME` (mode-specific default)
- `TARGET_NAMESPACE` (mode-specific default)
- `RECREATE_CLUSTER` (`true` by default)
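These are plain environment overrides, so a faster smoke profile can be composed inline. The combination below is illustrative and requires Docker plus kind to actually run, so it is shown commented out; the default-resolution lines assume the script uses standard `${VAR:-default}` expansion:

```shell
# Illustrative override combination (requires Docker + kind; not executed here):
#   KUBEVIRT_VERSION=v1.2.2 VM_COUNT=1 WAIT_FOR_VMIS=false make e2e-kind-pass

# Presumed default resolution inside scripts/e2e_kind_scan.sh:
VM_COUNT="${VM_COUNT:-3}"
VMI_WAIT_TIMEOUT_SECONDS="${VMI_WAIT_TIMEOUT_SECONDS:-180}"
echo "VM_COUNT=$VM_COUNT timeout=${VMI_WAIT_TIMEOUT_SECONDS}s"
```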
Manual CI execution:
- Use the GitHub Actions workflow `e2e-manual` (workflow_dispatch) to run the same pass/fail profiles on demand.