# cgpu-attest — Confidential GPU Attestation Toolkit

A modular Python toolkit for attesting confidential GPUs. Verifies that GPU firmware, drivers, and configuration are authentic and untampered by validating hardware measurements against vendor-signed reference manifests.

Currently supported: NVIDIA H100, H200, and Blackwell (B100/B200/GB200). Planned: AMD confidential GPU support.

## Demo

*Attesting an NVIDIA H200 (demo recording).*

## How attestation works

Confidential Computing GPUs produce cryptographically signed measurement reports during secure boot. Each report contains hashes of every firmware component loaded into the GPU. Attestation compares these runtime measurements against golden values published by the GPU vendor in signed Reference Integrity Manifests (RIMs).
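Conceptually, the comparison step reduces to a per-index equality check between runtime hashes and RIM golden values. A minimal sketch (illustrative only, not the toolkit's internal code):

```python
def compare_measurements(runtime: dict[int, str], golden: dict[int, str]) -> list[int]:
    """Return the measurement indices whose runtime hash differs from the
    golden value published in the vendor-signed RIM. An empty list means
    every measured component matched its reference."""
    return [index for index, value in sorted(golden.items())
            if runtime.get(index) != value]
```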

The toolkit supports two attestation modes:

- **Remote (default)** — Evidence is collected from the GPU and submitted to the vendor's remote attestation service (e.g. NVIDIA NRAS), which validates everything server-side and returns a signed JWT.
- **Local** — The vendor SDK validates the certificate chain and measurements locally using OCSP and RIM files fetched from the vendor's RIM service. No evidence leaves your machine.
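
In remote mode the service's verdict comes back as a JWT. Its payload can be inspected with only the standard library — a debugging sketch; trust decisions should rest on verifying the token's signature, never on unverified claims:

```python
import base64
import json

def unverified_jwt_claims(token: str) -> dict:
    """Decode a JWT's payload segment WITHOUT verifying its signature.
    Useful only for inspecting what the attestation service returned;
    never base a trust decision on unverified claims."""
    payload = token.split(".")[1]
    payload += "=" * (-len(payload) % 4)  # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload))
```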

## Installation

```sh
# Full install with NVIDIA SDK + JWT support (recommended for NVIDIA GPUs):
pip install "cgpu-attest[nvidia]"

# Minimal NVIDIA install (NVML fallback only, no SDK):
pip install "cgpu-attest[nvidia-minimal]"

# Base install (if you manage GPU libraries separately):
pip install cgpu-attest
```

## Requirements

- Python 3.10+
- Linux with a GPU driver supporting Confidential Computing
- For NVIDIA: driver ≥ 525 with CC mode enabled

## Quick start

### Command line

```sh
# Attest all detected GPUs (default: remote mode):
cgpu-attest

# Attest only H200 GPUs:
cgpu-attest --gpu-family H200

# Local mode (no evidence sent to remote service):
cgpu-attest --mode local

# Save results as JSON:
cgpu-attest --output results.json

# Behind a corporate proxy:
cgpu-attest --http-proxy http://proxy.corp:3128
```

### Python API

```python
from cgpu_attest import run_attestation

results = run_attestation(mode="remote")
for r in results:
    print(f"{r.gpu_name}: {r.overall_status}")

# Or with more control:
from cgpu_attest import attest_gpu
from cgpu_attest.gpu_discovery import enumerate_gpus, init_nvml, shutdown_nvml
from cgpu_attest.orchestrator import generate_nonce

init_nvml()
for gpu in enumerate_gpus():
    result = attest_gpu(gpu, nonce=generate_nonce(), mode="local")
    print(result.overall_status, result.claims)
shutdown_nvml()
```

## CLI reference

```sh
cgpu-attest [OPTIONS]
```

| Option | Description |
| --- | --- |
| `--mode {remote,local}` | `remote` sends evidence to the attestation service (default); `local` validates via SDK + OCSP |
| `--gpu-family FAMILY` | Only attest GPUs of this family (e.g. `H200`, `H100`). Omit to attest all |
| `--nras-url URL` | Custom NVIDIA Remote Attestation Service endpoint |
| `--ocsp-url URL` | Custom OCSP endpoint |
| `--output FILE` | Write JSON results to `FILE` |
| `--http-proxy URL` | HTTP/HTTPS proxy for outbound requests |
| `--verbose`, `-v` | Enable DEBUG logging |
| `--test-rim-dir DIR` | Testing only. Use local RIM files instead of the vendor service |

## Package structure

```
cgpu_attest/
├── __init__.py              # Public API
├── __main__.py              # python -m cgpu_attest
├── cli.py                   # Argument parsing, summary table
├── constants.py             # Service URLs, claim keys, GPU profile registry
├── deps.py                  # Lazy-import guards (pynvml, SDK, PyJWT)
├── models.py                # GpuInfo, AttestationEvidence, AttestationResult
├── gpu_discovery.py         # NVML init/shutdown, enumerate_gpus()
├── evidence.py              # Evidence collection (SDK + NVML fallback)
├── jwt_helpers.py           # JWT decoding, SDK token list parsing
├── attest_remote.py         # Remote attestation (SDK + REST fallback)
├── attest_local.py          # Local attestation (SDK + OCSP)
├── orchestrator.py          # attest_gpu(), run_attestation()
├── testing.py               # Dev/test only: local RIM directory patching
└── gpu_profiles/            # One file per GPU family
    ├── h200.py              # NVIDIA H200
    ├── h100.py              # NVIDIA H100
    └── blackwell.py         # NVIDIA B100, B200, GB200
```

## Adding a new GPU family

Create a new file in `gpu_profiles/` and register the profile:

```python
# gpu_profiles/mi300x.py
"""AMD MI300X GPU profile."""

from cgpu_attest.constants import register_gpu_profile

register_gpu_profile(
    "MI300X",
    name_patterns=["MI300X"],
    architecture="CDNA3",
)
```

Then import it in `gpu_profiles/__init__.py`:

```python
from cgpu_attest.gpu_profiles import h100, h200, blackwell, mi300x
```

No other code changes needed — the new GPU will be auto-detected and attested.

## JSON output format

When using `--output`, results are written as:

```json
{
  "tool_version": "2.0",
  "mode": "remote",
  "timestamp": "2026-04-01T08:00:00Z",
  "results": [
    {
      "gpu_uuid": "GPU-9ef0b912-...",
      "gpu_name": "NVIDIA H200 NVL",
      "overall_status": "PASS",
      "claims": { ... },
      "token": "<JWT string>",
      "errors": [],
      "verified_at": "2026-04-01T08:00:00Z"
    }
  ]
}
```
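A report in this shape can be summarized with a few lines of standard-library Python — a sketch assuming only the fields shown above:

```python
import json

def summarize(report: dict) -> dict[str, str]:
    """Map each GPU UUID to its overall status from a cgpu-attest JSON report."""
    return {r["gpu_uuid"]: r["overall_status"] for r in report["results"]}

# Typical use:
# with open("results.json") as f:
#     print(summarize(json.load(f)))
```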

## Testing with local RIM files

> **Development/testing feature only.** Disables RIM signature verification. Never use in production.

```sh
cgpu-attest --test-rim-dir ./local_rims
```

## Measurement indices (NVIDIA Hopper)

When running with --verbose, the SDK logs 64 measurement indices (0–63). Key indices for H100/H200:

| Index | Component | Description |
| --- | --- | --- |
| 7 | FSP firmware | Hardware root of trust |
| 21–22 | VBIOS | VBIOS image and configuration |
| 25–27 | PMU / GSP-RM / ACR | Core GPU trusted execution firmware |
| 29–31 | Driver | Kernel driver, config, GSP firmware |
| 37–41 | Additional firmware | Secondary microcontrollers |
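For annotating verbose logs, the table can be folded into a small lookup. The index ranges and labels come from the table above; the helper itself is our own convenience, not part of the toolkit:

```python
# Hopper measurement-index labels, taken from the table above.
HOPPER_INDEX_LABELS = {
    range(7, 8): "FSP firmware (hardware root of trust)",
    range(21, 23): "VBIOS image and configuration",
    range(25, 28): "PMU / GSP-RM / ACR trusted execution firmware",
    range(29, 32): "Kernel driver, config, GSP firmware",
    range(37, 42): "Additional firmware (secondary microcontrollers)",
}

def describe_index(index: int) -> str:
    """Return a human-readable label for a Hopper measurement index."""
    for indices, label in HOPPER_INDEX_LABELS.items():
        if index in indices:
            return label
    return "unassigned / reserved"
```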

## License

MIT — see LICENSE.
