# EU AI Act compliance schema (`schemas/eu-ai-act.yaml`) — high-risk AI system documentation (#99)
## Context
The EU AI Act's provisions for high-risk AI systems become applicable on August 2, 2026 — 4 months away. Articles 9-15 and Annex IV mandate extensive documentation, risk management, and traceability for high-risk AI systems. Fines reach up to €35 million or 7% of global annual turnover for non-compliance.
Rivet already has the infrastructure for this:
- STPA schema covers safety analysis (losses, hazards, UCAs, loss scenarios, control structure)
- ASPICE schema covers the V-model traceability chain (stakeholder → system → software → verification)
- Cybersecurity schema covers ISO 21434 / ASPICE SEC.1-4
- Schema composition via `extends` and bridge schemas (#93: Spec-driven development: schema packages, bridges, guide API, CRUD CLI) enables layering
What's missing: artifact types that directly map to the EU AI Act's Annex IV documentation requirements, plus traceability rules that enforce the Act's specific obligations.
Strategic value: No open-source traceability tool targets EU AI Act compliance for AI-enabled safety-critical systems. DOORS, Polarion, and codebeamer require expensive licenses and don't natively understand the Act's structure. Rivet can be the git-native, schema-validated compliance backbone — especially for teams already using STPA/ASPICE schemas.
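The composition mechanism from #93 would let such a team layer the new schema on top of what they already use. A hypothetical sketch (the `extends` syntax follows the existing schemas; the project name is illustrative):

```yaml
# Hypothetical project-level schema: EU AI Act compliance layered over
# the existing STPA and ASPICE schemas via composition (#93).
schema:
  name: my-adas-project
  version: "0.1.0"
  extends: [stpa, aspice, eu-ai-act]  # bridge schemas supply the cross-domain links
```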
## EU AI Act Requirements Mapping
### Annex IV: Technical Documentation (9 mandatory sections)
The schema maps each Annex IV section to one or more artifact types:
| Annex IV Section | Artifact Type(s) | Key Fields | Key Links |
|---|---|---|---|
| §1: General description | `ai-system-description` | `intended-purpose`, `provider`, `version`, `hardware-deps`, `software-deps`, `deployment-forms` | — |
| §2: Development & design | `design-specification` | `algorithms`, `design-choices`, `rationale`, `optimization-objectives`, `training-data-provenance` | `satisfies` → `ai-system-description` |
| §2 (data) | `data-governance-record` | `data-sources`, `collection-method`, `labeling-method`, `preparation-steps`, `bias-assessment` | `governs` → `design-specification` |
| §2 (pre-trained) | `third-party-component` | `provider`, `version`, `license`, `intended-use`, `known-limitations` | `used-by` → `design-specification` |
| §3: Monitoring & control | `monitoring-measure` | `mechanism-type`, `logging-scope`, `alert-conditions`, `human-intervention-capability` | `monitors` → `ai-system-description` |
| §4: Performance metrics | `performance-evaluation` | `metric-name`, `value`, `methodology`, `population-subgroups`, `bias-results` | `evaluates` → `design-specification` |
| §5: Risk management | `risk-assessment` | `risk-description`, `likelihood`, `severity`, `affected-rights`, `risk-level` | `leads-to` → `ai-system-description` |
| §5 (mitigation) | `risk-mitigation` | `measure-description`, `residual-risk`, `effectiveness-evidence` | `mitigates` → `risk-assessment` |
| §6: Lifecycle changes | `lifecycle-change` | `change-type`, `version-from`, `version-to`, `rationale`, `compliance-impact` | `modifies` → `design-specification` |
| §7: Standards | `standards-reference` | `standard-id`, `title`, `coverage-scope`, `partial-application-rationale` | `applied-to` → `ai-system-description` |
| §8: Declaration | `conformity-declaration` | `declaration-date`, `notified-body`, `conformity-scope` | `declares` → `ai-system-description` |
| §9: Post-market monitoring | `post-market-plan` | `monitoring-scope`, `drift-detection`, `incident-reporting`, `review-frequency` | `monitors-post-market` → `ai-system-description` |
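As a concrete illustration, an Annex IV §5 risk entry and its mitigation could be captured as two linked artifact files. This is a sketch only: the IDs, file paths, and `links:` syntax are hypothetical, while the field names come from the table above.

```yaml
# risk-assessments/RISK-007.yaml — hypothetical Annex IV §5 entry
id: RISK-007
type: risk-assessment
risk-description: Lane-detection model misclassifies faded markings at night
likelihood: occasional
severity: serious
affected-rights: [safety]
risk-level: high
links:
  leads-to: [SYS-001]   # the ai-system-description artifact
---
# risk-mitigations/MIT-012.yaml — would satisfy the risk-has-mitigation rule
id: MIT-012
type: risk-mitigation
measure-description: Night-time augmentation set plus runtime confidence gating
residual-risk: low
effectiveness-evidence: PERF-031
links:
  mitigates: [RISK-007]
```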
### Articles 9-15: Obligation-specific artifact types
| Article | Requirement | Artifact Type | Purpose |
|---|---|---|---|
| Art. 9 | Risk management system | `risk-management-process` | Documents the continuous, iterative risk management process |
| Art. 9(2a) | Known risk identification | `risk-assessment` | Individual risk entries with likelihood/severity |
| Art. 9(2b) | Foreseeable misuse risks | `misuse-risk` | Risks from reasonably foreseeable misuse |
| Art. 9(2c) | Post-market emerging risks | `emerging-risk` (links to `post-market-plan`) | Risks discovered after deployment |
| Art. 9(4) | Risk mitigation measures | `risk-mitigation` | Targeted measures for identified risks |
| Art. 10 | Data governance | `data-governance-record` | Training/validation/test data provenance and quality |
| Art. 11 | Technical documentation | Schema itself (Annex IV coverage) | The schema enforces Annex IV completeness |
| Art. 12 | Record-keeping / logging | `logging-specification` | What events are logged, retention, access |
| Art. 13 | Transparency | `transparency-record` | Information provided to deployers, user-facing docs |
| Art. 14 | Human oversight | `human-oversight-measure` | Specific human intervention capabilities |
| Art. 15(1) | Accuracy | `accuracy-evaluation` | Accuracy metrics and methodology |
| Art. 15(2) | Robustness | `robustness-evaluation` | Resilience testing, adversarial evaluation |
| Art. 15(3) | Cybersecurity | `cybersecurity-evaluation` | Security testing, vulnerability assessment |
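An Art. 14 artifact might then look like the following sketch. The `overseen-by` link name matches the traceability rule below; everything else (file path, ID, extra fields) is hypothetical.

```yaml
# oversight/HOM-002.yaml — hypothetical Art. 14 human-oversight-measure
id: HOM-002
type: human-oversight-measure
capability: Remote operator can override planner output within 500 ms
activation-conditions: [confidence-below-threshold, geofence-exit]
links:
  overseen-by: [SYS-001]  # would satisfy the system-has-human-oversight rule
```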
### Traceability rules

```yaml
traceability-rules:
  # Every AI system must have a complete risk management process
  - name: system-has-risk-management
    description: Every AI system description must be covered by a risk management process (Art. 9)
    source-type: ai-system-description
    required-backlink: manages-risk-for
    from-types: [risk-management-process]
    severity: error

  # Every identified risk must have a mitigation measure
  - name: risk-has-mitigation
    description: Every risk assessment must have at least one mitigation measure (Art. 9(4))
    source-type: risk-assessment
    required-backlink: mitigates
    from-types: [risk-mitigation]
    severity: error

  # Every AI system must have data governance documentation
  - name: system-has-data-governance
    description: Training/validation data must be governed (Art. 10)
    source-type: design-specification
    required-backlink: governs
    from-types: [data-governance-record]
    severity: error

  # Every AI system must have monitoring measures
  - name: system-has-monitoring
    description: AI system must have monitoring and control documentation (Art. 12)
    source-type: ai-system-description
    required-backlink: monitors
    from-types: [monitoring-measure]
    severity: error

  # Every AI system must have human oversight documentation
  - name: system-has-human-oversight
    description: Human oversight measures must be documented (Art. 14)
    source-type: ai-system-description
    required-backlink: overseen-by
    from-types: [human-oversight-measure]
    severity: error

  # Every AI system must have accuracy, robustness, cybersecurity evaluations
  - name: system-has-accuracy
    description: Accuracy metrics must be documented (Art. 15(1))
    source-type: design-specification
    required-backlink: evaluates
    from-types: [accuracy-evaluation, performance-evaluation]
    severity: error

  - name: system-has-robustness
    description: Robustness evaluation must be documented (Art. 15(2))
    source-type: design-specification
    required-backlink: evaluates
    from-types: [robustness-evaluation, performance-evaluation]
    severity: warning

  # Every system must reference applicable standards
  - name: system-has-standards
    description: Applicable harmonised standards must be listed (Annex IV §7)
    source-type: ai-system-description
    required-backlink: applied-to
    from-types: [standards-reference]
    severity: warning

  # Every system must have a conformity declaration
  - name: system-has-declaration
    description: EU declaration of conformity required (Annex IV §8)
    source-type: ai-system-description
    required-backlink: declares
    from-types: [conformity-declaration]
    severity: error

  # Every system must have post-market monitoring
  - name: system-has-post-market
    description: Post-market monitoring plan required (Art. 72, Annex IV §9)
    source-type: ai-system-description
    required-backlink: monitors-post-market
    from-types: [post-market-plan]
    severity: error

  # Risk management must cover foreseeable misuse
  - name: risk-covers-misuse
    description: Risk management should identify foreseeable misuse risks (Art. 9(2b))
    source-type: risk-management-process
    required-backlink: identified-by
    from-types: [misuse-risk]
    severity: warning

  # Transparency records must exist
  - name: system-has-transparency
    description: Transparency information for deployers must be documented (Art. 13)
    source-type: ai-system-description
    required-backlink: transparency-for
    from-types: [transparency-record]
    severity: error
```

## Bridge schemas for STPA/ASPICE composition
The EU AI Act schema should compose with existing safety schemas:
```yaml
# Bridge: eu-ai-act ↔ stpa
# Maps STPA safety analysis to EU AI Act risk management
schema:
  name: eu-ai-act-stpa-bridge
  version: "0.1.0"
  extends: [eu-ai-act, stpa]

link-types:
  - name: risk-identified-by-stpa
    inverse: stpa-identifies-risk
    description: Risk assessment derived from STPA hazard/loss analysis
    source-types: [risk-assessment]
    target-types: [hazard, sub-hazard, loss]
  - name: mitigation-from-constraint
    inverse: constraint-provides-mitigation
    description: Risk mitigation derived from STPA system constraint
    source-types: [risk-mitigation]
    target-types: [system-constraint, controller-constraint]

traceability-rules:
  - name: stpa-hazards-map-to-risks
    description: STPA hazards should be linked to EU AI Act risk assessments
    source-type: hazard
    required-backlink: risk-identified-by-stpa
    from-types: [risk-assessment]
    severity: warning
```

```yaml
# Bridge: eu-ai-act ↔ aspice
# Maps ASPICE verification evidence to EU AI Act performance evaluation
schema:
  name: eu-ai-act-aspice-bridge
  version: "0.1.0"
  extends: [eu-ai-act, aspice]

link-types:
  - name: evaluation-from-verification
    inverse: verification-supports-evaluation
    description: Performance evaluation uses ASPICE verification evidence
    source-types: [performance-evaluation, accuracy-evaluation, robustness-evaluation]
    target-types: [sw-verification, sys-verification, verification-verdict]

traceability-rules:
  - name: evaluations-backed-by-verification
    description: Performance evaluations should reference ASPICE verification evidence
    source-type: performance-evaluation
    required-link: evaluation-from-verification
    target-types: [sw-verification, sys-verification, verification-verdict]
    severity: warning
```

## Phases
### Phase 1: Core schema + Annex IV artifact types

- `schemas/eu-ai-act.yaml` with all artifact types from the mapping above
- Link types specific to EU AI Act (`manages-risk-for`, `governs`, `overseen-by`, etc.)
- Traceability rules for Articles 9-15 obligations
- `rivet init --schema eu-ai-act` with starter artifacts
- Example project in `examples/eu-ai-act/`
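A starter artifact scaffolded by `rivet init --schema eu-ai-act` might look like the stub below. This is hypothetical (the actual scaffold content is an implementation decision); the field names come from the Annex IV §1 mapping above.

```yaml
# systems/SYS-001.yaml — Annex IV §1 general description (starter stub)
id: SYS-001
type: ai-system-description
intended-purpose: <describe the system's intended purpose>
provider: <legal entity placing the system on the market>
version: 0.1.0
hardware-deps: []
software-deps: []
deployment-forms: [embedded]
```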
### Phase 2: Bridge schemas

- `eu-ai-act-stpa-bridge.yaml` — map STPA analysis to risk management
- `eu-ai-act-aspice-bridge.yaml` — map ASPICE verification to performance evaluation
- `eu-ai-act-cybersecurity-bridge.yaml` — map ISO 21434 to Art. 15(3) cybersecurity
- Auto-resolution via the #93 (Spec-driven development: schema packages, bridges, guide API, CRUD CLI) bridge manifest system
### Phase 3: Compliance dashboard views
- Annex IV completeness checklist in dashboard (which sections have coverage, which don't)
- Risk register view (all risk-assessments with mitigation status)
- Compliance matrix: Article × artifact type coverage
### Phase 4: Export for notified bodies

- `rivet export --format eu-ai-act-report` — structured report following Annex IV sections
- `rivet export --format eu-ai-act-dossier` — full compliance dossier with all linked artifacts
- PDF export option for submission to notified bodies
## References
- EU AI Act full text
- Annex IV: Technical Documentation — 9 mandatory sections
- Article 9: Risk Management System — continuous lifecycle process
- Article 11: Technical Documentation — keep up-to-date, pre-market
- Annex III: High-Risk AI Systems — classification
- High-risk requirements overview
- Compliance timeline — Aug 2, 2026 deadline
- ISO 42001 (AI management system) — complementary standard
- ISO/IEC 23894 (AI risk management) — complementary standard
- Existing Rivet schemas: `stpa.yaml`, `aspice.yaml`, `cybersecurity.yaml` — composition targets
- Issue #93 (Spec-driven development: schema packages, bridges, guide API, CRUD CLI) — bridge schema mechanism enabling cross-domain composition