Context
Multiple safety standards now require structured safety cases — not just traceability matrices, but goal-structured arguments with claims backed by evidence:
- UL 4600 (autonomous systems): Requires safety cases with goals, arguments, and evidence. Every claim must be supported by verifiable, auditable evidence. Traceability from hazard → mitigation → test result is mandatory.
- ISO/PAS 8800 (AI in road vehicles, published Dec 2024): Requires assurance arguments for AI safety — structured claims about AI element safety backed by lifecycle evidence.
- ISO 21448 (SOTIF): Requires systematic evaluation of functional insufficiencies with structured evidence.
Goal Structuring Notation (GSN) is the standard visual/structural notation for safety cases. It defines:
- Goals — safety claims (e.g., "System detects pedestrians in all lighting conditions")
- Strategies — decomposition rationale (e.g., "Argued over environmental conditions")
- Solutions — evidence (test reports, analysis results, verification records)
- Context — assumptions and scope
- Justification — rationale for argument choices
- Away goals — references to sub-arguments in other modules
No open-source, git-native tool supports GSN as structured, validated artifacts. Existing tools are GUI-only commercial products (Astah GSN, Adelard ASCE) or research prototypes (FASTEN.Safe, CertWare).
Design
New schema: `schemas/safety-case.yaml`

```yaml
schema:
  name: safety-case
  version: "0.1.0"
  extends: [common]
  description: >
    Structured safety case artifacts following GSN (Goal Structuring Notation).
    Supports UL 4600, ISO/PAS 8800 assurance arguments, and general safety
    case management.

artifact-types:
  - name: safety-goal
    description: A top-level safety claim to be demonstrated
    fields:
      - name: claim
        type: text
        required: true
      - name: goal-type
        type: string
        required: false
        allowed-values: [system-level, element-level, operational, derived]

  - name: safety-strategy
    description: Decomposition rationale — how a goal is broken into sub-goals
    fields:
      - name: rationale
        type: text
        required: true
    link-fields:
      - name: decomposes
        link-type: decomposes
        target-types: [safety-goal]
        required: true
        cardinality: exactly-one

  - name: safety-solution
    description: Evidence supporting a goal (test report, analysis, review record)
    fields:
      - name: evidence-type
        type: string
        required: true
        allowed-values: [test-report, analysis, simulation, review, field-data, formal-proof]
      - name: evidence-ref
        type: string
        required: false
        description: Path or ID of the evidence artifact
    link-fields:
      - name: supports
        link-type: supports
        target-types: [safety-goal]
        required: true
        cardinality: one-or-many

  - name: safety-context
    description: Assumptions, scope, or environmental conditions for a goal
    link-fields:
      - name: scopes
        link-type: scopes
        target-types: [safety-goal]
        required: true

  - name: safety-justification
    description: Rationale for an argument choice
    link-fields:
      - name: justifies
        link-type: justifies
        target-types: [safety-strategy, safety-goal]
        required: true

  - name: away-goal
    description: Reference to a sub-argument in another module or project
    fields:
      - name: module-ref
        type: string
        required: true
        description: External module or project reference
      - name: external-goal-id
        type: string
        required: true
    link-fields:
      - name: delegates-to
        link-type: delegates-to
        target-types: [safety-goal]
        required: false

  # ISO/PAS 8800 specific
  - name: ai-assurance-argument
    description: Assurance argument for an AI element (ISO/PAS 8800)
    fields:
      - name: ai-element
        type: string
        required: true
        description: The AI/ML element this argument covers
      - name: lifecycle-phase
        type: string
        required: false
        allowed-values: [requirements, design, training, verification, validation, deployment, monitoring]
    link-fields:
      - name: argues-for
        link-type: argues-for
        target-types: [safety-goal]
        required: true
```
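As an illustration, a goal with supporting evidence might look like the following artifact instances. The instance file layout (a YAML list with `type` and `id` keys) is an assumption for this sketch; only the field and link names come from the schema above.

```yaml
# Hypothetical artifact instances conforming to the schema above.
# The list-of-artifacts layout and the id values are illustrative.
- type: safety-goal
  id: G-PED-01
  claim: System detects pedestrians in all lighting conditions
  goal-type: system-level

- type: safety-solution
  id: SN-PED-01
  evidence-type: test-report
  evidence-ref: reports/pedestrian-night-tests.html
  supports: [G-PED-01]
```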
Traceability rules
- Every `safety-goal` must be supported by at least one `safety-solution` or decomposed via a `safety-strategy` (error)
- Every `safety-strategy` must decompose into sub-goals (warning)
- Every `safety-solution` must reference evidence (warning)
- Every top-level goal (no parent) must have context (warning)
- Every `ai-assurance-argument` must link to a safety goal (error)
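A minimal sketch of how most of these rules could be checked, assuming artifacts are parsed into plain dicts keyed by `type`, `id`, and their link fields. The data model and function names here are illustrative, not the tool's actual API.

```python
# Traceability-rule checker sketch. The artifact/link representation is an
# assumption for illustration: link fields hold target ids, with list-valued
# links for one-or-many cardinality and a scalar for exactly-one.

def check_safety_case(artifacts):
    """Return (level, message) findings for the traceability rules."""
    findings = []
    goals = [a for a in artifacts if a["type"] == "safety-goal"]
    supported = {t for a in artifacts if a["type"] == "safety-solution"
                 for t in a.get("supports", [])}
    decomposed = {a["decomposes"] for a in artifacts
                  if a["type"] == "safety-strategy"}
    scoped = {t for a in artifacts if a["type"] == "safety-context"
              for t in a.get("scopes", [])}

    for g in goals:
        # Rule: every goal is supported or decomposed (error).
        if g["id"] not in supported and g["id"] not in decomposed:
            findings.append(("error", f"{g['id']}: no solution or strategy"))
        # Rule: top-level goals need context (warning).
        if g.get("parent") is None and g["id"] not in scoped:
            findings.append(("warning", f"{g['id']}: top-level goal lacks context"))

    for a in artifacts:
        # Rule: solutions must reference evidence (warning).
        if a["type"] == "safety-solution" and not a.get("evidence-ref"):
            findings.append(("warning", f"{a['id']}: solution lacks evidence-ref"))
        # Rule: AI assurance arguments must link a goal (error).
        if a["type"] == "ai-assurance-argument" and not a.get("argues-for"):
            findings.append(("error", f"{a['id']}: must link a safety goal"))
    return findings

arts = [
    {"type": "safety-goal", "id": "G1", "parent": None},
    {"type": "safety-solution", "id": "Sn1", "supports": ["G1"],
     "evidence-ref": "reports/ped-night.html"},
]
print(check_safety_case(arts))  # → [('warning', 'G1: top-level goal lacks context')]
```

The "strategy must decompose into sub-goals" rule is omitted here because it needs the reverse link index; the same set-building pattern applies.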
Bridge schemas
- `safety-case-stpa-bridge.yaml`: STPA hazards/losses → safety goals. System constraints → safety strategies. Controller constraints → sub-goals.
- `safety-case-aspice-bridge.yaml`: Verification verdicts → safety solutions. Requirements → safety contexts.
- `safety-case-eu-ai-act-bridge.yaml`: Risk assessments (EU AI Act compliance schema, `schemas/eu-ai-act.yaml`, #99) → safety goals. Risk mitigations → safety solutions.
Dashboard views
- GSN diagram rendering (via etch layout engine) — goals as rectangles, strategies as parallelograms, solutions as circles, connected by arrows
- Safety case completeness checker — which goals lack evidence?
- Export: GSN XML for interop with Astah/ASCE, HTML for review
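One way to sketch the diagram view is emitting Graphviz DOT with the conventional GSN shapes. This is an illustrative stand-in, not the etch layout engine's actual interface; the node/edge input format is assumed.

```python
# Sketch: render a GSN fragment as Graphviz DOT using the conventional
# shapes (goal = rectangle/box, strategy = parallelogram, solution = circle).
# Input format (tuples of id/type/label and id pairs) is an assumption.

SHAPES = {"safety-goal": "box",
          "safety-strategy": "parallelogram",
          "safety-solution": "circle"}

def to_dot(nodes, edges):
    lines = ["digraph gsn {"]
    for node_id, node_type, label in nodes:
        shape = SHAPES.get(node_type, "ellipse")
        lines.append(f'  "{node_id}" [shape={shape}, label="{label}"];')
    for src, dst in edges:
        lines.append(f'  "{src}" -> "{dst}";')
    lines.append("}")
    return "\n".join(lines)

dot = to_dot(
    nodes=[("G1", "safety-goal", "Detect pedestrians"),
           ("S1", "safety-strategy", "Argue over lighting"),
           ("Sn1", "safety-solution", "Night test report")],
    edges=[("S1", "G1"), ("Sn1", "G1")])
print(dot)
```

Piping the output through `dot -Tsvg` would give a reviewable diagram; a real renderer would also need GSN's `SupportedBy` vs `InContextOf` arrow styles.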
Relevance to AI safety
ISO/PAS 8800 requires assurance arguments for AI elements. The ai-assurance-argument type captures the AI safety lifecycle phases and links to the broader safety case. Combined with STPA-for-AI analysis (losses from ML misclassification, UCAs from autonomous control), this gives a complete AI safety case in git.
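A hypothetical instance, using the fields defined in the schema above (the goal id it links to is illustrative):

```yaml
# Hypothetical ai-assurance-argument artifact tying an ML element's
# verification-phase argument into the safety case.
- type: ai-assurance-argument
  id: AA-PERC-01
  ai-element: pedestrian-detection model
  lifecycle-phase: verification
  argues-for: [G-PED-01]
```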
References
- `stpa.yaml`, `stpa-sec.yaml` (bridge targets)