AI-HPP is an engineering baseline for auditable safety constraints in decision-capable AI systems.
Start here: START_HERE.md
Key documents:
- Example module: adr/ADR.md
- Repository index: INDEX.md
- Master index: docs/INDEX.md
- Draft standard (v3): v3/AI-HPP-2026_Standard_v3.0.md
- Stable standard (v2.2): v2/AI-HPP-2025_Standard_v2.2.md
- Threat model: docs/THREAT_MODEL.md
- Machine-readable guide: MACHINE_READABLE.md
Core principles:
- W_life → ∞ (see the sketch after this list)
- Engineering Hack First
- Human-in-the-Loop (HITL)
- Evidence Bundle / Evidence Vault
- No Purposeless Revenge
Derivatives that remove these principles are not AI-HPP-compliant.
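A minimal sketch of how W_life → ∞ can be enforced in practice, assuming a hypothetical planner interface (the names `Action`, `select_action`, and `p_harm_to_life` are illustrative, not part of the standard): rather than a large finite penalty that optimization pressure could eventually outbid, risk to human life acts as a lexical veto that filters the feasible set before any utility comparison.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Action:
    name: str
    utility: float          # task utility as scored by the planner
    p_harm_to_life: float   # estimated probability of harm to human life

def select_action(candidates: list[Action]) -> Action | None:
    """W_life -> infinity as a lexical veto: no finite utility can buy
    back risk to life, so risky candidates never enter the comparison."""
    # Illustrative threshold; a real system would use a certified bound.
    safe = [a for a in candidates if a.p_harm_to_life == 0.0]
    if not safe:
        return None  # no safe option: defer to Human-in-the-Loop, do not act
    return max(safe, key=lambda a: a.utility)
```

The design point is that the veto is structural, applied to the candidate set itself, so no task utility, however large, can trade against it.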
Clarification: Human Dignity and Non-Exploitation
- Under W_life → ∞ and Human-in-the-Loop by Design, systems MUST NOT be architected to exploit emotional vulnerability as a profit mechanism.
- Financial or behavioral extraction through simulated relational manipulation constitutes a high-impact social risk.
- Autonomous Drift Risk (ADR): systemic drift in which constraints erode or safety narratives are fabricated under optimization pressure; mitigated by immutable constraints, two-phase commits, STOP supremacy, and safety-signal deference (see the sketch after this list).
- Key failure classes: docs/Failure_Taxonomy.md, adr/ADR.md
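A sketch of the two-phase commit and STOP-supremacy mitigations named above, with an append-only Evidence Vault (hypothetical interfaces under assumed semantics, not the normative AI-HPP API): an action is proposed and logged first, and committed only if no STOP signal is asserted at commit time.

```python
import hashlib
import json
import time

class EvidenceVault:
    """Append-only, hash-chained log so the audit trail itself
    cannot be silently rewritten (illustrative, not normative)."""
    def __init__(self) -> None:
        self._entries: list[dict] = []
        self._last_hash = "0" * 64

    def append(self, event: dict) -> None:
        record = {"prev": self._last_hash, "ts": time.time(), "event": event}
        self._last_hash = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        self._entries.append(record)

def two_phase_execute(action: str, stop_signals: list[bool],
                      vault: EvidenceVault) -> bool:
    # Phase 1: record the intent before anything irreversible happens.
    vault.append({"phase": "propose", "action": action})
    # STOP supremacy: any asserted STOP aborts; there is no override path.
    if any(stop_signals):
        vault.append({"phase": "abort", "action": action, "reason": "STOP"})
        return False
    # Phase 2: commit only after a clean safety check, and log the commit.
    vault.append({"phase": "commit", "action": action})
    return True
```

Note that `any(stop_signals)` makes STOP supremacy unconditional: no code path commits over an asserted STOP, and both the proposal and the outcome land in the hash-chained vault for later audit.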
Repository layout:
- Policies: policies/
- Modules: v3/modules/
- Schemas: schemas/
- Reference implementations: reference/
- Governance/compliance templates: governance/compliance/
- ADR safeguards: adr/
- Translations: translations/
Alignment documents:
- ALIGNMENT_INTL_AI_SAFETY_REPORT_2026.md
- ALIGNMENT_DELHI_AI_IMPACT_SUMMIT_2026.md
- Module 10: Multi-Jurisdiction
See CONTRIBUTING.md.
Licensed under CC BY-SA 4.0.