This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
The Artificial Intelligence Security Verification Standard (AISVS) provides developers, architects, and security professionals with a structured checklist for evaluating and verifying the security and ethical considerations of AI-driven applications. Modeled after existing OWASP standards (such as the ASVS for web applications), AISVS will define categories of requirements for areas including:
- Training Data Governance & Bias Management
- User Input Validation
- Model Lifecycle Management & Change Control
- Infrastructure, Configuration & Deployment Security
- Access Control & Identity
- Supply Chain Security for Models, Frameworks & Data
- Model Behavior, Output Control & Safety Assurance
- Memory, Embeddings & Vector Database Security
- Autonomous Orchestration & Agentic Action Security
- Adversarial Robustness & Attack Resistance
- Privacy Protection & Personal Data Management
- Monitoring, Logging & Anomaly Detection
- Human Oversight & Trust
Please log an issue if you find a bug or have an idea. We may then ask you to open a pull request based on the discussion in the issue.
The project is led by Jim Manico and Russ Memisyazici.