AI Application Security (LLM) #389

Merged (4 commits, Dec 6, 2023)
8 changes: 8 additions & 0 deletions CHANGELOG.md
@@ -10,6 +10,14 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/) and this p

### Changed

## [v1.12](https://github.com/bugcrowd/vulnerability-rating-taxonomy/compare/v1.11...v1.12) - 2023-12-18
### Added
- Application Level DoS - Excessive Resource Consumption - Injection (Prompt) - VARIES
- AI Application Security - Large Language Model (LLM) Security - Prompt Injection - P1
- AI Application Security - Large Language Model (LLM) Security - LLM Output Handling - P1
- AI Application Security - Large Language Model (LLM) Security - Training Data Poisoning - P1
- AI Application Security - Large Language Model (LLM) Security - Excessive Agency/Permission Manipulation - P2

## [v1.11](https://github.com/bugcrowd/vulnerability-rating-taxonomy/compare/v1.10...v1.11) - 2023-11-20
### Added
- Sensitive Data Exposure - Disclosure of Secrets - PII Leakage/Exposure: VARIES
30 changes: 30 additions & 0 deletions mappings/cvss_v3/cvss_v3.json
@@ -709,6 +709,10 @@
{
"id": "app_crash",
"cvss_v3": "AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:N"
},
{
"id": "excessive_resource_consumption",
"cvss_v3": "AV:N/AC:L/PR:N/UI:R/S:U/C:L/I:H/A:H"
}
]
},
@@ -1245,6 +1249,32 @@
{
"id": "indicators_of_compromise",
"cvss_v3": "AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:N"
},
{
"id": "ai_application_security",
"children": [
{
"id": "llm_security",
"children": [
{
"id": "prompt_injection",
"cvss_v3": "AV:N/AC:L/PR:N/UI:R/S:C/C:H/I:L/A:L"
},
{
"id": "llm_output_handling",
"cvss_v3": "AV:N/AC:L/PR:N/UI:R/S:C/C:L/I:H/A:L"
},
{
"id": "training_data_poisoning",
"cvss_v3": "AV:N/AC:H/PR:L/UI:N/S:C/C:H/I:H/A:H"
},
{
"id": "excessive_agency_permission_manipulation",
"cvss_v3": "AV:N/AC:L/PR:L/UI:R/S:C/C:H/I:H/A:H"
}
]
}
]
}
]
}
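
The priorities assigned in this PR track these vectors' CVSS v3.1 base scores, which can be reproduced from the published equations. A minimal Python sketch, assuming the FIRST.org v3.1 base-metric weights and round-up rule (an illustration of the scoring math, not Bugcrowd tooling):

```python
import math

# CVSS v3.1 base-metric weights (FIRST.org specification); the PR weights
# depend on whether Scope is changed.
WEIGHTS = {
    "AV": {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.2},
    "AC": {"L": 0.77, "H": 0.44},
    "PR": {"U": {"N": 0.85, "L": 0.62, "H": 0.27},
           "C": {"N": 0.85, "L": 0.68, "H": 0.5}},
    "UI": {"N": 0.85, "R": 0.62},
    "CIA": {"H": 0.56, "L": 0.22, "N": 0.0},
}

def base_score(vector: str) -> float:
    """CVSS v3.1 base score for a vector string like those in this file."""
    m = dict(part.split(":") for part in vector.split("/"))
    changed = m["S"] == "C"
    iss = 1 - ((1 - WEIGHTS["CIA"][m["C"]]) *
               (1 - WEIGHTS["CIA"][m["I"]]) *
               (1 - WEIGHTS["CIA"][m["A"]]))
    impact = (7.52 * (iss - 0.029) - 3.25 * (iss - 0.02) ** 15) if changed else 6.42 * iss
    exploitability = (8.22 * WEIGHTS["AV"][m["AV"]] * WEIGHTS["AC"][m["AC"]] *
                      WEIGHTS["PR"]["C" if changed else "U"][m["PR"]] *
                      WEIGHTS["UI"][m["UI"]])
    if impact <= 0:
        return 0.0
    raw = min((1.08 if changed else 1.0) * (impact + exploitability), 10.0)
    return math.ceil(raw * 10) / 10  # spec's Roundup: round up to one decimal

print(base_score("AV:N/AC:L/PR:N/UI:R/S:C/C:H/I:L/A:L"))  # prompt_injection -> 8.8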
4 changes: 4 additions & 0 deletions mappings/cwe/cwe.json
@@ -388,6 +388,10 @@
}
]
},
{
"id": "ai_application_security",
"cwe": null
},
{
"id": "lack_of_binary_hardening",
"cwe": ["CWE-693"]
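
Note that ai_application_security maps to null rather than a CWE list, so consumers resolving a VRT id against this file have to handle the absent-mapping case. A small sketch of such a lookup (the recursive helper and the assumed top-level "content" shape are illustrative, not an official client):

```python
import json

def find_cwe(nodes, vrt_id):
    """Depth-first search of the cwe.json tree; returns the entry's "cwe"
    value (a list such as ["CWE-693"], or None for an explicit null)."""
    for node in nodes:
        if node.get("id") == vrt_id:
            return node.get("cwe")
        if node.get("children"):
            try:
                return find_cwe(node["children"], vrt_id)
            except KeyError:
                pass  # not in this subtree; keep scanning siblings
    raise KeyError(vrt_id)

with open("mappings/cwe/cwe.json") as f:
    data = json.load(f)
nodes = data.get("content", data) if isinstance(data, dict) else data  # shape assumed

print(find_cwe(nodes, "lack_of_binary_hardening"))  # ["CWE-693"]
print(find_cwe(nodes, "ai_application_security"))   # None: no CWE assigned yet
```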
39 changes: 39 additions & 0 deletions mappings/remediation_advice/remediation_advice.json
@@ -1803,6 +1803,45 @@
}
]
},
{
"id": "ai_application_security",
"children": [
{
"id": "llm_security",
"children": [
{
"id": "prompt_injection",
"remediation_advice": "Implement robust input sanitization to prevent malicious or unintended prompt execution. Establish strict access controls and usage monitoring to detect and prevent unauthorized or anomalous interactions with the LLM. Regularly review and update the model's training data and algorithms to reduce vulnerabilities. Educate users and developers on safe interaction practices with AI systems.",
"references": [
"https://developer.nvidia.com/blog/securing-llm-systems-against-prompt-injection"
]
},
{
"id": "llm_output_handling",
"remediation_advice": "Implement output filtering and validation to ensure the LLM's responses are appropriate and secure. Use context-aware controls to manage how the LLM processes and responds to various inputs. Regularly audit and update the LLM to handle new types of outputs and emerging security threats. Train users on the potential risks associated with LLM outputs, particularly in sensitive applications.",
"references": [
"https://whylabs.ai/blog/posts/safeguard-monitor-large-language-model-llm-applications"
]
},
{
"id": "training_data_poisoning",
"remediation_advice": "Implement robust anomaly detection systems to identify and address poisoned data in real-time. Regularly retrain the LLM with clean, diverse, and representative datasets to correct any potential biases or vulnerabilities. Engage in continuous monitoring and auditing of the training process and data sources.",
"references": [
"https://owasp.org/www-project-top-10-for-large-language-model-applications/#:~:text=,security%2C%20accuracy%2C%20or%20ethical%20behavior",
"https://owasp.org/www-project-top-10-for-large-language-model-applications/Archive/0_1_vulns/Training_Data_Poisoning.html"
]
},
{
"id": "excessive_agency_permission_manipulation",
"remediation_advice": "Implement stringent access controls and define clear user permissions for interacting with the LLM. Employ regular audits and monitoring to detect and prevent unauthorized or excessive permission changes. Use role-based access control systems to manage user permissions effectively. Educate users and administrators about the risks of permission manipulation and establish protocols for safely managing access rights.",
"references": [
"https://owasp.org/www-project-ai-security-and-privacy-guide/#:~:text=,auditability%2C%20bias%20countermeasures%20and%20oversight"
]
}
]
}
]
},
{
"id": "indicators_of_compromise",
"remediation_advice": ""
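
The remediation entries above are deliberately implementation-agnostic; in practice, "input sanitization" plus "output filtering and validation" often takes the shape of pre- and post-processing gates around the model call. A minimal Python sketch, where query_llm and the deny-patterns are placeholders (pattern filters are bypassable and would be layered with the access controls and monitoring the advice also calls for):

```python
import re

# Illustrative deny-list only: pattern matching is easy to evade and is a
# first gate, not a complete defense.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all|previous|prior) instructions", re.I),
    re.compile(r"reveal (the )?system prompt", re.I),
]
SECRET_PATTERN = re.compile(r"(api[_-]?key|password)\s*[:=]\s*\S+", re.I)

def query_llm(prompt: str) -> str:
    """Placeholder for the actual model call."""
    raise NotImplementedError

def guarded_completion(user_input: str) -> str:
    # Input gate ("robust input sanitization"): reject obviously
    # adversarial instructions before they reach the model.
    for pattern in INJECTION_PATTERNS:
        if pattern.search(user_input):
            raise ValueError("input rejected by prompt-injection filter")
    raw = query_llm(user_input)
    # Output gate ("output filtering and validation"): redact secrets
    # before the response leaves the application boundary.
    return SECRET_PATTERN.sub("[redacted]", raw)
```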
53 changes: 52 additions & 1 deletion vulnerability-rating-taxonomy.json
@@ -1,6 +1,6 @@
{
"metadata": {
"release_date": "2023-11-20T00:00:00+00:00"
"release_date": "2023-12-18T00:00:00+00:00"
},
"content": [
{
@@ -1352,6 +1352,19 @@
"name": "Application-Level Denial-of-Service (DoS)",
"type": "category",
"children": [
{
"id": "excessive_resource_consumption",
"name": "Excessive Resource Consumption",
"type": "subcategory",
"children": [
{
"id": "injection_prompt",
"name": "Injection (Prompt)",
"type": "variant",
"priority": null
}
]
},
{
"id": "critical_impact_and_or_easy_difficulty",
"name": "Critical Impact and/or Easy Difficulty",
@@ -2432,6 +2445,44 @@
}
]
},
{
"id": "ai_application_security",
"name": "AI Application Security",
"type": "category",
"children": [
{
"id": "llm_security",
"name": "Large Language Model (LLM) Security",
"type": "subcategory",
"children":[
{
"id": "prompt_injection",
"name": "Prompt Injection",
"type": "variant",
"priority": 1
},
{
"id": "llm_output_handling",
"name": "LLM Output Handling",
"type": "variant",
"priority": 1
},
{
"id": "training_data_poisoning",
"name": "Training Data Poisoning",
"type": "variant",
"priority": 1
},
{
"id": "excessive_agency_permission_manipulation",
"name": "Excessive Agency/Permission Manipulation",
"type": "variant",
"priority": 2
}
]
}
]
},
{
"id": "indicators_of_compromise",
"name": "Indicators of Compromise",
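
With the taxonomy entries in place, downstream tooling can resolve an id path such as ai_application_security > llm_security > prompt_injection to its baseline priority by walking vulnerability-rating-taxonomy.json. A minimal sketch, assuming only the id/children/priority shape visible in this diff:

```python
import json

def priority_of(path, nodes):
    """Follow a list of ids down the VRT tree; returns the leaf's priority
    (an int such as 1 for P1, or None where the priority varies)."""
    head, *rest = path
    for node in nodes:
        if node["id"] == head:
            if not rest:
                return node.get("priority")
            return priority_of(rest, node.get("children", []))
    raise KeyError(head)

with open("vulnerability-rating-taxonomy.json") as f:
    vrt = json.load(f)

path = ["ai_application_security", "llm_security", "prompt_injection"]
print(priority_of(path, vrt["content"]))  # 1, i.e. the P1 baseline added here
```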