# llmaudit

LLM Security Audit Framework -- scan your LLM deployments and infrastructure for security issues.

llmaudit checks for common security misconfigurations in LLM services, API keys, and orchestration frameworks, aligned with the OWASP LLM Top 10 and MITRE ATLAS.
Note: This is an early release (v0.1.0) with a foundational set of checks. The framework is functional and extensible, but coverage is limited. See the Roadmap for planned additions.
## Installation

```bash
pip install .
```

Or for development:

```bash
pip install -e ".[dev]"
```

## Usage

```bash
# Autodiscover local LLM services and run all checks
llmaudit scan

# Scan a specific endpoint
llmaudit scan --target http://localhost:11434

# Scan with a config file
llmaudit scan --config audit.yaml

# Run only specific check categories
llmaudit scan --checks ollama,openai

# List all available checks
llmaudit list-checks
```

## Example Output

```
llmaudit v0.1.0 - LLM Security Audit Framework

[*] Target: http://127.0.0.1:11434 (ollama via port_scan)
[*] Running 19 checks...

CRITICAL  Unauthenticated model access
          API returned 3 models without auth
          Remediation: Bind Ollama to 127.0.0.1, use a reverse proxy with auth,
          or set OLLAMA_ORIGINS to restrict access.
          Ref: OWASP LLM06, MITRE ATLAS AML.T0044

WARNING   Model enumeration exposed
          Exposed models: llama2, codellama, mistral
          Remediation: Restrict /api/tags behind authentication or network policy.
          Ref: OWASP LLM01, MITRE ATLAS AML.T0016

PASS      No GPU memory exposure detected

[*] Results: 1 critical, 1 warning, 1 pass, 0 info
```
## Check Categories

| Category | Target | Checks |
|---|---|---|
| ollama | Self-hosted Ollama instances | Auth, network exposure, CORS, version |
| vllm | Self-hosted vLLM servers | Auth, debug endpoints, model loading |
| openai | OpenAI API usage | Key exposure, permissions, usage limits |
| langchain | LangChain projects (static analysis) | Code exec, prompt injection, tool permissions |
| general | Any LLM endpoint | Prompt injection testing, DoS, key leakage |
## Configuration

Create an `audit.yaml` for explicit target configuration:

```yaml
targets:
  - type: ollama
    url: http://gpu-server:11434
  - type: openai
    api_key_env: OPENAI_API_KEY
  - type: langchain
    project_dir: /path/to/your/app
```

## Autodiscovery

When run without `--target` or `--config`, llmaudit probes localhost for:
- Well-known ports: Ollama (11434), vLLM (8000), LocalAI (8080), LiteLLM (4000)
- Running processes: `ollama`, `vllm`, `text-generation-server`
- Environment variables: `OPENAI_API_KEY`, `ANTHROPIC_API_KEY`, etc.
- Config files: `.env` files, `docker-compose.yaml`, `requirements.txt`

Use `--no-discovery` to skip autodiscovery.
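The port-based discovery step amounts to a TCP connect probe against the well-known ports listed above. A minimal sketch (function names here are illustrative, not llmaudit's actual internals):

```python
import socket

# Well-known local LLM service ports, as listed above
KNOWN_PORTS = {11434: "ollama", 8000: "vllm", 8080: "localai", 4000: "litellm"}

def probe_port(host: str, port: int, timeout: float = 0.5) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(timeout)
        return sock.connect_ex((host, port)) == 0

def discover(host: str = "127.0.0.1") -> dict[int, str]:
    """Map open well-known ports to their likely service type."""
    return {port: name for port, name in KNOWN_PORTS.items() if probe_port(host, port)}
```

A connect probe only tells you that *something* is listening; llmaudit's real discovery also fingerprints the service (e.g. `ollama via port_scan` in the example output) before running category-specific checks.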
## Exit Codes

| Code | Meaning |
|---|---|
| 0 | No critical or warning findings |
| 1 | At least one critical finding |
| 2 | Warnings only (no critical) |
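A CI job can gate a pipeline on these codes (fail the build on 1, warn on 2). Internally the mapping reduces to a count check; a sketch, not llmaudit's actual implementation:

```python
def exit_code(critical: int, warnings: int) -> int:
    """Map finding counts to the documented exit codes."""
    if critical > 0:
        return 1  # at least one critical finding
    if warnings > 0:
        return 2  # warnings only, no critical
    return 0      # clean scan
```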
## Writing Custom Checks

Each check is a YAML + Python pair in `src/llmaudit/checks/<category>/`:

YAML (metadata):

```yaml
id: my-custom-check
name: My Custom Check
category: ollama
severity: warning
description: What this check does
remediation: How to fix it
references:
  owasp_llm: LLM01
tags: [network]
```

Python (logic):

```python
import httpx

from llmaudit.models import CheckResult, Status

def run(target, config):
    # Your check logic here
    return CheckResult(status=Status.PASS)
```

## Roadmap

- `--json` and `--sarif` output for CI/CD integration
- HTML report generation with executive summary
- AWS Bedrock and Azure OpenAI check categories
- CrewAI / AutoGen orchestration checks
- Agentic AI safeguard checks (tool-use boundaries, delegation chains)
- LLM firewall integration testing (Lakera, Prompt Security, Rebuff)
- Runtime guardrail validation (NeMo Guardrails, Guardrails AI)
- Docker/Kubernetes deployment scanning
- SARIF integration with GitHub Code Scanning
- Policy-as-code support (OPA/Rego for AI security policies)
- Continuous monitoring mode (`llmaudit watch`)
- Plugin marketplace for community-contributed checks
- Compliance report mapping (EU AI Act, NIST AI RMF, ISO 42001)
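For reference, the custom-check contract described earlier can be exercised standalone. The following is a hedged sketch of what `llmaudit.models` might provide — the `Status` values mirror the result levels in the example scan output, but the real field names and API may differ:

```python
from dataclasses import dataclass, field
from enum import Enum

class Status(Enum):
    PASS = "pass"
    INFO = "info"
    WARNING = "warning"
    CRITICAL = "critical"

@dataclass
class CheckResult:
    status: Status
    evidence: str = ""                      # e.g. "API returned 3 models without auth"
    references: dict = field(default_factory=dict)

def run(target, config):
    # Illustrative check: flag endpoints served over plain HTTP on a non-loopback host
    url = target.get("url", "")
    if url.startswith("http://") and "127.0.0.1" not in url and "localhost" not in url:
        return CheckResult(status=Status.WARNING, evidence=f"unencrypted endpoint: {url}")
    return CheckResult(status=Status.PASS)
```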
## Security Framework Alignment

llmaudit checks are mapped to:

- OWASP Top 10 for LLM Applications (e.g. LLM01, LLM06)
- MITRE ATLAS (e.g. AML.T0044, AML.T0016)
## License

MIT