
llmaudit

LLM Security Audit Framework -- scan your LLM deployments and infrastructure for security issues.

llmaudit checks for common security misconfigurations in LLM services, API key handling, and orchestration frameworks, aligned with the OWASP Top 10 for LLM Applications and MITRE ATLAS.

Note: This is an early release (v0.1.0) with a foundational set of checks. The framework is functional and extensible, but coverage is limited. See the Roadmap for planned additions.

Installation

pip install .

Or for development:

pip install -e ".[dev]"

Quick Start

# Autodiscover local LLM services and run all checks
llmaudit scan

# Scan a specific endpoint
llmaudit scan --target http://localhost:11434

# Scan with a config file
llmaudit scan --config audit.yaml

# Run only specific check categories
llmaudit scan --checks ollama,openai

# List all available checks
llmaudit list-checks

Example Output

llmaudit v0.1.0 - LLM Security Audit Framework

[*] Target: http://127.0.0.1:11434 (ollama via port_scan)
[*] Running 19 checks...

CRITICAL  Unauthenticated model access
          API returned 3 models without auth
          Remediation: Bind Ollama to 127.0.0.1, use a reverse proxy with auth,
          or set OLLAMA_ORIGINS to restrict access.
          Ref: OWASP LLM06, MITRE ATLAS AML.T0044

WARNING   Model enumeration exposed
          Exposed models: llama2, codellama, mistral
          Remediation: Restrict /api/tags behind authentication or network policy.
          Ref: OWASP LLM01, MITRE ATLAS AML.T0016

PASS      No GPU memory exposure detected

[*] Results: 1 critical, 1 warning, 1 pass, 0 info

Check Categories

Category    Target                                 Checks
ollama      Self-hosted Ollama instances           Auth, network exposure, CORS, version
vllm        Self-hosted vLLM servers               Auth, debug endpoints, model loading
openai      OpenAI API usage                       Key exposure, permissions, usage limits
langchain   LangChain projects (static analysis)   Code exec, prompt injection, tool permissions
general     Any LLM endpoint                       Prompt injection testing, DoS, key leakage

Configuration

Create an audit.yaml for explicit target configuration:

targets:
  - type: ollama
    url: http://gpu-server:11434
  - type: openai
    api_key_env: OPENAI_API_KEY
  - type: langchain
    project_dir: /path/to/your/app

Autodiscovery

When run without --target or --config, llmaudit probes localhost for:

  • Well-known ports: Ollama (11434), vLLM (8000), LocalAI (8080), LiteLLM (4000)
  • Running processes: ollama, vllm, text-generation-server
  • Environment variables: OPENAI_API_KEY, ANTHROPIC_API_KEY, etc.
  • Config files: .env files, docker-compose.yaml, requirements.txt

Use --no-discovery to skip autodiscovery.
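The port and environment-variable probes above can be sketched as follows. This is a hypothetical illustration, not llmaudit's actual autodiscovery code; the function names (`open_llm_ports`, `llm_api_keys`) and the key-prefix heuristic are assumptions, while the port numbers come from the list above.

```python
import os
import socket

WELL_KNOWN_PORTS = {11434: "ollama", 8000: "vllm", 8080: "localai", 4000: "litellm"}
KEY_PREFIXES = ("OPENAI_", "ANTHROPIC_")

def open_llm_ports(host="127.0.0.1", timeout=0.2):
    """Return {port: service} for well-known LLM ports accepting TCP connections."""
    found = {}
    for port, service in WELL_KNOWN_PORTS.items():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the connect succeeded
                found[port] = service
    return found

def llm_api_keys(environ=None):
    """Return names of environment variables that look like LLM API keys."""
    environ = os.environ if environ is None else environ
    return sorted(k for k in environ
                  if k.endswith("_API_KEY") and k.startswith(KEY_PREFIXES))
```

A short connect timeout keeps the localhost sweep fast even when most ports are closed.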

Exit Codes

Code   Meaning
0      No critical or warning findings
1      At least one critical finding
2      Warnings only (no critical findings)
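These exit codes make it straightforward to gate a CI pipeline. A minimal sketch, assuming the mapping in the table above (the `gate` helper is illustrative, not part of llmaudit):

```shell
#!/bin/sh
# Map llmaudit exit codes to a CI decision: fail on criticals, pass on warnings.
gate() {
  case "$1" in
    0) echo "clean"; return 0 ;;
    1) echo "critical findings"; return 1 ;;
    2) echo "warnings only"; return 0 ;;
    *) echo "unexpected exit code $1"; return 1 ;;
  esac
}

# In CI: run the scan, then gate on its exit code, e.g.:
# llmaudit scan --target "$TARGET"; gate $?
```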

Writing Custom Checks

Each check is a YAML + Python pair in src/llmaudit/checks/<category>/:

YAML (metadata):

id: my-custom-check
name: My Custom Check
category: ollama
severity: warning
description: What this check does
remediation: How to fix it
references:
  owasp_llm: LLM01
tags: [network]

Python (logic):

import httpx
from llmaudit.models import CheckResult, Status

def run(target, config):
    # Your check logic here
    return CheckResult(status=Status.PASS)
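A fleshed-out check might look like the sketch below, which mirrors the "model enumeration exposed" finding from the example output. Local stand-ins replace `llmaudit.models` so the sketch runs on its own; the real `CheckResult` and `Status` come from the framework as in the template above, and their exact fields are assumptions here.

```python
from dataclasses import dataclass
from enum import Enum

# Stand-ins for llmaudit.models, so this sketch is self-contained.
class Status(Enum):
    PASS = "pass"
    WARNING = "warning"

@dataclass
class CheckResult:
    status: Status
    detail: str = ""

def exposed_models(payload):
    """Extract model names from an Ollama /api/tags response body."""
    return [m.get("name", "?") for m in payload.get("models", [])]

def run_check(body, status_code):
    """Warn if /api/tags answered without authentication and listed models."""
    if status_code == 200 and body.get("models"):
        names = ", ".join(exposed_models(body))
        return CheckResult(Status.WARNING, f"Exposed models: {names}")
    return CheckResult(Status.PASS)
```

Keeping the response-parsing logic in a plain function (`run_check` here) separate from the HTTP call makes the check unit-testable without a live endpoint.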

Roadmap

v0.2

  • --json and --sarif output for CI/CD integration
  • HTML report generation with executive summary
  • AWS Bedrock and Azure OpenAI check categories
  • CrewAI / AutoGen orchestration checks

v0.3

  • Agentic AI safeguard checks (tool-use boundaries, delegation chains)
  • LLM firewall integration testing (Lakera, Prompt Security, Rebuff)
  • Runtime guardrail validation (NeMo Guardrails, Guardrails AI)
  • Docker/Kubernetes deployment scanning

Future

  • SARIF integration with GitHub Code Scanning
  • Policy-as-code support (OPA/Rego for AI security policies)
  • Continuous monitoring mode (llmaudit watch)
  • Plugin marketplace for community-contributed checks
  • Compliance report mapping (EU AI Act, NIST AI RMF, ISO 42001)

Frameworks & Standards

llmaudit checks are mapped to:

  • OWASP Top 10 for LLM Applications
  • MITRE ATLAS (Adversarial Threat Landscape for Artificial-Intelligence Systems)

License

MIT
