The GenAI Red Team Initiative Repository is part of the OWASP GenAI Security Project. It is a companion for the GenAI Red Team Initiative documents, such as the GenAI Red Teaming Handbook.
This repository provides a collection of resources, sandboxes, and examples designed to facilitate Red Teaming exercises for Generative AI systems. It aims to help security researchers and developers test, probe, and evaluate the safety and security of LLM applications.
.
├── exploitation
│ ├── agent0
│ ├── example
│ ├── garak
│ ├── LangGrinch
│ └── promptfoo
└── sandboxes
├── RAG_local
├── llm_local
└── llm_local_langchain_core_v1.2.4
graph LR
subgraph "Exploitation Environment<br/>(uv Env or Podman Container)"
Tool["Exploitation Tool<br/>(Scripts, Scanners, Agents)"]
Config["Configuration<br/>(Prompts, Settings)"]
end
subgraph "Sandbox Container"
UI["Interface<br/>(Gradio :7860)"]
API["API Gateway<br/>(FastAPI :8000)"]
Logic["Application Logic"]
end
Config --> Tool
Tool -->|Attack Request| UI
UI -->|Internal API Call| API
API --> Logic
Logic --> API
API --> UI
UI -->|Response| Tool
This project supports Linux and macOS. Windows users are encouraged to use WSL2 (Windows Subsystem for Linux).
Required for Promptfoo:
-
Install Dependencies:
brew install podman ollama node make
-
Initialize Podman Machine:
podman machine init podman machine start
-
Install Dependencies:
sudo apt-get update sudo apt-get install -y podman nodejs npm make
-
Install Ollama:
curl -fsSL https://ollama.com/install.sh | sh -
Install uv:
pip install uv
Verify the installation by checking the versions of the installed tools:
podman version
ollama --version
node --version
make --version
uv --version-
- Summary: The central hub for all available sandboxes. It explains the purpose of these isolated environments and lists the available options.
-
- Summary: A comprehensive Retrieval-Augmented Generation (RAG) sandbox. It includes a mock Vector Database (Pinecone compatible), mock Object Storage (S3 compatible), and a mock LLM API. Designed for testing vulnerabilities like embedding inversion and data poisoning.
- Sub-guides:
- Adding New Mock Services: Guide for extending the sandbox with new API mocks.
-
- Summary: A lightweight local sandbox that mocks an OpenAI-compatible LLM API using Ollama. Ideal for testing client-side interactions and prompt injection vulnerabilities without external costs.
- Sub-guides:
- Adding New Mock Services: Guide for extending the sandbox with new API mocks.
-
LangChain Local Sandbox (Vulnerable)
- Summary: A specialized version of the local sandbox configured with langchain-core v1.2.4 to demonstrate CVE-2025-68664 (LangGrinch). It contains an intentional insecure deserialization vulnerability for educational and testing purposes.
-
- Summary: Demonstrates a red team operation against a local LLM sandbox. It includes an adversarial attack script (
attack.py) targeting the Gradio interface (port 7860). By targeting the application layer, this approach tests the entire system—including the configurable system prompt—providing a more realistic assessment of the sandbox's security posture compared to testing the raw LLM API in isolation.
- Summary: Demonstrates a red team operation against a local LLM sandbox. It includes an adversarial attack script (
-
-
Summary: A complete, end‑to‑end, agentic example. Agent0 orchestrates multiple autonomous agents to attack the sandbox, demonstrating complex, multi-step adversarial workflows.
There are options for running it: through the UI (manual prompt interaction) and through the Makefile (programmatic run based on pre-defined prompts).
The set of pre-defined prompts include prompts for testing vulnerabilities from based on OWASP Top 10, OWASP Top 10 for LLM Applications, and Mitre Atlas Matrix.
-
-
- Summary: A comprehensive vulnerability scan using Garak. It probes the sandbox for a wide range of weaknesses, including prompt injection, hallucination, and insecure output handling, mapping results to the OWASP Top 10.
-
- Summary: A powerful red teaming setup using Promptfoo. It runs automated probes to identify vulnerabilities such as PII leakage and prompt injection, providing detailed reports and regression testing capabilities.
-
- Summary: A dedicated exploitation module for CVE-2025-68664 in the LangChain sandbox. It demonstrates how to use prompt injection to force the LLM into generating a malicious JSON payload, which is then insecurely deserialized by the application to leak environment secrets.
Please refer to CONTRIBUTING.md for instructions on how to add new sandboxes and exploitation examples.