Skip to content

OWASP/PwnzzAI

Repository files navigation

🍕 Welcome to PwnzzAI(/pəʊnzɑː/) Shop – The Vulnerable Pizza Shop! 💻

PwnzzAI Shop

At PwnzzAI Shop, every slice serves up a helping of holistic AI security instruction. This hands-on learning platform prepares professionals to understand the protection of AI technologies broadly delivered through an engaging, practical pizza shop scenario.

Just as an incorrect ingredient can spoil a pizza, a flawed prompt or inadequate architecture can leave AI technologies vulnerable to severe threats, including information breaches, intellectual property theft, or illegitimate access.

Here, you'll explore practical examples of how vulnerabilities are created, exploited, and mitigated. You need to login as alice/alice or bob/bob for some pages. Grab a slice, dig in, and discover how delicious learning about AI security can be.

Table of Content

About

PwnzzAI Shop represents a hands-on learning platform purposefully built for instruction on AI technology protection. This intentionally insecure Flask-based web application showcases an extensive array of AI security weaknesses via an immersive pizza shop experience.

Founding partner: OWASP AI Exchange

Scope and Learning Goals

PwnzzAI is designed as a comprehensive, evolving learning platform aligned with the AI Exchange risk taxonomy Its scope expands progressively in step with OWASP AI Exchange security analysis frameworks and defensive guidance, ensuring long-term relevance as AI security practices mature. The project currently incorporates the OWASP Top 10 for LLMs, with an architecture intentionally mapped to AI Exchange risk classifications. Over time, PwnzzAI aims to mature into a structured learning ecosystem that supports:

  • AI protection and defense training, including self-directed learning paths

  • Hands-on security training programs with practical, scenario-driven exercises

  • Team-based learning initiatives for security engineers and practitioners

  • End-to-end AI security education, spanning design, development, deployment, and operations

Setup Instructions

Choose one of these 3 ways to run PwnzzAI:

  1. Docker with both images (PwnzzAI + Ollama)
  2. Docker with your own local/remote Ollama and only the PwnzzAI image
  3. Run the source code yourself

Before You Start (All Options)

  1. Install Docker Desktop from https://www.docker.com/products/docker-desktop.
  2. Open Docker Desktop and wait until it says Docker is running.
  3. Install Git if you do not already have it.
  4. Clone this repository and enter it:
git clone <REPO_URL>
cd PwnzzAI

Option 1: Docker (PwnzzAI + Ollama)

Use this option if you want Docker to run both the PwnzzAI app and Ollama for you.

  1. Start both containers:
docker compose up -d
  1. Verify both services are running:
docker compose ps
  1. Open the app in your browser:
http://localhost:8080
  1. In the app, go to the Basics page and run Ollama setup to pull models.

  2. Follow logs if needed:

# App logs
docker compose logs -f pwnzzai-app

# Ollama logs (optional)
docker compose logs -f ollama
  1. Stop everything when done:
docker compose down
  1. Optional full reset (removes saved Ollama models too):
docker compose down -v

If you publish the app image under another registry path, override the image name when starting:

PWNZZAI_IMAGE=ghcr.io/your-org/pwnzzai:latest docker compose up -d

Option 2: Docker (Your Own Ollama + PwnzzAI Image)

Use this option if Ollama is already running somewhere else and you only want to run PwnzzAI in Docker.

If you run Ollama on WSL and PwnzzAI in Docker, see OLLAMA_CONNECTION_TROUBLESHOOTING.md for connectivity fixes (Connection refused, host.docker.internal, binding, and env validation).

  1. Keep your Ollama service running.

  2. If your Ollama runs on a remote machine, set OLLAMA_HOST first.

Linux/macOS:

export OLLAMA_HOST=http://your-ollama-server:11434

Windows PowerShell:

$env:OLLAMA_HOST="http://your-ollama-server:11434"
  1. Start PwnzzAI using the external Ollama compose file:

Linux/macOS:

export OLLAMA_HOST=http://your-ollama-server:11434

Windows PowerShell:

$env:OLLAMA_HOST="http://your-ollama-server:11434"
  1. Visit http://localhost:8080 in your browser to see the application. Start from the Basic page and setup your lab.

  2. Follow app logs if needed:

docker compose -f docker-compose.external-ollama.yml logs -f pwnzzai-app
  1. Stop it when done:
docker compose -f docker-compose.external-ollama.yml down

Option 3: Run Source Code Yourself

Use this option if you want to run Python directly (without Docker for the app).

  1. Install Python 3.11.
  2. Create and activate a virtual environment.

Linux/macOS:

python -m venv venv
source venv/bin/activate

Windows PowerShell:

python -m venv venv
.\venv\Scripts\Activate.ps1
  1. Install dependencies:
python -m pip install --upgrade pip
pip install -r requirements.txt
  1. Make sure Ollama is available: http://localhost:11434 or another endpoint via OLLAMA_HOST.

  2. Run the app:

flask run --host=0.0.0.0 --port=8080
  1. Open:
http://localhost:8080

If you need more details, watch this walkthrough , which shows step-by-step setup.

Features

  • Pizza Service: Browse appetizing pizzas, explore customer feedback, contribute comments, and place virtual orders!
  • Extensive Weakness Showcases: Active demonstrations spanning OWASP LLM Top 10 and AI Exchange threat categories including model input threats, development-time threats, runtime security threats, and system-level AI risks.
  • Learning-Focused Design: Discover exploitation methods and defensive approaches via transparent descriptions, crafted for organized instruction programs and aligned with AI Exchange threat taxonomy.
  • Multi-Provider Implementation: Every demonstration operates with both commercial OpenAI and complimentary free Ollama frameworks, ensuring accessibility for all participants.
  • Standards-Based Content: Educational materials synchronized with OWASP AI Exchange, which serves as the foundational basis for emerging AI security standards such as ISO/IEC 27090 and prEN 18282.

AI Security Coverage

This platform showcases an extensive spectrum of AI protection challenges, aligned with the OWASP AI Exchange comprehensive threat taxonomy. Current implementation covers the OWASP Top 10 for LLM Applications and core AI Exchange threat categories, with an extensible architecture built for future AI risk coverage.

Learning Framework

Every weakness demonstrated in PwnzzAI features:

  1. Active demonstrations illustrating the vulnerability's mechanics
  2. Exploitation scenarios providing interactive attack exercises
  3. Hardened alternatives detailing defensive approaches

Implemented Vulnerabilities:

According to OWASP AI Exchange threats and OWASP Top 10 for LLM Applications 2025:

  1. Prompt Injection AI Exchange: Direct Prompt Injection, Indirect Prompt Injection, Top 10: LLM-01.

    • Direct Prompt Injection: Bypass system instructions through crafted user inputs to manipulate model behavior.
    • Indirect Prompt Injection: Exploit external data sources to inject malicious instructions and manipulate model responses.
    • AI Exchange Context: Model input threats where attackers craft instructions to deceive the model.
  2. Data Disclosure AI Exchange: Disclosure of sensitive data in model output , Top 10: LLM-02

  • Extraction of training data, system information, and credentials through model outputs.
  • AI Exchange Context: The output of the model may contain sensitive data from the training set or input (which may include augmentation data).
  1. Supply Chain Vulnerabilities AI Exchange: Supply Chain Model Poisoning, Top 10: LLM 03
  • Third-party model and plugin security risks from compromised suppliers.
  • AI Exchange Context: Development-time supply chain threats including poisoned pre-trained models, corrupted data sources, and compromised model hosting.
  1. Data and Model Poisoning AI Exchange: Data Poisoning, Model Poisoning, Top 10: LLM04

    • Demonstrate how malicious training data affects model responses and behavior.
    • AI Exchange Context: Development-time threats where training data manipulation or direct model parameter tampering leads to unwanted model behavior.
  2. Improper Output Handling AI Exchange: Output Contains Conventional Injection, Top 10:LLM05

    • Unvalidated LLM outputs leading to XSS and other injection attacks in downstream systems.
    • AI Exchange Context: Textual model output may contain conventional injection attacks such as XSS-Cross site scripting, which can create a vulnerability when processed (e.g., shown on a website, execute a command).
  3. Excessive Agency AI Exchange: Least Model Privilege, Oversight, Top 10:LLM06

    • Over-privileged LLM operations and autonomous actions without appropriate constraints.
    • AI Exchange Context: Impact limitation controls to restrict unwanted model behavior through privilege management and human oversight, particularly critical for agentic AI systems.
  4. System Prompt Leakage AI Exchange: Sensitive Information Disclosure, Top 10:LLM07

    • Extraction of secret information embedded in system prompts through manipulation techniques.
    • AI Exchange Context: Model use threats where attackers extract confidential instructions or configuration data through crafted inputs.
  5. Vector and Embedding Weakness AI Exchange: Direct augmentation data leak, Top 10:LLM08

    • Unauthorized access to embeddings containing sensitive information in RAG systems or vector databases.
    • AI Exchange Context: Augmentation data threats where retrieval repositories or embedding stores leak confidential information.
  6. Misinformation AI Exchange: Augmentation data manipulation, Top 10:LLM09

    • Critical decision-making without human oversight leading to harmful misinformation or hallucinations.
    • AI Exchange Context: Unwanted model behavior risks requiring oversight controls, explainability, and continuous monitoring to ensure accuracy and safety.
  7. Unbounded Consumption AI Exchange: AI Resource Exhaustion, Top 10:LLM10

  • Resource exhaustion attacks and rate limiting bypass techniques causing denial of service.
  • AI Exchange Context: Model availability threats through resource depletion via excessive or crafted inputs.

and,

  1. Model Theft (AI Exchange: Model Exfilteration, Earlier versions of Top 10
    • Model extraction and intellectual property theft through API abuse and input-output harvesting.
    • AI Exchange Context: Attacker collects inputs and outputs of an existing model and uses those combinations to train a new model, in order to replicate the original model

Model Support:

  • OpenAI Models: GPT-3.5/GPT-4 demonstrations via OpenAI API.
  • Ollama Models: Free models, such as Mistral 7B and LLaMA 3.2 1B, are accessible through Ollama.

⚠️ Educational Purpose Only: This application contains intentional security vulnerabilities. Do not use in production environments.

About

No description, website, or topics provided.

Resources

License

Contributing

Stars

Watchers

Forks

Releases

No releases published

Packages

 
 
 

Contributors