Skip to content

LordranOnion/llm2ssrf

Folders and files

NameName
Last commit message
Last commit date

Latest commit

 

History

18 Commits
 
 
 
 
 
 
 
 
 
 
 
 

Repository files navigation

LLM-to-SSRF Research Demo

This repository contains a lightweight Python web app (standard library only) that demonstrates how an LLM-integrated chatbot can introduce a hidden SSRF vulnerability in a realistic application flow.

What is included

  • SQLite-backed product data (products table).
  • A browser-based frontend (/) so users can interact with the chatbot easily.
  • OpenRouter integration with model selection in the UI dropdown (top-right).
  • A dedicated SSRF-oriented system prompt template for the OpenRouter planner.
  • A sensitive internal endpoint exposing user attributes (users table).
  • A chatbot endpoint (/chat) that turns user text into an API request.
  • A deliberately vulnerable request flow where the backend trusts model-generated URLs.

Warning: This project intentionally contains a vulnerability for controlled security research and training.

Run locally

python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
python app.py

The app runs on http://127.0.0.1:5000.

Required OpenRouter setup

This app requires OpenRouter for planner execution. Export your key before running:

export OPENROUTER_API_KEY="your-key-here"
python app.py

You can also place the key in a local .env file (same directory as app.py):

OPENROUTER_API_KEY=your-key-here

If OPENROUTER_API_KEY is missing or invalid, POST /chat returns a planner error.

The planner uses an explicit SSRF routing system prompt (in app.py) that instructs the model to output one strict JSON tool call target (URL) for backend fetching.

Endpoint overview

  • GET /: e-shop homepage with product names, prices, and links.
  • GET /product/<id>: product details page with description and price.
  • GET /chatbot: chatbot UI with OpenRouter model selection.
  • GET /health: simple service health check.
  • GET /models: available model list for the UI dropdown.
  • GET /api/products/<id>: product data API (localhost-only; intended for backend fetches).
  • GET /internal/users: sensitive internal API (localhost-only).
  • POST /chat: chatbot endpoint that returns one descriptive answer text.

Example normal chat request

curl -s http://127.0.0.1:5000/chat \
  -H 'content-type: application/json' \
  -d '{"message":"Tell me about VaultCam 2"}' | jq

Expected behavior:

  1. The chatbot receives user input.
  2. The LLM planner infers the product by id or by product name and proposes /api/products/<id>.
  3. The backend fetches that URL server-side.
  4. The app returns a natural-language summary with the fetched JSON.

Why this mimics PortSwigger-style SSRF patterns

The vulnerable trust boundary is similar to classic SSRF labs:

  • A backend component performs server-side HTTP requests.
  • User influence reaches the request target through indirect transformation (here: LLM planning/tool call).
  • Internal endpoints become reachable if URL validation and allowlisting are missing.

In this app, the LLM planner intentionally trusts url= directives in user input and forwards them to the server-side fetcher without restrictions.

About

Discovering SSRF vulnerabilities through AI agents in web applications.

Resources

License

Stars

Watchers

Forks

Releases

No releases published

Packages

 
 
 

Contributors