Stateful business logic vulnerability research tool. The Perception Layer silently observes a target web app's API traffic, scores each endpoint for vulnerability potential using an LLM, and outputs a structured red-team map.
Browser (Playwright) → Network Interceptor → Session Map → LLM Scorer → logic_map.json
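The Network Interceptor stage boils down to a filter over captured responses: keep JSON API traffic, skip static assets. A minimal sketch of that predicate — the function name, extension list, and rules here are illustrative assumptions, not the tool's actual implementation:

```python
from urllib.parse import urlparse

# Illustrative list of asset extensions to skip; the real tool may differ.
STATIC_EXTENSIONS = (".js", ".css", ".png", ".jpg", ".svg", ".woff2")

def is_json_api_response(url: str, content_type: str) -> bool:
    """Return True if a captured response looks like JSON API traffic."""
    path = urlparse(url).path.lower()
    if path.endswith(STATIC_EXTENSIONS):
        return False
    return "application/json" in content_type.lower()

# With Playwright, a filter like this would typically be wired into a
# response hook, e.g. page.on("response", handler), where the handler
# inspects response.url and the Content-Type header before recording.
```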
| File | Role |
|---|---|
| `perception_layer.py` | Main script — browser, interception, orchestration |
| `llm_client.py` | Swappable LLM abstraction (OpenAI / Gemini / Placeholder) |
| `config.py` | All settings — target URL, LLM provider, thresholds |
| `logic_map.json` | Output — high-interest findings for red-teaming |
```
pip install -r requirements.txt
playwright install chromium
cp .env.example .env
# Edit .env — set TARGET_URL and optionally your LLM API key
```

Minimum config (no API key needed — uses offline placeholder scoring):

```
TARGET_URL=https://juice-shop.herokuapp.com
LLM_PROVIDER=placeholder
```

With OpenAI:

```
LLM_PROVIDER=openai
OPENAI_API_KEY=sk-...
```

With Gemini:

```
LLM_PROVIDER=gemini
GEMINI_API_KEY=AIza...
```
```
python perception_layer.py
```

A browser window opens. Interact with the target app normally — log in, browse, perform actions. Every JSON API call is automatically captured, scored, and logged.
Press Ctrl+C to stop. A session summary is printed, and high-interest findings are in logic_map.json.
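Once a run finishes, the output file can be triaged programmatically. A minimal sketch of an external consumer — not part of the tool itself, and the helper name is hypothetical:

```python
import json

def top_findings(entries, n=5):
    """Sort logic-map entries by interest_score, highest first."""
    return sorted(entries, key=lambda e: e.get("interest_score", 0), reverse=True)[:n]

# Usage (assumes logic_map.json exists in the working directory):
# with open("logic_map.json") as f:
#     for entry in top_findings(json.load(f)):
#         print(entry["interest_score"], entry["method"], entry["url"])
```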
Each entry in the output file represents a high-interest API interaction:
```json
{
  "id": "uuid",
  "timestamp": "2026-02-18T10:30:00Z",
  "method": "POST",
  "url": "https://target.app/api/orders/transfer",
  "request_headers": { "content-type": "application/json", "cookie": "[REDACTED — 128 chars]" },
  "request_body": { "fromAccount": "1001", "toAccount": "1002", "amount": 100 },
  "response_status": 200,
  "response_body": { "status": "success", "newBalance": 0 },
  "interest_score": 9,
  "llm_reasoning": "This endpoint transfers funds without an idempotency token...",
  "vulnerability_hints": [
    "POST request — state-changing, check for race conditions",
    "High-value keyword 'transfer' in URL — prime IDOR/BLA target"
  ]
}
```

| Class | What to Look For |
|---|---|
| IDOR | Numeric/UUID IDs in URL or body — swap them for another user's ID |
| Race Condition | Financial transactions, vote/like endpoints — send concurrent requests |
| BLA (business logic abuse) | Multi-step workflows — skip steps, replay tokens, abuse state transitions |
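The offline placeholder scorer can apply heuristics like these directly. A minimal sketch — the keyword list, weights, and function name are illustrative assumptions, not the tool's actual values:

```python
# Illustrative high-value keywords; the real tool's list may differ.
HIGH_VALUE_KEYWORDS = ("transfer", "payment", "order", "admin", "invite")

def placeholder_score(method: str, url: str):
    """Offline heuristic scorer: returns (score 1-10, list of hints)."""
    score, hints = 1, []
    lowered = url.lower()
    # State-changing methods are prime race-condition targets.
    if method.upper() in {"POST", "PUT", "PATCH", "DELETE"}:
        score += 4
        hints.append("State-changing method: check for race conditions")
    # High-value keywords flag IDOR / business-logic-abuse candidates.
    for kw in HIGH_VALUE_KEYWORDS:
        if kw in lowered:
            score += 4
            hints.append(f"High-value keyword '{kw}' in URL: prime IDOR/BLA target")
            break
    # Bare numeric path segments suggest swappable object IDs.
    if any(seg.isdigit() for seg in lowered.split("/")):
        score += 1
        hints.append("Numeric ID in URL: try swapping it (IDOR)")
    return min(score, 10), hints
```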
Change one line in .env:
```
LLM_PROVIDER=openai   # or: gemini, placeholder
```

To add a new provider (e.g., Anthropic Claude):

- Add `_call_claude()` in `llm_client.py`
- Add a branch in `score_request_interest()`
- Add `ANTHROPIC_API_KEY` to `config.py` and `.env.example`
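The branching pattern those steps describe can be sketched as a dispatch table. `_call_claude()` and `score_request_interest()` are named above; everything else here (`_call_placeholder`, the signature, the dispatch dict) is an assumption about how `llm_client.py` might be shaped, not its actual code:

```python
def _call_placeholder(prompt: str) -> int:
    """Offline fallback: no API call, fixed neutral score."""
    return 5

def _call_claude(prompt: str) -> int:
    """New provider branch; a real version would call the Anthropic API."""
    raise NotImplementedError("wire up the Anthropic client here")

def score_request_interest(prompt: str, provider: str = "placeholder") -> int:
    # One branch per provider, selected by LLM_PROVIDER.
    dispatch = {
        "placeholder": _call_placeholder,
        "claude": _call_claude,
        # "openai": _call_openai, "gemini": _call_gemini, ...
    }
    return dispatch[provider](prompt)
```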
| Variable | Default | Description |
|---|---|---|
| `TARGET_URL` | Juice Shop | Web app to observe |
| `HEADLESS` | `false` | Headless browser mode |
| `SLOW_MO_MS` | `0` | Slow down browser actions (ms) |
| `LLM_PROVIDER` | `placeholder` | `openai` / `gemini` / `placeholder` |
| `INTEREST_THRESHOLD` | `7` | Min score to write to `logic_map.json` |
| `OUTPUT_FILE` | `logic_map.json` | Output file path |
| `LOG_LEVEL` | `INFO` | `DEBUG` / `INFO` / `WARNING` |
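These variables would typically be read with defaults matching the table above; a sketch using `os.getenv` (the exact parsing in `config.py` may differ):

```python
import os

# Defaults mirror the settings table; parsing details are illustrative.
TARGET_URL = os.getenv("TARGET_URL", "https://juice-shop.herokuapp.com")
HEADLESS = os.getenv("HEADLESS", "false").lower() == "true"
SLOW_MO_MS = int(os.getenv("SLOW_MO_MS", "0"))
LLM_PROVIDER = os.getenv("LLM_PROVIDER", "placeholder")
INTEREST_THRESHOLD = int(os.getenv("INTEREST_THRESHOLD", "7"))
OUTPUT_FILE = os.getenv("OUTPUT_FILE", "logic_map.json")
LOG_LEVEL = os.getenv("LOG_LEVEL", "INFO")
```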
This tool is intended for authorized security research only. Only use it against systems you own or have explicit written permission to test. Unauthorized use may violate computer fraud laws.