zarni99/logic-ghost

LogicGhost-AI — Perception Layer

Stateful business logic vulnerability research tool. The Perception Layer silently observes a target web app's API traffic, scores each endpoint for vulnerability potential using an LLM, and outputs a structured red-team map.


Architecture

Browser (Playwright) → Network Interceptor → Session Map → LLM Scorer → logic_map.json
| File | Role |
| --- | --- |
| `perception_layer.py` | Main script — browser, interception, orchestration |
| `llm_client.py` | Swappable LLM abstraction (OpenAI / Gemini / Placeholder) |
| `config.py` | All settings — target URL, LLM provider, thresholds |
| `logic_map.json` | Output — high-interest findings for red-teaming |
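The interceptor stage boils down to deciding which responses are worth keeping. As a minimal sketch, a capture filter might look like the predicate below; it is illustrative, not the actual `perception_layer.py` logic:

```python
from urllib.parse import urlparse

# Hypothetical filter criteria; the real interceptor may differ.
CAPTURE_METHODS = {"GET", "POST", "PUT", "PATCH", "DELETE"}
STATIC_SUFFIXES = (".js", ".css", ".png", ".svg", ".woff2")

def should_capture(method: str, url: str, content_type: str) -> bool:
    """Return True for JSON API traffic worth scoring.

    Keeps JSON responses on ordinary HTTP methods and skips
    static assets, which dominate browser traffic.
    """
    if method.upper() not in CAPTURE_METHODS:
        return False
    if "application/json" not in content_type.lower():
        return False
    path = urlparse(url).path
    return not path.endswith(STATIC_SUFFIXES)
```

In a Playwright `page.on("response")` handler, a predicate like this would gate which request/response pairs are forwarded to the LLM scorer.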

Quick Start

1. Install dependencies

```bash
pip install -r requirements.txt
playwright install chromium
```

2. Configure

```bash
cp .env.example .env
# Edit .env — set TARGET_URL and optionally your LLM API key
```

Minimum config (no API key needed — uses offline placeholder scoring):

```
TARGET_URL=https://juice-shop.herokuapp.com
LLM_PROVIDER=placeholder
```

With OpenAI:

```
LLM_PROVIDER=openai
OPENAI_API_KEY=sk-...
```

With Gemini:

```
LLM_PROVIDER=gemini
GEMINI_API_KEY=AIza...
```

3. Run

```bash
python perception_layer.py
```

A browser window opens. Interact with the target app normally — log in, browse, perform actions. Every JSON API call is automatically captured, scored, and logged.

Press Ctrl+C to stop. A session summary is printed, and high-interest findings are in logic_map.json.


Output: logic_map.json

Each entry in the output file represents a high-interest API interaction:

```json
{
  "id": "uuid",
  "timestamp": "2026-02-18T10:30:00Z",
  "method": "POST",
  "url": "https://target.app/api/orders/transfer",
  "request_headers": { "content-type": "application/json", "cookie": "[REDACTED — 128 chars]" },
  "request_body": { "fromAccount": "1001", "toAccount": "1002", "amount": 100 },
  "response_status": 200,
  "response_body": { "status": "success", "newBalance": 0 },
  "interest_score": 9,
  "llm_reasoning": "This endpoint transfers funds without an idempotency token...",
  "vulnerability_hints": [
    "POST request — state-changing, check for race conditions",
    "High-value keyword 'transfer' in URL — prime IDOR/BLA target"
  ]
}
```
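Downstream tooling can triage the output in a few lines. This sketch assumes the file is a JSON array of entries shaped like the example above; field names are taken from that example:

```python
import json

def top_findings(entries, min_score=7):
    """Return entries at or above min_score, highest interest first.

    `entries` is the parsed contents of logic_map.json, assumed to be
    a list of dicts with an "interest_score" field as shown above.
    """
    hits = [e for e in entries if e.get("interest_score", 0) >= min_score]
    return sorted(hits, key=lambda e: e["interest_score"], reverse=True)

# Typical usage:
# with open("logic_map.json") as f:
#     for finding in top_findings(json.load(f)):
#         print(finding["interest_score"], finding["url"])
```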

Vulnerability Classes Targeted

| Class | What to Look For |
| --- | --- |
| IDOR | Numeric/UUID IDs in URL or body — swap them for another user's ID |
| Race Condition | Financial transactions, vote/like endpoints — send concurrent requests |
| BLA | Multi-step workflows — skip steps, replay tokens, abuse state transitions |
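For the race-condition class, the standard probe is firing near-identical requests as close to simultaneously as possible at a state-changing endpoint. A self-contained sketch using only the standard library; `send` is a stand-in for whatever callable performs the real HTTP request:

```python
import threading
from concurrent.futures import ThreadPoolExecutor

def race_probe(send, n=10):
    """Fire n calls of `send` with maximal overlap.

    `send` is any zero-argument callable (e.g. a POST to the
    transfer endpoint). A barrier holds every worker until all n
    threads are ready, so the requests are released together.
    """
    barrier = threading.Barrier(n)

    def worker():
        barrier.wait()  # all threads release at once
        return send()

    with ThreadPoolExecutor(max_workers=n) as pool:
        futures = [pool.submit(worker) for _ in range(n)]
        return [f.result() for f in futures]
```

If more of the responses succeed than the account balance or vote count should allow, the endpoint likely lacks atomic state checks.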

Swapping LLM Providers

Change one line in .env:

```
LLM_PROVIDER=openai   # or: gemini, placeholder
```

To add a new provider (e.g., Anthropic Claude):

  1. Add _call_claude() in llm_client.py
  2. Add a branch in score_request_interest()
  3. Add ANTHROPIC_API_KEY to config.py and .env.example
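The repo's actual dispatch code is not shown here, so the following is only one plausible shape for steps 1–2; `_call_claude` is stubbed rather than wired to the real Anthropic SDK, and the function names mirror the steps above:

```python
def _call_claude(prompt: str) -> str:
    # Step 1: would call the Anthropic Messages API using
    # ANTHROPIC_API_KEY; stubbed so the sketch stays self-contained.
    raise NotImplementedError("wire up the anthropic SDK here")

# Step 2: one way to branch on LLM_PROVIDER — a provider registry.
PROVIDERS = {
    "placeholder": lambda prompt: "5",  # offline heuristic stand-in
    "claude": _call_claude,
}

def score_request_interest(prompt: str, provider: str = "placeholder") -> int:
    """Dispatch to the configured provider and parse its score."""
    try:
        call = PROVIDERS[provider]
    except KeyError:
        raise ValueError(f"unknown LLM_PROVIDER: {provider}")
    return int(call(prompt))
```

A registry keeps each new provider to a one-line registration instead of growing an if/elif chain.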

Configuration Reference

| Variable | Default | Description |
| --- | --- | --- |
| `TARGET_URL` | Juice Shop | Web app to observe |
| `HEADLESS` | `false` | Headless browser mode |
| `SLOW_MO_MS` | `0` | Slow down browser actions (ms) |
| `LLM_PROVIDER` | `placeholder` | `openai` / `gemini` / `placeholder` |
| `INTEREST_THRESHOLD` | `7` | Min score to write to `logic_map.json` |
| `OUTPUT_FILE` | `logic_map.json` | Output file path |
| `LOG_LEVEL` | `INFO` | `DEBUG` / `INFO` / `WARNING` |
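A `config.py` built around these variables could read the environment with the table's defaults. This is a sketch, not the repo's actual `config.py`; the Juice Shop default URL is taken from the Quick Start example:

```python
import os

def load_config(env=os.environ):
    """Read settings from environment variables, applying the
    defaults documented in the table above."""
    return {
        "TARGET_URL": env.get("TARGET_URL", "https://juice-shop.herokuapp.com"),
        "HEADLESS": env.get("HEADLESS", "false").lower() == "true",
        "SLOW_MO_MS": int(env.get("SLOW_MO_MS", "0")),
        "LLM_PROVIDER": env.get("LLM_PROVIDER", "placeholder"),
        "INTEREST_THRESHOLD": int(env.get("INTEREST_THRESHOLD", "7")),
        "OUTPUT_FILE": env.get("OUTPUT_FILE", "logic_map.json"),
        "LOG_LEVEL": env.get("LOG_LEVEL", "INFO"),
    }
```

Passing `env` as a parameter keeps the loader testable without touching the real process environment.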

Legal Notice

This tool is intended for authorized security research only. Only use it against systems you own or have explicit written permission to test. Unauthorized use may violate computer fraud laws.
