A practical lab of AI security, network recon, and OS post-exploitation scenarios.
Each scenario lives in its own folder with:
- a runnable script (`*.sh` or Python `*.py`)
- a short `README.md`
- a `scenario_manifest.yaml` (metadata for orchestration & menus)
**⚠️ Ethics & Legality**

These materials are for learning in controlled environments (your lab VMs). Do not target systems you don't own or don't have explicit permission to test.
```
attacks/
  ai/
    Bert/
      adversarial-unicode/        # Text evasion via homoglyphs
      membership-inference/       # Privacy leakage on SST-2 style task
      model-extraction/           # Surrogate training & flip testing
      testing-availability.sh
    crop-service/
      backdoor-trigger-discovery/ # LightGBM: trigger discovery + heatmaps
      ownership-forgery/          # Watermark/ownership forgery & checks
    gemini/
      prompt-injection/           # LLM prompt injection pipeline
    land-service/                 # SSRF surface sweep & routes
      model_recon.py
      run_model_recon.sh
  network/
    recon/
      recon_quick/                # nmap top-1000
      recon_full/                 # full TCP sweep
      service_version/            # version + default NSE
      os_detection/               # OS guess
  os/
    auth-attacks/                 # SSH bruteforce (safe demo)
    post-exploit/
      lazagne_dump/               # local credential discovery wrapper
      post_exploit_persist/       # persistence (lab only)
  web/
    web-enum/                     # web dir fuzz (ffuf)
internal/
  advanced/mitm/                  # ARP spoof capture (pcap)
  validation/
    evaluate_defense.sh           # quick exposure/defense checks
    wazuh_inject.sh               # synthetic events to test pipeline
  menu_ai_attacks.sh              # interactive menu
shared/
  lib/common.sh                   # shared helpers
SCENARIOS.md                      # master list of scenarios
```
```bash
git clone https://github.com/TheSamuraiCorproation/Kali_attacks.git
cd Kali_attacks/
# ensure scripts are executable
find . -type f -name "*.sh" -exec chmod +x {} \;
```
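If you want to double-check the `chmod` pass, this small one-liner (not part of the repo; it assumes GNU `find`'s symbolic `-perm` syntax) lists any scripts that are still missing the owner-execute bit, so empty output means you're set:

```shell
# list any shell scripts that still lack the owner-execute bit
find . -type f -name "*.sh" ! -perm -u+x
```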
Some AI scenarios use Python (transformers, sklearn, pandas, etc.):

```bash
python3 -m venv .venv
source .venv/bin/activate
pip install -U pip
# Install what each scenario README lists (varies by scenario)
```

Some LLM scenarios (e.g., Gemini) require an API key:
```bash
export GEMINI_API_KEY="your_key_here"  # keep this out of git (see .gitignore)
```
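To fail fast when the key is missing, you could add a small guard like the hypothetical `require_gemini_key` below to the top of a runner script (a sketch only; the repo's scripts may already do their own checks):

```shell
# hypothetical pre-flight guard; returns nonzero if the key is missing
require_gemini_key() {
  if [ -z "${GEMINI_API_KEY:-}" ]; then
    echo "GEMINI_API_KEY is not set; export it before running LLM scenarios." >&2
    return 1
  fi
}
```

Usage: call `require_gemini_key || exit 1` before invoking any Gemini scenario.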
**Network – quick recon**

```bash
cd attacks/network/recon/recon_quick
./recon_quick.sh <TARGET-IP>
```
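For a rough idea of what a top-1000 scan involves, here is a hedged sketch of such a wrapper; it is not the repo's actual `recon_quick.sh` and assumes `nmap` is installed:

```shell
# hypothetical sketch of a quick-recon wrapper; not the repo's actual script
recon_quick() {
  if [ "$#" -ne 1 ]; then
    echo "usage: recon_quick <TARGET-IP>" >&2
    return 2
  fi
  # -T4: faster timing; --top-ports 1000: the 1000 most common TCP ports;
  # -oN: save human-readable output alongside the script
  nmap -T4 --top-ports 1000 -oN "recon_quick_$1.txt" "$1"
}
```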
**BERT – adversarial unicode**

```bash
cd attacks/ai/Bert/adversarial-unicode
./run_unicode_attack.sh
# See unicode_attack_results.csv and unicode_report.py for summary/report
```
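A quick way to eyeball the results CSV from the shell is an `awk` one-liner like the hypothetical helper below. It assumes the file has a header row and that the last column is a 0/1 "prediction flipped" flag; `unicode_report.py` is the authoritative summary:

```shell
# hypothetical flip-rate summary; assumes last CSV column is a 0/1 flag
summarize_flips() {
  awk -F, 'NR > 1 { total++; flipped += $NF }
           END { printf "flip rate: %d/%d\n", flipped, total }' "$1"
}
```

Usage: `summarize_flips unicode_attack_results.csv`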
**LLM – prompt injection (Gemini)**

```bash
cd attacks/ai/gemini/prompt-injection
./pipeline_runner.sh
```
**Validation – quick checks**

```bash
cd internal/validation
./evaluate_defense.sh
```
**Menu (AI scenarios)**

```bash
cd internal
./menu_ai_attacks.sh
```

Each scenario includes a `scenario_manifest.yaml` with:
- `id`, `name`, `category`, `estimated_duration`
- execution command(s)
- inputs/outputs
- prerequisites (e.g., internet, API key)
- (commented) `compatible_products` & `compatible_product_categories` (needs verification with a licensed tool)
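Field names below follow the list above, but the exact schema can vary per scenario; treat this as an illustrative sketch rather than the canonical format:

```yaml
# hypothetical example manifest for the quick-recon scenario
id: recon_quick
name: "Quick network recon (nmap top-1000)"
category: network
estimated_duration: "2m"
execution:
  - ./recon_quick.sh <TARGET-IP>
inputs:
  - TARGET-IP
outputs:
  - recon output file               # assumed; see the scenario README
prerequisites:
  - nmap installed
# compatible_products: []           # needs verification with a licensed tool
# compatible_product_categories: []
```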
The global index is in `SCENARIOS.md`.
- Keep each scenario self-contained: script(s) + README + manifest.
- Prefer small, reproducible datasets and short runtimes.
- Don’t commit secrets or large binaries; use samples/mocks where possible.
Add your preferred license (e.g., MIT or Apache-2.0) in LICENSE.