Attack Scenarios Lab

A practical lab of AI security, network recon, and OS post-exploitation scenarios.
Each scenario lives in its own folder with:

  • a runnable script (*.sh or *.py)
  • a short README.md
  • a scenario_manifest.yaml (metadata for orchestration & menus)

⚠️ Ethics & Legality: These materials are for learning in controlled environments (your own lab VMs).
Do not target systems you don’t own or don’t have explicit permission to test.


What’s inside

attacks/
  ai/
    Bert/
      adversarial-unicode/           # Text evasion via homoglyphs
      membership-inference/          # Privacy leakage on SST-2 style task
      model-extraction/              # Surrogate training & flip testing
      testing-availability.sh
    crop-service/
      backdoor-trigger-discovery/    # LightGBM: trigger discovery + heatmaps
      ownership-forgery/             # Watermark/ownership forgery & checks
    gemini/
      prompt-injection/              # LLM prompt injection pipeline
    land-service/                    # SSRF surface sweep & routes
    model_recon.py
    run_model_recon.sh
  network/
    recon/
      recon_quick/                   # nmap top-1000
      recon_full/                    # full TCP sweep
      service_version/               # version + default NSE
      os_detection/                  # OS guess
  os/
    auth-attacks/                    # SSH bruteforce (safe demo)
    post-exploit/
      lazagne_dump/                  # local credential discovery wrapper
      post_exploit_persist/          # persistence (lab only)
  web/
    web-enum/                        # web dir fuzz (ffuf)
internal/
  advanced/mitm/                     # ARP spoof capture (pcap)
  validation/
    evaluate_defense.sh              # quick exposure/defense checks
    wazuh_inject.sh                  # synthetic events to test pipeline
  menu_ai_attacks.sh                 # interactive menu
shared/
  lib/common.sh                      # shared helpers
SCENARIOS.md                         # master list of scenarios

Quick start

1) Clone & prepare

git clone https://github.com/TheSamuraiCorproation/Kali_attacks.git
cd Kali_attacks/
# ensure scripts are executable
find . -type f -name "*.sh" -exec chmod +x {} \;

2) (Optional) Create a Python venv

Some AI scenarios use Python libraries (transformers, scikit-learn, pandas, etc.).

python3 -m venv .venv
source .venv/bin/activate
pip install -U pip
# Install what each scenario README lists (varies by scenario)

3) Set any needed keys

Some LLM scenarios (e.g., Gemini) require an API key.

export GEMINI_API_KEY="your_key_here"   # keep this out of git (see .gitignore)
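One low-risk pattern for handling the key (a suggestion, not something this repo enforces) is to keep it in an untracked env file and source that, so the secret never sits in your shell history or in a committed script:

```shell
# Hypothetical pattern: store the key in a local .env file (the filename is
# our choice here, not mandated by the repo) and make sure git ignores it.
cat > .env <<'EOF'
export GEMINI_API_KEY="your_key_here"
EOF

# Add .env to .gitignore unless it is already listed
grep -qxF '.env' .gitignore 2>/dev/null || echo '.env' >> .gitignore

# Load the key into the current shell
. ./.env
echo "Key is ${GEMINI_API_KEY:+set}"
```

Scripts run from the same shell then inherit the variable without the key ever being committed.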

Running examples

Network – quick recon

cd attacks/network/recon/recon_quick
./recon_quick.sh <TARGET-IP>

BERT – adversarial unicode

cd attacks/ai/Bert/adversarial-unicode
./run_unicode_attack.sh
# See unicode_attack_results.csv and unicode_report.py for summary/report

LLM – prompt injection (Gemini)

cd attacks/ai/gemini/prompt-injection
./pipeline_runner.sh

Validation – quick checks

cd internal/validation
./evaluate_defense.sh

Menu (AI scenarios)

cd internal
./menu_ai_attacks.sh
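For orientation, the general shape of such a menu is: number the scenario folders, read a choice, and dispatch to the matching script. The sketch below is illustrative only (the scenario list and behavior of the real menu_ai_attacks.sh may differ):

```shell
#!/usr/bin/env bash
# Illustrative sketch of an interactive scenario menu -- NOT the actual
# internal/menu_ai_attacks.sh, just the pattern it follows.

menu() {
  # Hypothetical subset of scenario folders from this repo's tree
  local scenarios=(
    "attacks/ai/Bert/adversarial-unicode"
    "attacks/ai/gemini/prompt-injection"
    "attacks/network/recon/recon_quick"
  )
  local i choice
  for i in "${!scenarios[@]}"; do
    printf '%d) %s\n' "$((i + 1))" "${scenarios[i]}"
  done
  read -r -p "Select a scenario: " choice
  if [[ "$choice" =~ ^[0-9]+$ ]] && (( choice >= 1 && choice <= ${#scenarios[@]} )); then
    echo "Selected: ${scenarios[choice - 1]}"
    # A real menu would now cd into that folder and run its script.
  else
    echo "Invalid choice" >&2
    return 1
  fi
}

# Non-interactive demo: feed choice "2" on stdin
printf '2\n' | menu
```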

Scenario manifests

Each scenario includes a scenario_manifest.yaml with:

  • id, name, category, estimated_duration
  • execution command(s)
  • inputs/outputs
  • prerequisites (e.g., internet, API key)
  • (commented) compatible_products & compatible_product_categories

    # need verification with licensed tool

The global index is in SCENARIOS.md.
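Putting those fields together, a manifest might look like the sketch below. The field names come from the list above; the values are purely illustrative and will differ per scenario:

```yaml
# Hypothetical scenario_manifest.yaml -- values are examples, not from the repo
id: recon_quick
name: Quick network recon (nmap top-1000)
category: network
estimated_duration: 2m
execution:
  - ./recon_quick.sh <TARGET-IP>
inputs:
  - TARGET-IP
outputs:
  - recon_quick.txt
prerequisites:
  - nmap installed
# compatible_products: []                    # need verification with licensed tool
# compatible_product_categories: []
```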


Contributing

  • Keep each scenario self-contained: script(s) + README + manifest.
  • Prefer small, reproducible datasets and short runtimes.
  • Don’t commit secrets or large binaries; use samples/mocks where possible.

License

Add your preferred license (e.g., MIT or Apache-2.0) in LICENSE.
