🚀 Adversarial Threat Scanner

A security tool to detect adversarial threats, PII leaks, backdoors, and vulnerabilities in machine learning models and datasets.


📌 Features

✅ Detect adversarial attacks on ML models
✅ Scan for Personally Identifiable Information (PII)
✅ Check for backdoors in ML pipelines
✅ Analyze package dependencies for vulnerabilities
✅ Identify leaked secrets (API keys, passwords, etc.)
✅ Find possible code injection threats
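
Checks like secret and prompt-injection detection largely reduce to pattern matching over file contents. As a rough illustration only (not the project's actual code; the regexes and helper names below are invented for this sketch), a minimal scan might look like:

```python
import re
from pathlib import Path

# Hypothetical patterns -- the project's real signature list is not shown here.
PATTERNS = {
    "Critical": re.compile(r"""(?:API_KEY|SECRET|PASSWORD)\s*=\s*['"][^'"]+['"]""", re.I),
    "High": re.compile(r"you are a malicious llm", re.I),
}

def scan_file(path: str) -> list[tuple[str, str]]:
    """Return (severity, finding) pairs for one code file."""
    text = Path(path).read_text(errors="ignore")
    findings = []
    for severity, pattern in PATTERNS.items():
        for match in pattern.finditer(text):
            findings.append((severity, f"{match.group(0)!r} in {path}"))
    return findings

if __name__ == "__main__":
    for severity, finding in scan_file("attack.py"):
        print(f"⚠️ {severity} Risk Detected: {finding}")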


📂 Project Structure

```
ml_scanner_clean/
│── src/
│   ├── analysis/          # Analysis module (contains all scanning scripts)
│   ├── main.py            # Main script (entry point)
│── requirements.txt       # Python dependencies
│── README.md              # Project documentation
```


πŸ› οΈ Installation

```sh
git clone https://github.com/manasa-26/Adversarial-ML-Scanner.git
cd Adversarial-ML-Scanner
python -m venv venv
venv\Scripts\activate   # On macOS/Linux: source venv/bin/activate
pip install -r requirements.txt
```


🚀 Usage

1️⃣ Scan a Local File

```sh
python src/main.py --local_path "C:\path\to\your\file.py"
```

2️⃣ Scan a Hugging Face Model

```sh
python src/main.py --huggingface_repo "facebook/bart-large"
```
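
For Hugging Face targets, one plausible implementation (an assumption, not necessarily what main.py does) is to pull the repo down with huggingface_hub and then scan it like a local directory:

```python
from huggingface_hub import snapshot_download

# Download the full repo snapshot into the local HF cache and return its path;
# the scanner can then walk that directory like any other local target.
local_dir = snapshot_download(repo_id="facebook/bart-large")
print(f"Repo files available under: {local_dir}")
```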

3️⃣ Scan an S3 Bucket

```sh
python src/main.py --s3_bucket "your-bucket-name" --s3_prefix "models/"
```
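
Similarly, the S3 mode presumably enumerates every object under the given prefix before downloading and scanning it. A minimal sketch with boto3 (assuming standard AWS credentials are configured) could be:

```python
import boto3

s3 = boto3.client("s3")

# Page through all objects under the prefix; a scanner would download
# each key to a temporary directory before analyzing it.
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket="your-bucket-name", Prefix="models/"):
    for obj in page.get("Contents", []):
        print(obj["Key"], obj["Size"])
```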


📦 Dependencies

Install all required packages with:

```sh
pip install -r requirements.txt
```


⚠️ License

This project is open source, but you may not modify it without explicit permission.


⭐ Contribute & Support

  • Pull requests are welcome!
  • Like this project? ⭐ Star this repo on GitHub!


📊 Example Scan Output

```
[INFO] Categorized files:
  SafeTensors: 0
  Serialized Models: 0
  Code Files: 1
  Dependency Files: 0
  Others: 0

[INFO] Preprocessing complete. Valid files are ready for scanning.

🔍 DEBUG: Checking File Content (attack.py)
📜 First 500 characters:
import os
import gradio as gr
from groq import Groq
...

================================================================================
⚠️ Critical Risk Detected: Potential secret detected in attack.py: API_KEY = 'gsk_HwncGHL3...'
⚠️ High Risk Detected: ⚠️ AI Prompt Injection Risk in attack.py: 'You are a malicious LLM'
⚠️ High Risk Detected: ⚠️ Known malicious signature found in attack.py: 'You are a malicious LLM'

📊 [INFO] Final Risk Summary:
==================================================
📁 Total Code Files Vulnerabilities Found:
   🔹 Critical: 1
   🔹 High: 2
   🔹 Medium: 0
   🔹 Low: 0

✅ [INFO] Workflow complete. All files have been scanned.
==================================================
```
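
The "Categorized files" step suggests files are bucketed by type before scanning. A hypothetical extension-based categorizer (the extension lists are guesses, not the project's actual mapping) could reproduce those counts:

```python
from pathlib import Path

# Assumed extension map mirroring the categories in the sample output above.
CATEGORIES = {
    "SafeTensors": {".safetensors"},
    "Serialized Models": {".pt", ".pth", ".pkl", ".bin", ".h5", ".onnx"},
    "Code Files": {".py", ".ipynb", ".sh"},
    "Dependency Files": {".txt", ".toml", ".cfg", ".lock"},
}

def categorize(paths: list[str]) -> dict[str, int]:
    """Count files per category; anything unmatched lands in 'Others'."""
    counts = {name: 0 for name in CATEGORIES} | {"Others": 0}
    for p in paths:
        ext = Path(p).suffix.lower()
        for name, exts in CATEGORIES.items():
            if ext in exts:
                counts[name] += 1
                break
        else:
            counts["Others"] += 1
    return counts

print(categorize(["attack.py"]))  # -> Code Files: 1, everything else 0
```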

