
🔍 Vulnerability Scanner

A comprehensive automated vulnerability scanner with AI-powered recursive investigation, subdomain enumeration, and vulnerability assessment

Features · Installation · Recent Updates · Configuration · Usage



🎯 Project Overview

This Vulnerability Scanner is a full-stack application designed for comprehensive security reconnaissance and vulnerability assessment. It combines multiple industry-standard security tools with AI-powered analysis to provide deep insights into your target's security posture.

Core Technology Stack

Frontend:

  • ⚛️ React 18 + TypeScript
  • 🎨 TailwindCSS for modern UI
  • 📱 Vite 6.3.5 for fast builds with HMR
  • 🗺️ Mapbox GL for geolocation visualization
  • 🔔 react-hot-toast for notifications
  • 🎯 lucide-react for icons

Backend:

  • 🐍 Python 3.11 + Flask 3.1.0
  • 🔐 JWT authentication with HTTP-only cookies
  • 🗄️ MongoDB for data persistence
  • 🔄 Celery for asynchronous task execution
  • 📨 Redis as message broker
  • 📊 Real-time scan streaming

DevOps:

  • 🐳 Docker + Docker Compose
  • 🔄 Hot-reload enabled for development
  • 🌐 Multi-container architecture
  • 📦 Environment-aware tool execution

⚠️ LEGAL WARNING: This tool is intended for authorized security testing only. Scanning targets without explicit permission is illegal and unethical. Always obtain proper authorization before conducting any security assessments.


🆕 Recent Major Updates (January 2026)

1. Complete Asset Management System ✨

New Features:

  • Full CRUD Operations: Create, Read, Update, Delete assets
  • Dark Theme UI: Matches the application's #1a1f3a background with #252b48 containers
  • Click-to-View Modal: Detailed asset information display with organized sections
  • Sidebar Navigation: Quick access to all saved assets with counts
  • Specific API Keys: Separate fields for Shodan and FOFA keys with hints
  • Optional Field Labels: Clear indication of required vs optional fields

Technical Implementation:

  • Frontend: Frontend/src/components/asset/AssetForm.tsx (627 lines)

    • Cookie-based authentication using credentials: 'include'
    • Comprehensive null/undefined safety checks
    • Fallback handling for legacy database schemas
    • Real-time asset list updates after operations
  • Backend: Backend/main.py

    • POST /api/assets (Line 489-540): Create assets with validation
    • GET /api/assets (Line 543-558): Fetch user's assets
    • DELETE /api/assets/<id> (Line 559-580): Delete with authorization
    • All endpoints use @jwt_required() decorator

Data Structure:

{
  "_id": ObjectId,
  "user_id": String,
  "companyName": String,        // required
  "domains": Array<String>,
  "ipAddresses": Array<String>,
  "endpoints": Array<String>,
  "shodanKey": String,          // optional
  "fofaKey": String,            // optional
  "created_at": DateTime,
  "updated_at": DateTime
}

2. Fixed Authentication System 🔐

Previous Issue: The application looked for tokens in localStorage that were never set.

Solution:

  • ✅ Converted to HTTP-only cookie authentication
  • ✅ All fetch requests now use credentials: 'include'
  • ✅ Removed localStorage token logic completely
  • ✅ Automatic cookie management by browser

Files Modified:

  • Frontend/src/components/asset/AssetForm.tsx: All API calls use cookies
  • Frontend/src/context/AuthContext.tsx: Uses /auth/me endpoint
  • Backend/main.py: JWT stored in HTTP-only cookies

Benefits:

  • 🛡️ More secure (XSS-resistant)
  • 🔄 Automatic token refresh
  • 🚫 No manual token management needed

3. Complete Security Tools Integration 🛠️

All Tools Now Fully Functional:

| Tool | Version | Purpose | Status |
|------|---------|---------|--------|
| Nmap | 7.95 | Port scanning & service detection | ✅ Working |
| Nikto | Latest | Web server vulnerability scanning | ✅ Working |
| SQLMap | Latest | SQL injection detection | ✅ Working |
| FFUF | 2.1.0 | Directory bruteforcing | ✅ Working |
| testssl.sh | Latest | SSL/TLS security testing | ✅ Working |
| WPScan | Latest | WordPress vulnerability scanning | ✅ Working |
| WhatWeb | Latest | Technology fingerprinting | ✅ Working |
| Subfinder | Latest | Subdomain enumeration | ✅ Working |
| DNSx | Latest | DNS validation | ✅ Working |
| HTTPx | Latest | HTTP probing | ✅ Working |
| Nuclei | Latest | Vulnerability scanning | ✅ Working |

Installation Methods:

# Dockerfile.backend

# System packages (apt)
RUN apt update && apt install -y \
    curl unzip git wget \
    ruby ruby-dev \
    whatweb \
    dnsutils iputils-ping nmap

# GitHub installations
RUN git clone --depth 1 https://github.com/sullo/nikto /opt/nikto
RUN git clone --depth 1 https://github.com/sqlmapproject/sqlmap.git /opt/sqlmap  
RUN git clone --depth 1 https://github.com/drwetter/testssl.sh.git /opt/testssl.sh

# Binary installations
RUN curl -L https://github.com/ffuf/ffuf/releases/download/v2.1.0/ffuf_2.1.0_linux_amd64.tar.gz | \
    tar -xzf - -C /usr/local/bin

4. Environment Detection System 🌍

Cross-Platform Tool Execution:

Created Backend/utils/automated_tools/env_helper.py for automatic environment detection:

import os

def is_docker():
    """Detect whether we are running inside a Docker container."""
    if os.path.exists('/.dockerenv'):
        return True
    try:
        with open('/proc/1/cgroup', 'r') as f:
            return 'docker' in f.read()
    except OSError:
        # /proc/1/cgroup is unavailable outside Linux
        return False

def get_command_prefix():
    """Return the command prefix appropriate for the current environment."""
    if is_docker():
        return []  # direct execution inside the container
    elif os.name == 'nt':
        return ['wsl', '-d', 'Ubuntu-22.04']  # Windows: run Linux tools via WSL
    else:
        return []  # native Linux

Benefits:

  • ✅ Works in Docker containers
  • ✅ Works on Windows with WSL
  • ✅ Works on native Linux
  • ✅ No hardcoded paths
  • ✅ Automatic path detection

Files Using Environment Detection:

  • Backend/nmap.py: Nmap execution with 300s timeout
  • Backend/ffuf.py: FFUF path detection
  • Backend/utils/automated_tools/wpscan.py: WPScan wrapper
  • Backend/utils/automated_tools/nikto.py: Nikto wrapper
  • Backend/utils/automated_tools/testssl_runner.py: testssl.sh wrapper
  • Backend/utils/automated_tools/sqlmap.py: SQLMap wrapper (NEW)

5. SQLMap Integration 💉

New File: Backend/utils/automated_tools/sqlmap.py

Features:

  • ✅ Environment-aware execution
  • ✅ 120-second timeout with threading
  • ✅ ANSI color code stripping
  • ✅ Intelligent result summarization
  • ✅ Vulnerability detection in output
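
The ANSI-stripping step can be sketched with a small regex helper (illustrative only; the actual logic in Backend/utils/automated_tools/sqlmap.py may differ):

```python
import re

# Matches ANSI escape sequences such as "\x1b[1;31m" (colors, cursor moves)
ANSI_RE = re.compile(r'\x1b\[[0-9;]*[A-Za-z]')

def strip_ansi(text: str) -> str:
    """Remove ANSI terminal escape codes from tool output."""
    return ANSI_RE.sub('', text)
```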

Usage:

from Backend.utils.automated_tools.sqlmap import run_sqlmap_scan

result = run_sqlmap_scan("sqlmap -u 'https://example.com' --batch --risk=1")

Integration:

  • Added to Backend/utils/tool_executor.py
  • Accessible via LLM command suggestions
  • Automatic timeout handling

6. Performance & Timeout Fixes ⏱️

Nmap Timeout Extended:

  • Before: 60 seconds (too short for comprehensive scans)
  • After: 300 seconds (5 minutes)
  • File: Backend/nmap.py line ~35

Why This Matters:

  • Large networks need time to scan
  • Prevents false "timeout" errors
  • Allows thorough service detection
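
A minimal sketch of the timeout pattern with subprocess (the function name run_tool is hypothetical; Backend/nmap.py's real wrapper differs):

```python
import subprocess

def run_tool(cmd, timeout=300):
    """Run a scanner command with a hard timeout (300s for nmap)."""
    try:
        proc = subprocess.run(cmd, capture_output=True, text=True,
                              timeout=timeout)
        return proc.stdout
    except subprocess.TimeoutExpired:
        # Surface a clear message instead of an unhandled exception
        return f'[!] {cmd[0]} timed out after {timeout}s'

# e.g. run_tool(['nmap', '-sV', 'example.com'])
```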

7. Auto-Reload Development Mode 🔄

Enabled Flask Debug Mode:

# Dockerfile.backend
ENV FLASK_ENV=development
ENV FLASK_DEBUG=1

In main.py:

app.run(host="0.0.0.0", port=5000, debug=True)

Benefits:

  • ✅ Code changes reload automatically
  • ✅ No need to restart containers
  • ✅ Faster development cycle
  • ✅ Better error messages

8. Centralized API Configuration 🔧

New File: Frontend/src/config/api.ts

Purpose: Single source of truth for API endpoints

export const API_CONFIG = {
  baseURL: import.meta.env.VITE_API_BASE_URL || 'http://localhost:5000'
};

// Usage in components:
fetch(`${API_CONFIG.baseURL}/api/assets`)

Files Using API_CONFIG:

  • Frontend/src/components/asset/AssetForm.tsx
  • Frontend/src/components/scan/*.tsx
  • Frontend/src/context/AuthContext.tsx

Benefits:

  • ✅ Easy environment switching
  • ✅ No hardcoded URLs
  • ✅ Type-safe imports

9. Background Job System with Celery & Redis 🔄 NEW (January 15, 2026)

Complete Asynchronous Task Processing System

Infrastructure Added:

  • Redis - Message broker on port 6379
  • Celery Worker - Background task executor (12 concurrent workers)
  • Celery Beat - Task scheduler for periodic scans

Docker Services:

# docker-compose.yml
redis:
  image: redis:7-alpine
  ports:
    - "6379:6379"

celery-worker:
  command: celery -A tasks worker --loglevel=info
  depends_on:
    - redis
    - mongo

celery-beat:
  command: celery -A tasks beat --loglevel=info
  depends_on:
    - redis
    - celery-worker

Scheduled Tasks:

  • 📅 Daily Subdomain Scan - Runs at 3:40 PM IST
  • 🎯 Domain: Configurable via AUTO_SCAN_DOMAIN in .env
  • 📊 Auto-saves to MongoDB collection scan_results_subfinder
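
The schedule above could be expressed in a Celery beat configuration roughly like this (a sketch; the real tasks.py may be organized differently, and the task name periodic_subdomain_scan is taken from the manual-trigger example in this section):

```python
from celery import Celery
from celery.schedules import crontab

app = Celery('tasks',
             broker='redis://redis:6379/0',
             backend='redis://redis:6379/0')
app.conf.timezone = 'Asia/Kolkata'  # IST, matching TZ in .env
app.conf.beat_schedule = {
    'daily-subdomain-scan': {
        'task': 'tasks.periodic_subdomain_scan',
        'schedule': crontab(hour=15, minute=40),  # 3:40 PM IST
    },
}
```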

New API Endpoints:

1. Trigger Background Scan

POST /api/trigger_background_scan
Authorization: Bearer <jwt_token>
Content-Type: application/json

{
  "domain": "example.com"  // Optional, uses AUTO_SCAN_DOMAIN if not provided
}

Response (202):
{
  "status": "success",
  "message": "Background scan queued for domain: example.com",
  "task_id": "6356dd8d-ff14-4e79-bb2e-4091e1699638"
}

2. Check Scan Status

GET /api/background_scan_status/<task_id>
Authorization: Bearer <jwt_token>

Response:
{
  "status": "completed",
  "result": {
    "status": "completed",
    "domain": "ds.study.iitm.ac.in",
    "subdomains_found": 1,
    "subdomains_stored": 1,
    "scan_id": "20260115151710_ce5b67d6-60ac-43ac-b2d2-5eb624ff2cf2"
  }
}
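
A client can poll this endpoint until the task settles. Here is a minimal, transport-agnostic sketch (wait_for_scan and fetch_status are illustrative names, not part of the project):

```python
import time

def wait_for_scan(fetch_status, task_id, interval=2.0, max_wait=600.0):
    """Poll the status endpoint until the task finishes or we give up.

    fetch_status is any callable that returns the parsed JSON from
    GET /api/background_scan_status/<task_id> (injected here so the
    sketch stays independent of the HTTP client you use).
    """
    deadline = time.monotonic() + max_wait
    while time.monotonic() < deadline:
        payload = fetch_status(task_id)
        if payload.get('status') in ('completed', 'failed'):
            return payload
        time.sleep(interval)
    raise TimeoutError(f'task {task_id} still pending after {max_wait}s')
```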

Improved Logging:

[AUTO-SCAN] ✅ Starting scan session: 20260115151710_ce5b... for domain: example.com
[AUTO-SCAN] 📍 Using tools from: /app/tools
[AUTO-SCAN] 🔍 Running subfinder for domain: example.com
[AUTO-SCAN] ✅ Found 5 subdomains
[AUTO-SCAN] 🔄 Processing subdomain 1/5: api.example.com
[AUTO-SCAN]   ✅ DNSX completed for api.example.com
[AUTO-SCAN]   ✅ HTTPX completed for api.example.com
[AUTO-SCAN]   🔍 Running Nuclei scan for https://api.example.com
[AUTO-SCAN]   ✅ Nuclei found 2 vulnerabilities
[AUTO-SCAN]   ✅ Saved to MongoDB (1/5)
[AUTO-SCAN] 🎉 SCAN COMPLETED for example.com
[AUTO-SCAN] 📊 Results: 5/5 subdomains stored successfully

Files Modified:

  • Backend/tasks.py - Celery task definitions
  • Backend/utils/scanner_task_runner.py - Improved logging, fixed subprocess calls
  • Backend/main.py - Added background scan API endpoints
  • docker-compose.yml - Added redis, celery-worker, celery-beat services
  • .env - Added AUTO_SCAN_DOMAIN, CELERY_BROKER_URL, CELERY_RESULT_BACKEND

Configuration:

# .env file
AUTO_SCAN_DOMAIN=ds.study.iitm.ac.in
CELERY_BROKER_URL=redis://redis:6379/0
CELERY_RESULT_BACKEND=redis://redis:6379/0
TZ=Asia/Kolkata

Monitoring Commands:

# Check running containers
docker ps

# Watch Celery worker logs
docker logs celery-worker -f

# Watch Celery beat (scheduler) logs
docker logs celery-beat -f

# Test Redis connectivity
docker exec -it redis redis-cli ping

# Manually trigger a scan
docker exec celery-worker python -c "from tasks import periodic_subdomain_scan; periodic_subdomain_scan.delay('example.com')"

Benefits:

  • ✅ Non-blocking scans - API returns immediately
  • ✅ Scheduled periodic scans
  • ✅ Task status tracking
  • ✅ Scalable architecture
  • ✅ Clear error messages with emojis
  • ✅ Result persistence in Redis

✨ Key Features

🔎 Reconnaissance & Discovery

  • Subdomain Enumeration

    • Knockpy integration for comprehensive subdomain discovery
    • Subfinder + DNSx + HTTPx Pipeline - Modern automated workflow
      • Subfinder: Multi-source passive subdomain discovery
      • DNSx: Fast DNS resolution and validation
      • HTTPx: HTTP/HTTPS probing with technology detection
      • Nuclei: Automated vulnerability scanning
    • Certificate transparency log analysis
    • DNS record enumeration
    • Real-time streaming results
  • Network Analysis

    • Nmap port scanning with service detection
    • HTTPx for HTTP/HTTPS probing
    • DNSx for DNS enumeration
    • Banner grabbing and service fingerprinting
  • Technology Detection

    • WhatWeb for CMS and technology identification
    • Wappalyzer integration
    • HTTP header analysis
    • SSL/TLS certificate inspection
  • Infrastructure Mapping

    • IP geolocation and ASN tracking
    • DNS and mail server diagnostics via MXToolbox API
    • Certificate chain validation
    • WHOIS information gathering

🛡️ Vulnerability Assessment

  • 🔒 OWASP ZAP Integration

    • Spider scan for comprehensive crawling
    • Active vulnerability scanning
    • Passive vulnerability detection
    • Custom scan policies
  • ⚙️ Automated Scanning

    • Celery-based asynchronous task execution
    • Redis message broker for task queuing
    • Parallel scan execution
    • Scheduled periodic scans
  • 🎯 Security Testing

    • Nikto web server scanner
    • Directory brute-forcing with FFuF
    • SSL/TLS security testing with testssl.sh
    • WordPress vulnerability scanning with WPScan

📊 Reporting & Analytics

  • 📄 Comprehensive PDF Reports

    • Executive summary with risk ratings
    • Detailed findings with CVE references
    • Remediation recommendations
    • Evidence screenshots and proof-of-concept
  • 📈 Historical Analysis Dashboard

    • Subdomain growth tracking over time
    • Service and port change detection
    • Technology stack evolution
    • Certificate lifecycle monitoring
    • Vulnerability trend analysis
  • 🔔 Email Notifications

    • Scan completion alerts
    • Critical vulnerability notifications
    • Account verification
    • Password reset functionality

👥 User Management

  • 🔐 Authentication System

    • JWT-based authentication
    • Email verification with OTP
    • Secure password hashing
    • Session management
  • 📊 Multi-tenant Support

    • User-specific scan history
    • Role-based access control
    • Asset management per user
    • Private scan results

🏗️ Architecture

┌─────────────────────────────────────────────────────────────────┐
│                         Frontend (React)                        │
│  ┌──────────────┐  ┌──────────────┐  ┌────────────────────┐   │
│  │  Dashboard   │  │   Scan UI    │  │  Report Viewer     │   │
│  └──────────────┘  └──────────────┘  └────────────────────┘   │
└────────────────────────────┬────────────────────────────────────┘
                             │ REST API
┌────────────────────────────┴────────────────────────────────────┐
│                      Backend (Flask)                            │
│  ┌──────────────┐  ┌──────────────┐  ┌────────────────────┐   │
│  │  API Routes  │  │  Auth System │  │  PDF Generator     │   │
│  └──────┬───────┘  └──────────────┘  └────────────────────┘   │
│         │                                                        │
│  ┌──────┴───────────────────────────────────────────────────┐  │
│  │              Celery Task Queue                           │  │
│  │  ┌────────┐  ┌────────┐  ┌────────┐  ┌────────┐        │  │
│  │  │  Nmap  │  │Knockpy │  │  ZAP   │  │  FFuF  │  ...   │  │
│  │  └────────┘  └────────┘  └────────┘  └────────┘        │  │
│  └──────────────────────────────────────────────────────────┘  │
└────────────┬────────────────┬──────────────────┬────────────────┘
             │                │                  │
    ┌────────┴────────┐  ┌───┴─────┐  ┌────────┴─────────┐
    │   MongoDB       │  │  Redis  │  │  External APIs   │
    │  (Database)     │  │ (Broker)│  │  (Cohere, etc.)  │
    └─────────────────┘  └─────────┘  └──────────────────┘

🔧 Configuration Modules NEW ✨

The application now includes centralized configuration modules for better maintainability and security.

Frontend Configuration Structure

Frontend/
├── .env.example          # Environment variable template
├── .env                  # Your local config (not in git)
├── src/
│   ├── config/
│   │   ├── api.ts        # API endpoint configuration
│   │   └── mapbox.ts     # Mapbox token configuration
│   ├── components/
│   └── ...

API Configuration (Frontend/src/config/api.ts)

Centralizes all API endpoints and provides helper functions:

import { getApiUrl, API_CONFIG } from '@/config/api';

// Use centralized configuration
const response = await fetch(
  getApiUrl(API_CONFIG.endpoints.scanSubdomain),
  { method: 'POST', body: JSON.stringify(data) }
);

Benefits:

  • ✅ Single source of truth for API URLs
  • ✅ Environment-based configuration (dev/staging/prod)
  • ✅ Easy to update all endpoints at once
  • ✅ Type-safe endpoint references

Mapbox Configuration (Frontend/src/config/mapbox.ts)

Manages Mapbox token and map settings:

import { MAPBOX_CONFIG } from '@/config/mapbox';
import mapboxgl from 'mapbox-gl';

// Use configured token
mapboxgl.accessToken = MAPBOX_CONFIG.accessToken;

Benefits:

  • ✅ Secure token management via environment variables
  • ✅ Centralized map configuration (style, zoom, defaults)
  • ✅ Token validation helpers
  • ✅ Easy to switch between development/production tokens

Migration from Hardcoded Values

For developers updating existing code, see Frontend/MIGRATION_EXAMPLE.ts for conversion examples from hardcoded URLs to the new configuration system.



🤖 AI-Powered Recursive Recon & Investigation

The AI-driven loop makes reconnaissance iterative and focused while maintaining safety controls:

graph LR
    A[Initial Recon] --> B[LLM Analysis]
    B --> C[Generate Commands]
    C --> D[Sandboxed Execution]
    D --> E[Results Analysis]
    E --> B

How It Works

  1. 📥 Initial Analysis: Feed reconnaissance output (subdomains, open ports, headers, certificates, tech stack) into an LLM
  2. 💡 Smart Suggestions: LLM suggests next-step investigative commands (targeted HTTP probes, specific ZAP scans, banner grabs)
  3. 🔐 Sandboxed Execution: Execute suggested commands in a controlled, rate-limited environment
  4. 🔄 Iterative Refinement: Return results to LLM for analysis and refined recommendations
  5. ♻️ Recursive Loop: Each iteration becomes smarter and more focused based on previous findings
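
The loop above can be sketched in a few lines (suggest and execute are stand-ins for the LLM call and the sandboxed runner; nothing here is the project's real interface):

```python
def recursive_recon(initial_output, suggest, execute, max_rounds=3):
    """Iterate: LLM suggests commands, sandbox runs them, results feed back.

    suggest(last_finding) -> list of command strings (the LLM step)
    execute(command)      -> result string (the sandboxed, rate-limited step)
    """
    findings = [initial_output]
    for _ in range(max_rounds):
        commands = suggest(findings[-1])  # LLM proposes next commands
        if not commands:                  # nothing left to investigate
            break
        results = [execute(cmd) for cmd in commands]
        findings.extend(results)          # feed results back into the loop
    return findings
```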

🔒 Critical Safety Note: Run the AI loop only in a tightly controlled sandbox with strict network egress controls, resource limits, and explicit scope/rate restrictions to prevent accidental or abusive scanning. Logging, human approval gates, and kill-switches are strongly recommended.


📦 Prerequisites

Before you begin, ensure you have the following installed on your system:

Required Software

| Software | Minimum Version | Purpose |
|----------|-----------------|---------|
| Python | 3.9+ | Backend runtime |
| Node.js | 16+ | Frontend runtime |
| npm | 8+ | Package manager (comes with Node.js) |
| MongoDB | 5.0+ | Database |
| Redis | 6.0+ | Task queue broker |
| Docker | 20+ | Container runtime (optional) |
| Git | 2.30+ | Version control |

Security Tools

These tools should be installed and accessible in your system PATH or specified in TOOLS_DIR:

| Tool | Purpose | Installation |
|------|---------|--------------|
| Nmap | Port scanning | sudo apt install nmap |
| Subfinder | Subdomain enumeration | See Subfinder Setup below |
| HTTPx | HTTP probing | See Subfinder Setup below |
| DNSx | DNS enumeration | See Subfinder Setup below |
| Nuclei | Vulnerability scanner | See Subfinder Setup below |
| Knockpy | DNS reconnaissance | pip install knockpy |
| WhatWeb | Technology detection | sudo apt install whatweb |
| Nikto | Web vulnerability scanner | sudo apt install nikto |
| FFuF | Fuzzing tool | GitHub |
| testssl.sh | SSL/TLS testing | GitHub |
| WPScan | WordPress scanner | GitHub |
| OWASP ZAP | Vulnerability scanner | See ZAP Setup below |

API Keys Required

You'll need API keys from the following services:

  1. MXToolbox API - DNS and mail server diagnostics

  2. DNSDumpster API - DNS reconnaissance

  3. Cohere API - AI-powered analysis

  4. SMTP Server - Email notifications

    • Gmail App Password (recommended)
    • Or any SMTP provider

🚀 Quick Start

Method 1: Docker Deployment (Recommended)

This is the easiest way to get started with all services pre-configured.

Step 1: Clone the Repository

git clone https://github.com/Omjee73/Vulnerability_Scanner.git
cd Vulnerability_Scanner

Step 2: Configure Environment Variables

Create a .env file in the root directory:

# Copy the example file
cp .env.example .env

# Edit with your preferred editor
nano .env  # or notepad .env on Windows

Fill in your configuration (see Configuration section for details).

Step 3: Start with Docker Compose

# Build and start all services
docker-compose up --build

# Or run in detached mode
docker-compose up -d

This will start:

  • Frontend on http://localhost:3000
  • Backend API on http://localhost:5000
  • MongoDB on localhost:27017
  • Redis on localhost:6379

Step 4: Access the Application

Open your browser and navigate to:

http://localhost:3000

Create an account and start scanning!

💡 Quick Tip: For Subfinder scanning, you'll need to install additional tools. See the Subfinder Pipeline Setup section for detailed instructions.


Method 2: Manual Installation

For development or customization, you can install components individually.

Step 1: Clone the Repository

git clone https://github.com/Omjee73/Vulnerability_Scanner.git
cd Vulnerability_Scanner

Step 2: Backend Setup

# Navigate to Backend directory
cd Backend

# Create virtual environment
python -m venv venv

# Activate virtual environment
# On Windows:
venv\Scripts\activate
# On macOS/Linux:
source venv/bin/activate

# Install Python dependencies
pip install --upgrade pip
pip install -r requirements.txt

# Create .env file
cp .env.example .env
# Edit .env with your configuration (see Configuration section)

Step 3: Frontend Setup

Open a new terminal:

# Navigate to Frontend directory
cd Frontend

# Install Node.js dependencies
npm install

# Create production build (optional)
npm run build

# Or start development server
npm run dev

Step 4: Start MongoDB

# On Linux/macOS:
sudo systemctl start mongod

# On Windows (if installed as service):
net start MongoDB

# Or using Docker:
docker run -d -p 27017:27017 --name mongodb mongo:latest

Step 5: Start Redis

# On Linux:
sudo systemctl start redis

# On macOS:
brew services start redis

# On Windows or using Docker:
docker run -d -p 6379:6379 --name redis redis:latest

Step 6: Start Backend Server

In your backend terminal (with virtual environment activated):

# Start Flask application
python main.py

The backend API will be available at http://localhost:5000

Step 7: Start Celery Workers

Open another terminal in the Backend directory:

# Activate virtual environment
source venv/bin/activate  # or venv\Scripts\activate on Windows

# Start Celery worker for task processing
celery -A celery_worker.celery_app worker --loglevel=info --pool=solo

# Optional: Start Celery Beat for scheduled tasks (in another terminal)
celery -A celery_worker.celery_app beat --loglevel=info

Note for Windows Users: Use the --pool=solo flag with the Celery worker on Windows, since Windows does not support the default prefork pool.

Step 8: Access the Application

Open your browser and navigate to:

http://localhost:3000

📦 Asset Management System NEW ✨

The application now includes a comprehensive Asset Management system for organizing and tracking your security testing targets.

Features

  • Create, Read, Update, Delete (CRUD) operations
  • Organized Storage - All assets linked to your user account
  • Quick Access - Click any asset to view full details
  • API Key Management - Store Shodan and FOFA keys per asset
  • Multi-Target Support - Multiple domains, IPs, and endpoints per asset

Asset Structure

Each asset can contain:

| Field | Type | Required | Description |
|-------|------|----------|-------------|
| Company Name | String | ✅ Yes | Organization or project name |
| Domains | Array | ❌ Optional | List of domain names (e.g., example.com) |
| IP Addresses | Array | ❌ Optional | List of IP addresses to scan |
| Endpoints | Array | ❌ Optional | Specific URLs or API endpoints |
| Shodan Key | String | ❌ Optional | Your Shodan API key for this asset |
| FOFA Key | String | ❌ Optional | Your FOFA API key for this asset |

Using Asset Management

1. Navigate to Assets Page

From the dashboard, click "Assets" in the navigation menu or visit:

http://localhost:3000/assets

2. Create a New Asset

Click the "Add New Asset" button and fill in the form:

Company Name: Acme Corporation          [Required]
Domains: acme.com, www.acme.com        [Optional - comma-separated]
IP Addresses: 192.168.1.1, 10.0.0.1    [Optional - comma-separated]
Endpoints: https://api.acme.com/v1     [Optional - comma-separated]
Shodan API Key: **********************  [Optional]
FOFA API Key: ************************  [Optional]

Click "Save Asset" to store.

3. View Asset Details

Click on any asset card in the list to open a modal with full details:

  • All domains, IPs, and endpoints displayed
  • API keys shown (with masking for security)
  • Creation and update timestamps
  • Quick copy buttons for values
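
Key masking can be as simple as a helper like this (mask_key is a hypothetical illustration, not the component's actual code):

```python
def mask_key(key, visible=4):
    """Show only the last few characters of an API key (e.g. ****abcd)."""
    if not key:
        return ''
    if len(key) <= visible:
        return '*' * len(key)  # too short to partially reveal
    return '*' * (len(key) - visible) + key[-visible:]
```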

4. Delete an Asset

Click the trash icon (🗑️) on an asset card and confirm deletion.

Asset Data Flow

┌─────────────────────────────────────────────────────────────┐
│                      Frontend UI                            │
│  ┌────────────┐  ┌────────────┐  ┌────────────┐           │
│  │ Asset Form │  │ Asset List │  │ Asset Modal│           │
│  └─────┬──────┘  └─────┬──────┘  └─────┬──────┘           │
└────────┼───────────────┼───────────────┼──────────────────┘
         │               │               │
         ▼               ▼               ▼
    POST /api/assets  GET /api/assets  View Details
         │               │               
         ▼               ▼               
┌─────────────────────────────────────────────────────────────┐
│                    Backend API (Flask)                      │
│  ┌──────────────────────────────────────────────────────┐  │
│  │  @jwt_required() - Validates HTTP-only cookie        │  │
│  └──────────────────────────────────────────────────────┘  │
│                           │                                 │
│                           ▼                                 │
│  ┌──────────────────────────────────────────────────────┐  │
│  │  Asset Operations:                                    │  │
│  │  - Create: user_id + asset data → MongoDB           │  │
│  │  - Read: filter by user_id                           │  │
│  │  - Delete: verify user_id matches before deletion    │  │
│  └──────────────────────────────────────────────────────┘  │
└───────────────────────────┬─────────────────────────────────┘
                            │
                            ▼
                   ┌─────────────────┐
                   │    MongoDB      │
                   │  "assets" coll  │
                   └─────────────────┘

API Endpoints

Create Asset:

POST /api/assets
Content-Type: application/json
Cookie: access_token_cookie=<jwt>

{
  "companyName": "Example Corp",
  "domains": ["example.com", "www.example.com"],
  "ipAddresses": ["192.168.1.1"],
  "endpoints": ["https://api.example.com"],
  "shodanKey": "optional-shodan-key",
  "fofaKey": "optional-fofa-key"
}

Get All User Assets:

GET /api/assets
Cookie: access_token_cookie=<jwt>

Response: [
  {
    "_id": "65abc123...",
    "user_id": "user123",
    "companyName": "Example Corp",
    "domains": ["example.com"],
    ...
  }
]

Delete Asset:

DELETE /api/assets/<asset_id>
Cookie: access_token_cookie=<jwt>

Frontend Components

Main Component: Frontend/src/components/asset/AssetForm.tsx

Key Features:

  • Cookie-based authentication with credentials: 'include'
  • Real-time validation and error handling
  • Toast notifications for user feedback
  • Automatic asset list refresh after operations
  • Modal view for detailed asset information
  • Dark theme matching application style (#1a1f3a, #252b48)

Backend Implementation

File: Backend/main.py

Create Asset (Lines 489-540):

@app.route('/api/assets', methods=['POST'])
@jwt_required()
def create_asset():
    current_user = get_jwt_identity()
    data = request.get_json()

    # Validate the only required field
    if not data or not data.get('companyName'):
        return jsonify({"error": "companyName is required"}), 400

    asset = {
        "user_id": current_user,
        "companyName": data.get('companyName'),
        "domains": data.get('domains', []),
        "ipAddresses": data.get('ipAddresses', []),
        "endpoints": data.get('endpoints', []),
        "shodanKey": data.get('shodanKey'),
        "fofaKey": data.get('fofaKey'),
        "created_at": datetime.utcnow(),
        "updated_at": datetime.utcnow()
    }

    result = collection_assets.insert_one(asset)
    return jsonify({"message": "Asset created", "id": str(result.inserted_id)}), 201

Get Assets (Lines 543-558):

@app.route('/api/assets', methods=['GET'])
@jwt_required()
def get_assets():
    current_user = get_jwt_identity()
    assets = list(collection_assets.find({"user_id": current_user}))
    
    # Convert ObjectId to string
    for asset in assets:
        asset['_id'] = str(asset['_id'])
    
    return jsonify(assets)

Delete Asset (Lines 559-580):

@app.route('/api/assets/<asset_id>', methods=['DELETE'])
@jwt_required()
def delete_asset(asset_id):
    current_user = get_jwt_identity()

    # Reject malformed ids before querying MongoDB
    try:
        oid = ObjectId(asset_id)
    except Exception:
        return jsonify({"error": "Invalid asset id"}), 400

    # Verify ownership before deletion
    result = collection_assets.delete_one({
        "_id": oid,
        "user_id": current_user
    })

    if result.deleted_count == 0:
        return jsonify({"error": "Asset not found"}), 404

    return jsonify({"message": "Asset deleted successfully"})

Security Features

  • 🔐 JWT Authentication Required - All endpoints protected
  • 🍪 HTTP-only Cookies - XSS-resistant token storage
  • 👤 User Isolation - Users can only access their own assets
  • Ownership Verification - DELETE operations verify user_id
  • 🛡️ Input Validation - All data sanitized before storage

🔐 Authentication System

The application uses JWT (JSON Web Tokens) stored in HTTP-only cookies for secure authentication.

Authentication Flow

┌──────────────┐
│   Register   │
│   /signup    │
└──────┬───────┘
       │
       ▼
┌──────────────────────────┐
│  Email Verification OTP  │
│  /verify-email           │
└──────┬───────────────────┘
       │
       ▼
┌──────────────┐
│    Login     │
│   /login     │
└──────┬───────┘
       │
       ▼
┌──────────────────────────────────────┐
│  JWT Token Set in HTTP-only Cookie   │
│  Set-Cookie: access_token_cookie=... │
└──────┬───────────────────────────────┘
       │
       ▼
┌──────────────────────────────────────┐
│  All Requests Include Cookie         │
│  fetch(url, {credentials: 'include'}) │
└───────────────────────────────────────┘

Why HTTP-only Cookies?

| Method | Security | Auto-Management | XSS Protection |
|--------|----------|-----------------|----------------|
| localStorage | ❌ Low | ❌ Manual | ❌ Vulnerable |
| HTTP-only Cookie | ✅ High | ✅ Automatic | ✅ Protected |

Benefits:

  • Cannot be accessed by JavaScript (XSS protection)
  • Automatically sent with every request
  • Managed by browser
  • More secure than localStorage
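
On the backend, this behavior maps to a handful of Flask-JWT-Extended settings (the option names are real Flask-JWT-Extended configuration keys; the values shown are assumptions, not the project's exact configuration):

```python
# Sketch of cookie-based JWT settings; apply with app.config.update(...)
JWT_COOKIE_CONFIG = {
    'JWT_TOKEN_LOCATION': ['cookies'],                # read tokens from cookies, not headers
    'JWT_ACCESS_COOKIE_NAME': 'access_token_cookie',  # matches the Set-Cookie name above
    'JWT_COOKIE_SECURE': False,                       # set True in production (HTTPS only)
    'JWT_COOKIE_CSRF_PROTECT': True,                  # double-submit CSRF protection
}
```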

Frontend Authentication

All API calls must include credentials: 'include':

const response = await fetch(`${API_CONFIG.baseURL}/api/assets`, {
  method: 'GET',
  headers: {
    'Content-Type': 'application/json',
  },
  credentials: 'include'  // Critical for cookie authentication
});

Backend Authentication

Protected routes use @jwt_required() decorator:

from flask_jwt_extended import jwt_required, get_jwt_identity

@app.route('/api/protected', methods=['GET'])
@jwt_required()
def protected_route():
    current_user = get_jwt_identity()
    return jsonify({"user": current_user})

🔑 API Keys Setup

1. MXToolbox API Key

Purpose: DNS and mail server diagnostics, checking email security configurations

Steps to obtain:

  1. Visit https://mxtoolbox.com/api/
  2. Click "Sign Up" or "Get API Key"
  3. Choose a plan (Free tier available with 100 requests/month)
  4. Complete registration
  5. Navigate to API dashboard to get your API key
  6. Copy the API key to your .env file
MXTOOLBOX_API_KEY=your_mxtoolbox_api_key_here

2. DNSDumpster API Key

Purpose: Comprehensive DNS reconnaissance and subdomain discovery

Steps to obtain:

  1. Visit https://dnsdumpster.com/
  2. Create an account or sign in
  3. Navigate to API settings
  4. Generate a new API key
  5. Copy to your .env file
DNSDUMPSTER_API_KEY=your_dnsdumpster_api_key_here

3. Cohere API Key

Purpose: AI-powered vulnerability analysis, CVE lookup, and intelligent threat assessment

Steps to obtain:

  1. Visit https://cohere.ai/
  2. Click "Get Started" or "Sign Up"
  3. Complete registration (can use Google/GitHub)
  4. Navigate to Dashboard → API Keys
  5. Click "Create API Key"
  6. Name your key (e.g., "Vulnerability Scanner")
  7. Copy the API key immediately (it won't be shown again)
  8. Add to your .env file
COHERE_API_KEY=your_cohere_api_key_here

Free Tier Details:

  • 100 API calls per minute
  • 1000 API calls per month
  • Perfect for testing and small-scale scans

4. Email (SMTP) Configuration

Purpose: Send verification emails, scan completion notifications, and alerts

Option A: Gmail (Recommended for testing)

  1. Enable 2-Factor Authentication on your Google account

  2. Generate an App Password: Google Account → Security → App passwords (requires 2FA enabled); copy the 16-character password Google generates

  3. Add to .env:

MAIL_SERVER=smtp.gmail.com
MAIL_PORT=587
MAIL_USE_TLS=True
MAIL_USERNAME=your_email@gmail.com
MAIL_PASSWORD=your_16_character_app_password

Option B: Other SMTP Providers

  • Outlook/Hotmail: smtp-mail.outlook.com:587
  • Yahoo: smtp.mail.yahoo.com:587
  • SendGrid: smtp.sendgrid.net:587
  • Mailgun: smtp.mailgun.org:587

5. OWASP ZAP API Key

Purpose: Enable programmatic control of ZAP vulnerability scanner

Steps to configure:

  1. Start OWASP ZAP application
  2. Go to Tools → Options → API
  3. Either:
    • Check "Enable API" and note the auto-generated key
    • Or set a custom API key
  4. Add to .env:
ZAP_ENABLED=true
ZAP_ADDRESS=localhost  # or 'zap' for Docker
ZAP_PORT=8080
ZAP_API_KEY=your_zap_api_key_here

Note: To disable ZAP scanning, set ZAP_ENABLED=false in your .env file


⚙️ Configuration

Environment Variables

You need to configure environment variables in three locations:

1. Root .env (for Docker deployment)

Location: ./.env (root of repository)

# ==========================================
# ROOT ENVIRONMENT FOR DOCKER-COMPOSE
# ==========================================

# Flask-Mail Configuration
MAIL_SERVER=smtp.gmail.com
MAIL_PORT=587
MAIL_USE_TLS=True
MAIL_USERNAME=your_email@gmail.com
MAIL_PASSWORD=your_app_password_here

# JWT Secret Key (generate a strong random string)
JWT_SECRET_KEY=your_super_secret_jwt_key_here_change_this

# MongoDB Connection (for Docker use 'mongo' as hostname)
MONGO_URI=mongodb://mongo:27017/subdomain_scanner

# Tools Directory (absolute path inside container)
TOOLS_DIR=/app/tools

# API Keys
MXTOOLBOX_API_KEY=your_mxtoolbox_api_key
DNSDUMPSTER_API_KEY=your_dnsdumpster_api_key
COHERE_API_KEY=your_cohere_api_key

# OWASP ZAP Configuration (optional)
ZAP_ENABLED=true
ZAP_ADDRESS=zap  # Docker service name
ZAP_PORT=8080
ZAP_API_KEY=your_zap_api_key

2. Backend .env (for local development)

Location: Backend/.env

# ==========================================
# BACKEND LOCAL DEVELOPMENT ENVIRONMENT
# ==========================================

# Flask-Mail Configuration
MAIL_SERVER=smtp.gmail.com
MAIL_PORT=587
MAIL_USE_TLS=True
MAIL_USERNAME=your_email@gmail.com
MAIL_PASSWORD=your_app_password_here

# JWT Secret Key
JWT_SECRET_KEY=your_super_secret_jwt_key_here_change_this

# MongoDB Connection (local installation)
MONGO_URI=mongodb://localhost:27017/subdomain_scanner

# Tools Directory (absolute path on your system)
TOOLS_DIR=C:/path/to/your/tools  # Windows
# TOOLS_DIR=/usr/local/bin         # Linux/macOS

# API Keys
MXTOOLBOX_API_KEY=your_mxtoolbox_api_key
DNSDUMPSTER_API_KEY=your_dnsdumpster_api_key
COHERE_API_KEY=your_cohere_api_key

# OWASP ZAP Configuration
ZAP_ENABLED=true
ZAP_ADDRESS=localhost
ZAP_PORT=8080
ZAP_API_KEY=your_zap_api_key

3. Frontend .env (for frontend configuration) NEW ✨

Location: Frontend/.env

The frontend now supports environment-based configuration for better security and flexibility.

# ==========================================
# FRONTEND ENVIRONMENT CONFIGURATION
# ==========================================

# Backend API URL
# For development: http://localhost:5000
# For production: https://your-production-api.com
VITE_API_URL=http://localhost:5000

# Mapbox Access Token (for IP geolocation maps)
# Get your token from: https://account.mapbox.com/access-tokens/
VITE_MAPBOX_TOKEN=your_mapbox_public_token_here

Setup Instructions:

  1. Copy the example file:

    cd Frontend
    cp .env.example .env
  2. Get Mapbox Token (Required for map features):

    • Visit https://account.mapbox.com/
    • Sign up or log in (free tier available)
    • Navigate to "Access Tokens"
    • Create a new public token or use default
    • Copy token to VITE_MAPBOX_TOKEN in .env
  3. Configure API URL:

    • For local development: Keep http://localhost:5000
    • For production: Update to your deployed backend URL

Note: After changing .env, restart the Vite dev server (npm run dev) for changes to take effect.

Configuration Variables Explained

| Variable | Description | Example | Required |
|---|---|---|---|
| MAIL_SERVER | SMTP server hostname | smtp.gmail.com | Yes |
| MAIL_PORT | SMTP server port | 587 | Yes |
| MAIL_USE_TLS | Enable TLS encryption | True | Yes |
| MAIL_USERNAME | Email address for sending | your_email@gmail.com | Yes |
| MAIL_PASSWORD | Email password/app password | abcd efgh ijkl mnop | Yes |
| JWT_SECRET_KEY | Secret for JWT tokens | Random string (32+ chars) | Yes |
| MONGO_URI | MongoDB connection string | mongodb://localhost:27017/db_name | Yes |
| TOOLS_DIR | Path to security tools | /usr/local/bin or C:/tools | Yes |
| MXTOOLBOX_API_KEY | MXToolbox API key | uuid-format-key | Yes |
| DNSDUMPSTER_API_KEY | DNSDumpster API key | hex-string | Yes |
| COHERE_API_KEY | Cohere AI API key | alphanumeric-key | Yes |
| ZAP_ENABLED | Enable/disable ZAP scanning | true or false | No |
| ZAP_ADDRESS | ZAP proxy address | localhost or zap | If ZAP enabled |
| ZAP_PORT | ZAP proxy port | 8080 | If ZAP enabled |
| ZAP_API_KEY | ZAP API authentication key | random-string | If ZAP enabled |

Frontend Variables NEW ✨

| Variable | Description | Example | Required |
|---|---|---|---|
| VITE_API_URL | Backend API endpoint URL | http://localhost:5000 | Yes |
| VITE_MAPBOX_TOKEN | Mapbox public access token | pk.eyJ1Ijo... | Yes (for maps) |

Generating Secure JWT Secret Key

Use one of these methods to generate a strong JWT secret:

# Python
python -c "import secrets; print(secrets.token_urlsafe(32))"

# OpenSSL
openssl rand -base64 32

# Node.js
node -e "console.log(require('crypto').randomBytes(32).toString('base64'))"

Security Best Practices

  1. Never commit .env files - They're already in .gitignore
  2. Use strong JWT secrets - Minimum 32 characters, random
  3. Use app passwords - For Gmail, generate app-specific passwords
  4. Rotate keys regularly - Change API keys and secrets periodically
  5. Limit API key permissions - Use least-privilege principle
  6. Use environment-specific configs - Different keys for dev/prod
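These practices can be enforced at startup rather than left to discipline. A stdlib-only sketch of a configuration check; the `check_env` helper and its rules are illustrative, not part of the backend:

```python
import os

# Variables the backend cannot run without (names from the table above).
REQUIRED_VARS = [
    "MAIL_SERVER", "MAIL_PORT", "MAIL_USERNAME", "MAIL_PASSWORD",
    "JWT_SECRET_KEY", "MONGO_URI", "TOOLS_DIR",
    "MXTOOLBOX_API_KEY", "DNSDUMPSTER_API_KEY", "COHERE_API_KEY",
]

def check_env(env=os.environ):
    """Return a list of problems; an empty list means the config looks sane."""
    problems = [f"missing: {name}" for name in REQUIRED_VARS if not env.get(name)]
    secret = env.get("JWT_SECRET_KEY", "")
    if secret and len(secret) < 32:
        problems.append("JWT_SECRET_KEY shorter than 32 characters")
    if "change_this" in secret:
        problems.append("JWT_SECRET_KEY still set to the placeholder value")
    return problems

if __name__ == "__main__":
    for p in check_env():
        print(f"[config] {p}")
```

Running a check like this once at boot catches a forgotten placeholder secret before the first scan, not after.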

🎯 Subfinder Pipeline Setup

The Subfinder pipeline is a modern, automated subdomain reconnaissance workflow that combines multiple ProjectDiscovery tools for comprehensive results. This section provides detailed setup instructions for first-time users.

What is the Subfinder Pipeline?

The Subfinder pipeline is an integrated workflow that performs:

  1. Subfinder - Passive subdomain discovery from 50+ sources
  2. DNSx - Fast DNS resolution and validation
  3. HTTPx - HTTP/HTTPS probing and technology detection
  4. Nuclei - Automated vulnerability scanning (optional)

Workflow:

Domain → Subfinder → DNSx → HTTPx → Nuclei → MongoDB
         (Discover)  (Resolve) (Probe) (Scan)  (Store)
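The workflow above amounts to chaining the tools through pipes. A small Python sketch that only constructs the shell pipeline (the `build_pipeline` helper is illustrative, not the backend's actual code; flags mirror the examples later in this README):

```python
import shlex

def build_pipeline(domain: str) -> str:
    """Assemble the discover → resolve → probe pipeline as one shell command."""
    stages = [
        ["subfinder", "-d", domain, "-silent"],          # discover subdomains
        ["dnsx", "-silent", "-a", "-resp"],              # resolve to IPs
        ["httpx", "-silent", "-title", "-status-code"],  # probe web servers
    ]
    # shlex.quote keeps a hostile "domain" from injecting extra commands
    return " | ".join(" ".join(shlex.quote(a) for a in stage) for stage in stages)

print(build_pipeline("example.com"))
# subfinder -d example.com -silent | dnsx -silent -a -resp | httpx -silent -title -status-code
```

Quoting the user-supplied domain matters because scan targets come from web input; an unquoted `;` would otherwise break out of the pipeline.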

Prerequisites for Subfinder Pipeline

Before installing the tools, ensure you have:

  • Go 1.19+ installed (Download Go)
  • Git for cloning repositories
  • Linux/macOS/WSL (recommended) or Windows with PowerShell
  • Administrator/sudo access for system-wide installation

Step 1: Install Go (if not installed)

Linux:

# Download and install Go
wget https://go.dev/dl/go1.21.5.linux-amd64.tar.gz
sudo tar -C /usr/local -xzf go1.21.5.linux-amd64.tar.gz

# Add to PATH (add to ~/.bashrc or ~/.zshrc)
export PATH=$PATH:/usr/local/go/bin
export GOPATH=$HOME/go
export PATH=$PATH:$GOPATH/bin

# Reload shell
source ~/.bashrc  # or source ~/.zshrc

macOS:

# Using Homebrew
brew install go

# Or download from https://golang.org/dl/

Windows:

# Download installer from https://golang.org/dl/
# Run the .msi installer
# Go will be automatically added to PATH

# Verify installation
go version

Verify Go installation:

go version
# Should output: go version go1.21.5 linux/amd64 (or similar)

Step 2: Install ProjectDiscovery Tools

🚀 Quick Installation for Docker/Linux Users: We provide an automated script that downloads all required tools! Skip to Method 3: Automated Script if you're using Docker or Linux.

Method 1: Using Go (Recommended for Development)

This method installs the latest versions and is best for local development.

A. Install Subfinder
# Install latest version
go install -v github.com/projectdiscovery/subfinder/v2/cmd/subfinder@latest

# Verify installation
subfinder -version
B. Install DNSx
# Install latest version
go install -v github.com/projectdiscovery/dnsx/cmd/dnsx@latest

# Verify installation
dnsx -version
C. Install HTTPx
# Install latest version
go install -v github.com/projectdiscovery/httpx/cmd/httpx@latest

# Verify installation
httpx -version
D. Install Nuclei
# Install latest version
go install -v github.com/projectdiscovery/nuclei/v3/cmd/nuclei@latest

# Verify installation
nuclei -version

# Update nuclei templates (important!)
nuclei -update-templates

Method 2: Download Individual Binaries

Download pre-compiled binaries manually if you don't have Go installed.

A. Install Subfinder


# Linux
wget https://github.com/projectdiscovery/subfinder/releases/latest/download/subfinder_2.6.3_linux_amd64.zip
unzip subfinder_2.6.3_linux_amd64.zip
sudo mv subfinder /usr/local/bin/
sudo chmod +x /usr/local/bin/subfinder

# macOS
wget https://github.com/projectdiscovery/subfinder/releases/latest/download/subfinder_2.6.3_macOS_amd64.zip
unzip subfinder_2.6.3_macOS_amd64.zip
sudo mv subfinder /usr/local/bin/
sudo chmod +x /usr/local/bin/subfinder

Configure Subfinder API Keys (Optional but Recommended):

Subfinder works better with API keys for passive sources:

# Create config directory
mkdir -p ~/.config/subfinder

# Create provider-config.yaml
nano ~/.config/subfinder/provider-config.yaml

Add API keys (get free keys from respective providers):

binaryedge:
  - your_binaryedge_api_key_here
censys:
  - your_censys_api_id:your_censys_api_secret
shodan:
  - your_shodan_api_key_here
github:
  - ghp_your_github_personal_access_token_here
virustotal:
  - your_virustotal_api_key_here

Test Subfinder:

subfinder -d example.com -silent
# Should return subdomains of example.com

B. Install DNSx


# Linux
wget https://github.com/projectdiscovery/dnsx/releases/latest/download/dnsx_1.1.6_linux_amd64.zip
unzip dnsx_1.1.6_linux_amd64.zip
sudo mv dnsx /usr/local/bin/
sudo chmod +x /usr/local/bin/dnsx

# macOS
wget https://github.com/projectdiscovery/dnsx/releases/latest/download/dnsx_1.1.6_macOS_amd64.zip
unzip dnsx_1.1.6_macOS_amd64.zip
sudo mv dnsx /usr/local/bin/
sudo chmod +x /usr/local/bin/dnsx

Test DNSx:

echo "example.com" | dnsx -silent
# Should resolve example.com and show IP addresses
C. Install HTTPx
# Linux
wget https://github.com/projectdiscovery/httpx/releases/latest/download/httpx_1.3.7_linux_amd64.zip
unzip httpx_1.3.7_linux_amd64.zip
sudo mv httpx /usr/local/bin/
sudo chmod +x /usr/local/bin/httpx

# macOS
wget https://github.com/projectdiscovery/httpx/releases/latest/download/httpx_1.3.7_macOS_amd64.zip
unzip httpx_1.3.7_macOS_amd64.zip
sudo mv httpx /usr/local/bin/
sudo chmod +x /usr/local/bin/httpx

Test HTTPx:

echo "https://example.com" | httpx -silent
# Should probe example.com and return HTTP details
D. Install Nuclei (Optional but Recommended)
# Linux
wget https://github.com/projectdiscovery/nuclei/releases/latest/download/nuclei_3.1.5_linux_amd64.zip
unzip nuclei_3.1.5_linux_amd64.zip
sudo mv nuclei /usr/local/bin/
sudo chmod +x /usr/local/bin/nuclei

# Update templates
nuclei -update-templates

# macOS
wget https://github.com/projectdiscovery/nuclei/releases/latest/download/nuclei_3.1.5_macOS_amd64.zip
unzip nuclei_3.1.5_macOS_amd64.zip
sudo mv nuclei /usr/local/bin/
sudo chmod +x /usr/local/bin/nuclei
nuclei -update-templates

Test Nuclei:

echo "https://example.com" | nuclei -silent
# Should scan example.com with nuclei templates

Method 3: Automated Script (EASIEST for Docker/Linux)

🎯 Best for: Docker deployments, Linux servers, first-time setup

We provide a bash script that automatically downloads all ProjectDiscovery tools (DNSx, HTTPx, Nuclei) as Linux binaries. This is the fastest and easiest method!

For Docker Users:

The script is automatically included in your Docker container. Tools are downloaded when you build the container.

No action needed! Just run:

docker-compose up --build

The Dockerfile will execute download_tools.sh during build, and all tools will be available at /app/tools/.

For Linux/WSL Users (Manual Installation):

If you're not using Docker, you can run the script manually:

Step 1: Navigate to Backend directory

cd Vulnerability_Scanner/Backend

Step 2: Make script executable

chmod +x download_tools.sh

Step 3: Create tools directory

mkdir -p tools

Step 4: Run the download script

./download_tools.sh

What the script does:

  1. ✅ Downloads DNSx (latest stable version)
  2. ✅ Downloads HTTPx (latest stable version)
  3. ✅ Downloads Nuclei (latest stable version)
  4. ✅ Extracts binaries to Backend/tools/ directory
  5. ✅ Sets executable permissions automatically
  6. ✅ Verifies installation

Script Output:

Downloading Linux binaries for scanning tools...
Downloading dnsx...
✓ dnsx installed
Downloading httpx...
✓ httpx installed
Downloading nuclei...
✓ nuclei installed

All tools installed successfully!
-rwxr-xr-x 1 root root 12M Jan 12 14:30 dnsx
-rwxr-xr-x 1 root root 15M Jan 12 14:30 httpx
-rwxr-xr-x 1 root root 45M Jan 12 14:30 nuclei

Step 5: Verify tools are working

# Test DNSx
./tools/dnsx -version

# Test HTTPx
./tools/httpx -version

# Test Nuclei
./tools/nuclei -version

# Update Nuclei templates
./tools/nuclei -update-templates

Step 6: Add to Backend/.env

# Add this line to your .env file
# Inside Docker the tools live at /app/tools; for a manual run, point
# TOOLS_DIR at the absolute path of the Backend/tools directory instead:
echo "TOOLS_DIR=$(pwd)/tools" >> .env
Script Contents (Backend/download_tools.sh):
#!/bin/bash
# Download Linux binaries for ProjectDiscovery tools

echo "Downloading Linux binaries for scanning tools..."

cd /app/tools || exit 1

# Download dnsx
echo "Downloading dnsx..."
curl -L https://github.com/projectdiscovery/dnsx/releases/download/v1.2.3/dnsx_1.2.3_linux_amd64.zip -o dnsx.zip
unzip -o dnsx.zip
chmod +x dnsx
rm dnsx.zip
echo "✓ dnsx installed"

# Download httpx  
echo "Downloading httpx..."
curl -L https://github.com/projectdiscovery/httpx/releases/download/v1.3.7/httpx_1.3.7_linux_amd64.zip -o httpx.zip
unzip -o httpx.zip
chmod +x httpx
rm httpx.zip
echo "✓ httpx installed"

# Download nuclei
echo "Downloading nuclei..."
curl -L https://github.com/projectdiscovery/nuclei/releases/download/v3.1.5/nuclei_3.1.5_linux_amd64.zip -o nuclei.zip
unzip -o nuclei.zip
chmod +x nuclei
rm nuclei.zip
echo "✓ nuclei installed"

echo ""
echo "All tools installed successfully!"
ls -lh /app/tools/
Troubleshooting the Script:

Problem: "Permission denied" when running script

# Solution: Make it executable
chmod +x download_tools.sh
./download_tools.sh

Problem: "curl: command not found"

# Solution: Install curl
# Ubuntu/Debian:
sudo apt update && sudo apt install curl unzip

# CentOS/RHEL:
sudo yum install curl unzip

Problem: "tools directory not found"

# Solution: Create the directory
mkdir -p Backend/tools
cd Backend
./download_tools.sh

Problem: "Failed to download"

# Solution: Check internet connection and try again
# Or download manually:
cd Backend/tools

# Download DNSx manually
wget https://github.com/projectdiscovery/dnsx/releases/download/v1.2.3/dnsx_1.2.3_linux_amd64.zip
unzip dnsx_1.2.3_linux_amd64.zip
chmod +x dnsx

# Repeat for httpx and nuclei
Updating Tools (Using Script):

To update to the latest versions, simply edit the script to change version numbers and re-run:

  1. Open download_tools.sh
  2. Update version numbers in URLs (check GitHub releases)
  3. Run: ./download_tools.sh
  4. Tools will be re-downloaded with latest versions

Comparison of Installation Methods

| Method | Best For | Pros | Cons | Time |
|---|---|---|---|---|
| Method 1: Go Install | Local development, frequent updates | Latest versions, easy updates via go install | Requires Go, slower first install | ~5 min |
| Method 2: Manual Binary | Specific version control, no Go | No Go required, specific versions | Manual updates, platform-specific | ~10 min |
| Method 3: Automated Script | Docker, Linux servers, quick setup | Fastest, no Go needed, automated | Linux only, fixed versions in script | ~2 min |

Recommendation:

  • 🐳 Docker users: Method 3 (automatic during build)
  • 🐧 Linux/WSL users: Method 3 (run script manually)
  • 💻 Local development: Method 1 (go install for easy updates)
  • 🍎 macOS users: Method 1 or 2 (script is Linux-only)

Note: Subfinder still needs to be installed separately using Method 1 or 2 above, as it's not included in the automated script.


Installation Summary

After using any of the above methods, you should have:

  • Subfinder - For subdomain discovery
  • DNSx - For DNS resolution
  • HTTPx - For HTTP probing
  • Nuclei - For vulnerability scanning

Quick verification:

# Check all tools are installed
which subfinder && which dnsx && which httpx && which nuclei

# Or if using script method:
ls Backend/tools/
# Should show: dnsx, httpx, nuclei

# Test each tool
subfinder -version
dnsx -version
httpx -version
nuclei -version

Step 3: Verify Installation

Run this comprehensive test to ensure everything works:

# Test Subfinder
echo "Testing Subfinder..."
subfinder -d example.com -silent | head -n 5

# Test DNSx
echo "Testing DNSx..."
echo "example.com" | dnsx -silent -a -resp

# Test HTTPx
echo "Testing HTTPx..."
echo "https://example.com" | httpx -silent -title -tech-detect

# Test Nuclei (if installed)
echo "Testing Nuclei..."
echo "https://example.com" | nuclei -silent -tags cve

# Test full pipeline
echo "Testing Full Pipeline..."
subfinder -d example.com -silent | dnsx -silent -a | httpx -silent -title

If all commands work without errors, your setup is complete!


Step 4: Configure Backend for Subfinder

The application expects tools to be in your PATH or in a specified TOOLS_DIR.

Option 1: System PATH (Recommended)

Tools are already in PATH if installed via go install:

which subfinder
which dnsx
which httpx
which nuclei

Option 2: Custom Tools Directory

If you prefer a custom directory:

# Create tools directory
mkdir -p ~/security-tools/bin

# Move or symlink tools
ln -s $(which subfinder) ~/security-tools/bin/
ln -s $(which dnsx) ~/security-tools/bin/
ln -s $(which httpx) ~/security-tools/bin/
ln -s $(which nuclei) ~/security-tools/bin/

# Add to .env file
echo 'TOOLS_DIR=/home/yourusername/security-tools/bin' >> Backend/.env


Step 5: Configure Backend Environment Variables

Ensure your Backend/.env file has the correct configuration:

# Tools Directory (if not in PATH)
TOOLS_DIR=/usr/local/bin

# Or for custom directory
# TOOLS_DIR=/home/yourusername/security-tools/bin

# Subfinder Config (optional - uncomment if you created provider-config.yaml)
# SUBFINDER_CONFIG=/home/yourusername/.config/subfinder/provider-config.yaml

How to Use Subfinder Pipeline

Via Web Interface (Easiest Method):

  1. Navigate to the scanner: http://localhost:3000
  2. Login or create an account
  3. Select "Subfinder" from the tool dropdown menu
  4. Enter target domain: e.g., example.com
  5. Click "Start Scan"
  6. Watch real-time results stream in the terminal widget
  7. View results in the dashboard after scan completes

Via Backend API:

# Get your JWT token first by logging in

# Start Subfinder scan
curl -X GET "http://localhost:5000/rescan/stream_subfinder_dnsx_httpx?domain=example.com" \
  -H "Cookie: access_token_cookie=YOUR_JWT_TOKEN" \
  --no-buffer

Via Command Line (Direct - For Testing):

# Full pipeline manually
subfinder -d example.com -silent | \
  dnsx -silent -a -resp | \
  httpx -silent -title -tech-detect -status-code | \
  tee results.txt

Troubleshooting Subfinder Pipeline

Problem: "subfinder: command not found"

Solution:

# Check if Go bin is in PATH
echo $PATH | grep go

# If not, add to ~/.bashrc or ~/.zshrc:
export PATH=$PATH:$HOME/go/bin

# Reload shell
source ~/.bashrc

# Or reinstall
go install -v github.com/projectdiscovery/subfinder/v2/cmd/subfinder@latest

Problem: "Permission denied"

Solution:

# Make tools executable
chmod +x $(which subfinder)
chmod +x $(which dnsx)
chmod +x $(which httpx)
chmod +x $(which nuclei)

Problem: "No results from subfinder"

Possible Causes & Solutions:

  • No API keys configured: Add keys to ~/.config/subfinder/provider-config.yaml
  • Internet connection issues: Check your network connection
  • Domain has few subdomains: Some domains genuinely have very few subdomains
  • Rate limiting: Try again after some time

Debug Mode:

# Run with verbose output to see what's happening
subfinder -d example.com -v

Problem: "HTTPx timeout errors"

Solution:

# Increase timeout and retries
httpx -silent -timeout 10 -retries 2 -rate-limit 50

# Or edit Backend/utils/subfinder_runner.py to add these flags

Problem: "Nuclei templates outdated or missing"

Solution:

# Update templates regularly
nuclei -update-templates

# -ut is shorthand for -update-templates
nuclei -ut

# Check template directory
ls ~/.nuclei-templates/

Problem: "Pipeline error" or "Tool not found" in web interface

Solution:

  1. Check tool installation:

    which subfinder && which dnsx && which httpx
  2. Check Backend logs:

    docker logs flask-backend
    # Or if running manually, check terminal output
  3. Verify .env configuration:

    cat Backend/.env | grep TOOLS_DIR
  4. Restart backend:

    docker-compose restart backend
    # Or Ctrl+C and restart python main.py

Advanced Pipeline Configuration

Custom Subfinder Command

Edit Backend/utils/subfinder_runner.py to customize subfinder behavior:

# Example customizations:
cmd = [
    'subfinder',
    '-d', domain,
    '-all',  # Use all sources (slower but more comprehensive)
    '-recursive',  # Find recursive subdomains
    '-timeout', '30',  # Increase timeout
    '-o', output_file
]

Custom DNSx Options

# In Backend/utils/subfinder_runner.py
dnsx_cmd = [
    'dnsx',
    '-silent',
    '-a',  # Get A records
    '-aaaa',  # Get AAAA records
    '-cname',  # Get CNAME records
    '-mx',  # Get MX records
    '-resp',  # Show response
    '-retry', '3',
    '-rate-limit', '100'  # Limit DNS queries per second
]

Custom HTTPx Options

# In Backend/utils/subfinder_runner.py
httpx_cmd = [
    'httpx',
    '-silent',
    '-title',  # Get page title
    '-tech-detect',  # Detect technologies
    '-status-code',  # Show HTTP status
    '-content-length',  # Show content length
    '-follow-redirects',  # Follow redirects
    '-timeout', '10',
    '-threads', '50'  # Parallel threads
]

Performance Optimization

For Large Scans (1000+ subdomains):

# Use rate limiting to avoid getting blocked
subfinder -d example.com -rate-limit 50

# Use multiple threads for HTTPx
httpx -threads 50 -rate-limit 100

# Use custom resolvers for DNSx
dnsx -resolver 8.8.8.8,1.1.1.1,8.8.4.4 -rate-limit 100

For Fast Scans (Quick Overview):

# Disable some checks
httpx -silent -title -status-code
# (Skip tech detection for speed)

# Use fewer nuclei templates and skip low-severity findings
nuclei -tags cve,exposure -exclude-severity low,info

What Each Tool Does

Subfinder

  • Searches 50+ passive sources for subdomains
  • Sources include:
    • Certificate Transparency logs (crt.sh, Censys)
    • DNS aggregators (VirusTotal, SecurityTrails, AlienVault)
    • Search engines (Google, Bing, Yahoo)
    • APIs (Shodan, GitHub, etc.)
  • Output: List of potential subdomains

DNSx

  • Validates subdomains by resolving DNS
  • Filters out:
    • Non-existent domains
    • Dead/inactive domains
    • Wildcard responses
  • Output: Live subdomains with IP addresses

HTTPx

  • Probes for web servers on HTTP/HTTPS
  • Detects:
    • Live web services
    • HTTP vs HTTPS
    • Technologies used (frameworks, CMS, libraries)
    • Server headers
    • Page titles
  • Output: Web-enabled subdomains with metadata

Nuclei (Optional)

  • Scans for vulnerabilities using 7000+ templates
  • Checks for:
    • Known CVEs
    • Misconfigurations
    • Exposed services (databases, admin panels)
    • Missing security headers
  • Output: Security findings with severity ratings
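HTTPx can also emit JSON Lines (`httpx -json`), which is easier to post-process than the plain output shown elsewhere in this README. A hedged sketch of reducing one record to the fields this project stores; exact field names vary between httpx releases, so treat `status_code`, `title`, and `tech` as assumptions:

```python
import json

# One abbreviated line of `httpx -json` output (field names illustrative).
sample = ('{"url": "https://api.example.com", "status_code": 200, '
          '"title": "API Gateway", "tech": ["Nginx", "Node.js"]}')

def parse_httpx_line(line: str) -> dict:
    """Reduce one httpx JSONL record to the fields stored in MongoDB."""
    rec = json.loads(line)
    return {
        "subdomain": rec.get("url", "").removeprefix("https://").removeprefix("http://"),
        "http_status": rec.get("status_code"),
        "http_title": rec.get("title"),
        "technologies": rec.get("tech", []),
    }

print(parse_httpx_line(sample))
```

Using `.get()` with defaults keeps the parser from crashing on records where httpx omitted a field (e.g. no title on a 4xx page).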

File Locations Reference

| Component | Config Location | Description |
|---|---|---|
| Subfinder | ~/.config/subfinder/provider-config.yaml | API keys for data sources |
| Nuclei | ~/.nuclei-templates/ | Vulnerability templates |
| DNSx | No config needed | Uses system DNS by default |
| HTTPx | No config needed | Uses default options |
| Backend Code | Backend/utils/subfinder_runner.py | Pipeline orchestration |
| Backend Config | Backend/.env | Tool paths and settings |

Pipeline Flow Diagram

┌─────────────────────────────────────────────────────────────────┐
│                       User Input                                │
│                    (example.com)                                │
└──────────────────────────┬──────────────────────────────────────┘
                           │
                           ▼
┌─────────────────────────────────────────────────────────────────┐
│  Step 1: Subfinder - Passive Subdomain Discovery                │
│  • Queries 50+ sources (crt.sh, VirusTotal, Shodan, etc.)      │
│  • Output: api.example.com, blog.example.com, dev.example.com  │
└──────────────────────────┬──────────────────────────────────────┘
                           │
                           ▼
┌─────────────────────────────────────────────────────────────────┐
│  Step 2: DNSx - DNS Resolution & Validation                     │
│  • Resolves each subdomain                                      │
│  • Filters out dead/invalid entries                             │
│  • Output: api.example.com → 192.168.1.10                      │
│            blog.example.com → 192.168.1.20                      │
└──────────────────────────┬──────────────────────────────────────┘
                           │
                           ▼
┌─────────────────────────────────────────────────────────────────┐
│  Step 3: HTTPx - HTTP/HTTPS Probing                            │
│  • Tests for web servers on each subdomain                      │
│  • Detects technologies (WordPress, React, Nginx, etc.)         │
│  • Gets titles, status codes, headers                           │
│  • Output: api.example.com [200] [Nginx, Node.js]              │
└──────────────────────────┬──────────────────────────────────────┘
                           │
                           ▼
┌─────────────────────────────────────────────────────────────────┐
│  Step 4: Nuclei - Vulnerability Scanning (Optional)             │
│  • Runs 7000+ security templates                                │
│  • Checks for CVEs, misconfigurations, exposures               │
│  • Output: [CVE-2023-1234] Detected on api.example.com         │
└──────────────────────────┬──────────────────────────────────────┘
                           │
                           ▼
┌─────────────────────────────────────────────────────────────────┐
│  Step 5: MongoDB Storage                                        │
│  • Structured JSON storage                                      │
│  • Searchable and filterable                                    │
│  • Historical tracking                                          │
│  • PDF report generation                                        │
└─────────────────────────────────────────────────────────────────┘

Example Output

Subfinder Output:

api.example.com
blog.example.com
dev.example.com
admin.example.com
mail.example.com

DNSx Output:

api.example.com [192.168.1.10]
blog.example.com [192.168.1.20]
dev.example.com [192.168.1.30]

HTTPx Output:

https://api.example.com [200] [API Gateway] [Nginx, Node.js]
https://blog.example.com [200] [Example Blog] [WordPress, PHP]
https://dev.example.com [401] [Dev Server] [React, Nginx]

Final MongoDB Document:

{
  "subdomain": "api.example.com",
  "ip": "192.168.1.10",
  "http_status": 200,
  "http_title": "API Gateway",
  "technologies": ["Nginx", "Node.js", "Express"],
  "ssl": true,
  "ssl_issuer": "Let's Encrypt",
  "ports": [80, 443],
  "timestamp": "2026-01-12T19:54:00Z",
  "nuclei_findings": [
    {
      "template": "CVE-2023-1234",
      "severity": "high",
      "description": "Example vulnerability found"
    }
  ]
}
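Documents shaped like the one above are easy to filter before reporting, e.g. to surface only serious nuclei hits. A stdlib sketch (the `findings_at_or_above` helper is illustrative, not part of the backend):

```python
# Severity ranking used by nuclei, lowest to highest.
SEVERITY_ORDER = {"info": 0, "low": 1, "medium": 2, "high": 3, "critical": 4}

def findings_at_or_above(doc: dict, floor: str = "high") -> list[dict]:
    """Pull nuclei findings at or above a severity floor from a scan document."""
    threshold = SEVERITY_ORDER[floor]
    return [
        f for f in doc.get("nuclei_findings", [])
        if SEVERITY_ORDER.get(f.get("severity", "info"), 0) >= threshold
    ]

doc = {
    "subdomain": "api.example.com",
    "nuclei_findings": [
        {"template": "CVE-2023-1234", "severity": "high"},
        {"template": "missing-sec-headers", "severity": "info"},
    ],
}
print(findings_at_or_above(doc))  # only the high-severity CVE survives
```

The same ranking can drive MongoDB queries directly once severities are stored as numbers rather than strings.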

🛡️ OWASP ZAP Setup

OWASP ZAP (Zed Attack Proxy) is a powerful vulnerability scanner integrated into this platform.

Installation Options

Option 1: Docker (Recommended)

Add ZAP service to your docker-compose.yml:

services:
  zap:
    image: ghcr.io/zaproxy/zaproxy:stable  # the old owasp/zap2docker-stable image is deprecated
    command: zap.sh -daemon -host 0.0.0.0 -port 8080 -config api.key=your_api_key
    ports:
      - "8080:8080"
    networks:
      - scanner-network

Then:

docker-compose up -d zap

Option 2: Manual Installation

Windows:

  1. Download installer from https://www.zaproxy.org/download/
  2. Run the installer
  3. Add ZAP to system PATH
  4. Launch ZAP: zap.bat -daemon -host localhost -port 8080

Linux:

# Using apt (Debian/Ubuntu)
sudo apt update
sudo apt install zaproxy

# Using snap
sudo snap install zaproxy --classic

# From source
wget https://github.com/zaproxy/zaproxy/releases/download/v2.14.0/ZAP_2.14.0_Linux.tar.gz
tar -xvf ZAP_2.14.0_Linux.tar.gz
cd ZAP_2.14.0
./zap.sh -daemon -host localhost -port 8080

macOS:

# Using Homebrew
brew install --cask owasp-zap

# Or download from website
# https://www.zaproxy.org/download/

ZAP Configuration

Starting ZAP in Daemon Mode

For automated scanning, start ZAP in headless daemon mode:

# Windows
zap.bat -daemon -host localhost -port 8080 -config api.key=your_api_key

# Linux/macOS
zap.sh -daemon -host localhost -port 8080 -config api.key=your_api_key

Configuring ZAP API

  1. Launch ZAP GUI (for initial setup)

    zap.sh  # or zap.bat on Windows
  2. Enable API:

    • Go to Tools → Options → API
    • Check "Enable API"
    • Set API Key or use auto-generated one
    • Note: Save the key immediately!
  3. Configure Network Settings:

    • Go to Tools → Options → Local Proxies
    • Ensure Address: localhost (or 0.0.0.0 for Docker)
    • Port: 8080 (default)
  4. Security Settings:

    • Go to Tools → Options → API
    • Uncheck "Disable the API key" (keep it enabled for security)
    • Add permitted addresses if needed

ZAP Scan Policies

The scanner uses custom policies for different scan types:

  • Spider Scan: Crawls target to map all pages
  • Active Scan: Tests for vulnerabilities (can be intrusive)
  • Passive Scan: Analyzes traffic without active probing

Configure scan intensity in ZAP:

Tools → Options → Active Scan → Policy

Testing ZAP Connection

Verify ZAP is running and accessible:

# Using curl
curl "http://localhost:8080/JSON/core/view/version/?apikey=your_api_key"

# Using Python
python -c "import requests; print(requests.get('http://localhost:8080/JSON/core/view/version/?apikey=your_api_key').json())"

Expected response:

{
  "version": "2.14.0"
}

Troubleshooting ZAP

Problem: ZAP not starting

  • Solution: Check if port 8080 is already in use
    # Windows
    netstat -ano | findstr :8080
    
    # Linux/macOS
    lsof -i :8080

Problem: API key errors

  • Solution: Ensure API key in .env matches ZAP configuration
  • Verify ZAP API is enabled in settings

Problem: Connection refused

  • Solution: Check ZAP is running in daemon mode
  • Verify firewall isn't blocking port 8080

Problem: Scans taking too long

  • Solution: Adjust scan policy to "Low" intensity
  • Reduce thread count in Active Scan settings
  • Use smaller scope for testing

Disabling ZAP

If you don't want to use ZAP scanning:

# In your .env file
ZAP_ENABLED=false

The scanner will skip all ZAP-related vulnerability assessments.
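Internally, a toggle like this can be read with a small helper (a sketch; `zap_enabled` is an illustrative name, only the `ZAP_ENABLED` variable comes from the .env above):

```python
import os

def zap_enabled() -> bool:
    """Treat anything except an explicit false value as 'enabled'."""
    return os.getenv("ZAP_ENABLED", "true").strip().lower() not in ("false", "0", "no")
```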


💻 Running the Application

Using Docker (Production-Ready)

# Start all services
docker-compose up -d

# View logs
docker-compose logs -f

# Stop all services
docker-compose down

# Rebuild after code changes
docker-compose up --build

Manual Execution (Development)

You need 5 terminal windows:

Terminal 1 - MongoDB:

mongod --dbpath=/path/to/data/db
# Or if running as service: sudo systemctl start mongod

Terminal 2 - Redis:

redis-server
# Or if running as service: sudo systemctl start redis

Terminal 3 - Backend:

cd Backend
source venv/bin/activate  # or venv\Scripts\activate on Windows
python main.py

Terminal 4 - Celery Worker:

cd Backend
source venv/bin/activate
celery -A celery_worker.celery_app worker --loglevel=info --pool=solo

Terminal 5 - Frontend:

cd Frontend
npm run dev

Optional - Celery Beat (for scheduled scans):

cd Backend
source venv/bin/activate
celery -A celery_worker.celery_app beat --loglevel=info

Optional - OWASP ZAP:

zap.sh -daemon -host localhost -port 8080 -config api.key=your_api_key

Access Points

  • Frontend: http://localhost:3000
  • Backend API: http://localhost:5000
  • API Documentation: http://localhost:5000/api/docs
  • OWASP ZAP (if enabled): http://localhost:8080


🎮 Features in Detail

1. User Authentication & Registration

📝 Registration Requirements

To successfully register on the platform, you need to provide:

Mandatory Fields:

  • Email - Valid email address (used for login and OTP verification)
  • Password - Secure password (minimum 8 characters recommended)
  • Confirm Password - Must match the password field

Optional Fields (for advanced features - currently commented out in code):

  • 🔑 Shodan API Key - For enhanced network scanning capabilities
  • 🔑 FOFA API Key - For additional reconnaissance data
  • 📁 Subdomains File - Custom subdomain wordlist (optional upload)
  • 📁 Endpoints File - Known endpoints for fuzzing (optional upload)
  • 📁 IPs File - Target IP addresses (optional upload)
  • 📁 Naming Rules File - Custom naming conventions (optional upload)

Note: Currently, only email and password are required for registration. The optional API keys and file uploads are prepared for future features but not yet active in the registration flow.

🔐 Registration Flow

Registration Flow:

  1. Navigate to http://localhost:3000/register
  2. Enter required fields:
    • Valid email address
    • Strong password
    • Confirm password (must match)
  3. Click "Register" button
  4. System automatically generates:
    • organization field (extracted from email domain, e.g., @gmail.com → gmail.com)
    • name field (extracted from email prefix, e.g., user@domain.com → user)
  5. Email verification:
    • Check your email for 6-digit OTP (One-Time Password)
    • OTP expires in 10 minutes
    • If SMTP not configured, auto-verification happens (check logs)
  6. Enter OTP on verification page
  7. Account activated! You can now login
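The auto-generation in step 4 amounts to splitting the email address. A minimal sketch (`derive_registration_fields` is an illustrative name, not the project's actual function):

```python
def derive_registration_fields(email: str) -> dict:
    """Derive the auto-generated profile fields from a registration email.

    user@gmail.com -> name: "user", organization: "gmail.com"
    """
    local, _, domain = email.partition("@")
    return {"name": local, "organization": domain}
```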

Important Notes:

  • ⚠️ Email must be unique - Cannot register with an already registered email
  • 🔒 Password is hashed - Stored securely using bcrypt
  • 📧 Email verification required - Account won't be fully active until OTP verification
  • OTP expires in 10 minutes - Request a new OTP if expired
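The OTP lifecycle described above can be sketched with the standard library (function names are illustrative; the project's actual implementation may differ):

```python
import secrets
from datetime import datetime, timedelta, timezone

OTP_TTL = timedelta(minutes=10)  # matches the 10-minute expiry above

def generate_otp():
    """Return a 6-digit one-time password and its expiry timestamp."""
    code = f"{secrets.randbelow(10**6):06d}"
    return code, datetime.now(timezone.utc) + OTP_TTL

def otp_valid(submitted: str, code: str, expires_at) -> bool:
    """Constant-time comparison plus expiry check."""
    return secrets.compare_digest(submitted, code) and datetime.now(timezone.utc) < expires_at
```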

Login:

  1. Go to http://localhost:3000/login
  2. Enter credentials
  3. Access dashboard

Password Reset:

  • Click "Forgot Password" on login page
  • Enter email
  • Check email for reset link
  • Set new password

2. Subdomain Scanning

Using Knockpy Scanner:

  1. Log in to dashboard
  2. Click "New Scan" → "Knockpy Scan"
  3. Enter target domain (e.g., example.com)
  4. Click "Start Scan"
  5. Monitor real-time progress
  6. View results when complete

Using Subfinder Scanner:

  1. Navigate to "Subfinder Scan"
  2. Enter target domain
  3. Configure scan options:
    • Passive only (fast, safe)
    • Include active enumeration (slower, more results)
    • Enable certificate transparency lookup
  4. Start scan
  5. Download results as JSON/PDF

Scan Results Include:

  • All discovered subdomains
  • IP addresses and geolocation
  • HTTP status codes
  • Open ports and services
  • Technology stack
  • SSL/TLS certificates
  • DNS records
  • Security headers
  • Vulnerability findings (if ZAP enabled)

3. Vulnerability Assessment

Automated ZAP Scanning: When ZAP is enabled, each subdomain automatically goes through:

  1. Spider Scan - Crawls all pages and resources
  2. Passive Scan - Analyzes traffic for vulnerabilities
  3. Active Scan - Tests for security issues (optional)
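The spider → passive → active sequence above can be driven through the ZAP API. A minimal sketch, assuming the python-owasp-zap-v2.4 client (`zapv2` package); `run_zap_scan` is an illustrative helper, not the project's actual integration code:

```python
import time

def run_zap_scan(zap, target, active=False, poll_interval=2):
    """Drive ZAP's spider -> passive -> (optional) active scan sequence.

    `zap` is any object exposing the zapv2-style API (zap.spider,
    zap.pscan, zap.ascan, zap.core); pass a ZAPv2 instance in practice.
    """
    scan_id = zap.spider.scan(target)
    while int(zap.spider.status(scan_id)) < 100:   # spider crawls the site
        time.sleep(poll_interval)
    while int(zap.pscan.records_to_scan) > 0:      # passive scan drains its queue
        time.sleep(poll_interval)
    if active:                                     # active scan can be intrusive
        scan_id = zap.ascan.scan(target)
        while int(zap.ascan.status(scan_id)) < 100:
            time.sleep(poll_interval)
    return zap.core.alerts(baseurl=target)
```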

Results categorized by:

  • 🔴 Critical (CVSS 9.0-10.0)
  • 🟠 High (CVSS 7.0-8.9)
  • 🟡 Medium (CVSS 4.0-6.9)
  • 🔵 Low (CVSS 0.1-3.9)
  • ⚪ Informational
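These buckets follow the standard CVSS v3.x rating scale and can be expressed as a small mapping function (illustrative, not the project's code):

```python
def cvss_severity(score: float) -> str:
    """Map a CVSS v3 base score to the severity buckets listed above."""
    if score >= 9.0:
        return "Critical"
    if score >= 7.0:
        return "High"
    if score >= 4.0:
        return "Medium"
    if score > 0.0:
        return "Low"
    return "Informational"
```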

4. AI-Powered CVE Lookup

Cohere Integration:

  • Automatic CVE analysis for discovered vulnerabilities
  • Natural language explanations of security issues
  • Remediation recommendations
  • Risk assessment and prioritization
  • Exploitability scoring

Usage:

  1. Complete a vulnerability scan
  2. Click on any vulnerability finding
  3. View AI-generated analysis
  4. Get remediation steps
  5. Export findings to PDF

5. Statistics Dashboard

Analytics Available:

Scan History:

  • Total scans performed
  • Success/failure rate
  • Average scan duration
  • Most scanned domains

Subdomain Trends:

  • Subdomain count over time
  • New vs. existing subdomains
  • Subdomain growth rate
  • Geographic distribution

Vulnerability Metrics:

  • Total vulnerabilities found
  • Severity distribution
  • Most common vulnerabilities
  • Remediation progress tracking

Service Analysis:

  • Open ports distribution
  • Running services detected
  • Technology stack analysis
  • Certificate expiration tracking

Access Dashboard:

http://localhost:3000/dashboard/statistics

6. PDF Report Generation

Generate Reports:

  1. Navigate to scan results
  2. Click "Download PDF Report"
  3. Report includes:
    • Executive Summary
    • Scan metadata (date, duration, target)
    • Subdomain inventory
    • Port and service details
    • Technology fingerprints
    • Vulnerability findings
    • CVE details with CVSS scores
    • Remediation recommendations
    • Appendices and references

Report Customization:

  • Include/exclude sections
  • Filter by severity
  • Add custom notes
  • Branding options

7. Real-Time Scan Monitoring

Live Progress Tracking:

  • WebSocket-based updates
  • Progress percentage
  • Current task being executed
  • Subdomains discovered count
  • Vulnerabilities found count
  • Estimated time remaining

Monitor Active Scans:

http://localhost:3000/dashboard/active-scans

8. Asset Management

Organize Your Targets:

  • Create asset groups
  • Tag domains by category
  • Set scan schedules
  • Configure scan preferences per asset
  • Track asset ownership

Asset Features:

  • Bulk import/export
  • Custom metadata
  • Scan history per asset
  • Notification preferences
  • Access control

9. Email Notifications

Configurable Alerts:

  • Scan completion
  • Critical vulnerabilities found
  • Scan failures
  • Certificate expiration warnings
  • Scheduled scan reminders

Configure Notifications:

Dashboard → Settings → Notifications

10. API Access

RESTful API Endpoints:

# Authentication
POST /api/auth/register
POST /api/auth/login
POST /api/auth/verify-otp

# Scanning
POST /api/scan/knockpy
POST /api/scan/subfinder
GET /api/scan/results/:scan_id

# Assets
GET /api/assets
POST /api/assets
PUT /api/assets/:id
DELETE /api/assets/:id

# Statistics
GET /api/stats/overview
GET /api/stats/trends
GET /api/stats/vulnerabilities

# Reports
GET /api/reports/:scan_id/pdf
POST /api/reports/generate

API Documentation:

http://localhost:5000/api/docs
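The endpoints above can be exercised with a thin client that replays the HTTP-only auth cookie between calls. A standard-library sketch (class, method names, and request payload keys are assumptions; only the paths come from the list above):

```python
import json
import urllib.request
from http.cookiejar import CookieJar

class ScannerClient:
    """Minimal API client; the session cookie set at login is sent automatically."""

    def __init__(self, base_url="http://localhost:5000"):
        self.base_url = base_url.rstrip("/")
        # CookieJar keeps the HTTP-only auth cookie between requests
        self.opener = urllib.request.build_opener(
            urllib.request.HTTPCookieProcessor(CookieJar()))

    def _request(self, method, path, payload=None):
        data = json.dumps(payload).encode() if payload is not None else None
        req = urllib.request.Request(
            self.base_url + path, data=data, method=method,
            headers={"Content-Type": "application/json"})
        with self.opener.open(req) as resp:
            return json.loads(resp.read())

    def login(self, email, password):
        return self._request("POST", "/api/auth/login",
                             {"email": email, "password": password})

    def start_subfinder_scan(self, domain):
        return self._request("POST", "/api/scan/subfinder", {"domain": domain})

    def scan_results(self, scan_id):
        return self._request("GET", f"/api/scan/results/{scan_id}")
```

In practice you would call login() once and reuse the same client instance so the backend's session cookie accompanies every later request.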

🤖 AI-Powered Recursive Recon

The AI-driven investigation loop leverages Cohere's LLM to make reconnaissance intelligent and adaptive.

How It Works

graph TB
    A[Initial Reconnaissance] --> B[Data Collection]
    B --> C[Cohere AI Analysis]
    C --> D[Generate Insights]
    D --> E[Suggest Next Steps]
    E --> F[Execute Safe Commands]
    F --> G[Collect Results]
    G --> C
    G --> H[Generate Report]

Features

  1. Intelligent CVE Lookup

    • Automatic vulnerability database queries
    • Natural language explanations
    • Severity assessment
    • Remediation recommendations
  2. Context-Aware Analysis

    • Analyzes technology stack
    • Identifies potential attack vectors
    • Suggests targeted scans
    • Prioritizes findings
  3. Adaptive Scanning

    • Learns from previous results
    • Adjusts scan intensity
    • Focuses on high-value targets
    • Reduces false positives
  4. Safety Controls

    • Rate limiting (10 requests/minute default)
    • Scope validation (wildcards supported)
    • Command whitelist
    • Human approval gates
    • Audit logging

Configuration

# Backend/scanner/config.py
AI_RECON_CONFIG = {
    "enabled": True,
    "provider": "cohere",
    "model": "command",
    "max_iterations": 5,
    "rate_limit": "10/minute",
    "scope": ["*.target.com"],
    "require_approval": False,
    "sandbox": True
}
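The scope-validation and command-whitelist controls can be sketched with fnmatch wildcards (all names here are illustrative, not the project's actual API):

```python
from fnmatch import fnmatch

# Illustrative whitelist; the real one would come from configuration
COMMAND_WHITELIST = {"nmap", "subfinder", "httpx", "dnsx"}

def in_scope(host: str, scope: list) -> bool:
    """True if the host matches any scope pattern (wildcards supported)."""
    return any(fnmatch(host, pattern) for pattern in scope)

def allow_command(command: list, host: str, scope: list) -> bool:
    """Gate an AI-suggested command on both the whitelist and the scope."""
    return bool(command) and command[0] in COMMAND_WHITELIST and in_scope(host, scope)
```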

Safety Recommendations

⚠️ CRITICAL: Always run AI-powered scanning with:

  • Network isolation (Docker containers)
  • Strict scope definitions
  • Rate limiting enabled
  • Command whitelisting
  • Comprehensive logging
  • Kill-switch mechanism
  • Human oversight for production

🐛 Troubleshooting

Recently Fixed Issues (January 2026) ✅

FIXED: Asset Management Black Screen

Problem: When saving assets at /assets, page showed black screen.

Root Cause:

  1. Incorrect toast notification syntax (toast({type, message}) instead of toast.success())
  2. Authentication not using cookies
  3. Missing null/undefined checks for legacy data

Solution Applied:

  • Fixed toast syntax to use toast.success() and toast.error()
  • Added credentials: 'include' to all fetch calls
  • Implemented data normalization: companyName || company_name || 'Unknown Company'
  • Added array safety checks: domains || [], ipAddresses || ip_addresses || []

Files Modified:

  • Frontend/src/components/asset/AssetForm.tsx (627 lines)

FIXED: "No authentication token found" despite being logged in

Problem: App was searching for tokens in localStorage that don't exist.

Root Cause: Backend uses HTTP-only cookies, but frontend was checking localStorage.

Solution Applied:

  • Removed all localStorage token logic
  • Added credentials: 'include' to all API calls
  • Browser now automatically handles cookie authentication

Benefits:

  • More secure (XSS-resistant)
  • No manual token management
  • Automatic cookie refresh

FIXED: Security Tools Not Found (nikto, testssl.sh, sqlmap, ffuf)

Problem: Multiple tools showing "Tool not found" or "Unknown tool" errors.

Root Cause: Tools not installed in Docker container.

Solution Applied:

Dockerfile.backend additions:

# Install from GitHub
RUN git clone --depth 1 https://github.com/sullo/nikto /opt/nikto && \
    ln -s /opt/nikto/program/nikto.pl /usr/local/bin/nikto

RUN git clone --depth 1 https://github.com/sqlmapproject/sqlmap.git /opt/sqlmap && \
    ln -s /opt/sqlmap/sqlmap.py /usr/local/bin/sqlmap

RUN git clone --depth 1 https://github.com/drwetter/testssl.sh.git /opt/testssl.sh && \
    ln -s /opt/testssl.sh/testssl.sh /usr/local/bin/testssl.sh && \
    ln -s /opt/testssl.sh/testssl.sh /usr/local/bin/testssl

# Install FFUF binary
RUN curl -L https://github.com/ffuf/ffuf/releases/download/v2.1.0/ffuf_2.1.0_linux_amd64.tar.gz | \
    tar -xzf - -C /usr/local/bin

New Files Created:

  • Backend/utils/automated_tools/sqlmap.py (69 lines)

Verification:

docker exec flask-backend nikto -Version
docker exec flask-backend sqlmap --version
docker exec flask-backend ffuf -V
docker exec flask-backend testssl.sh --version

FIXED: Nmap "[Errno 2] No such file or directory: 'wsl'"

Problem: Nmap trying to execute WSL commands inside Docker container.

Root Cause: Hardcoded command execution without environment detection.

Solution Applied:

Created environment detection in Backend/nmap.py:

def is_docker():
    """Detect if running inside a Docker container"""
    if os.path.exists('/.dockerenv'):
        return True
    try:
        with open('/proc/1/cgroup', 'r') as f:
            return 'docker' in f.read()
    except OSError:
        return False

# Conditional command building
if is_docker():
    command = ["nmap", "-Pn", target]  # Direct execution
elif os.name == 'nt':
    command = ["wsl", "-d", "Ubuntu-22.04", "nmap", "-Pn", target]  # Windows
else:
    command = ["nmap", "-Pn", target]  # Linux

Files Modified:

  • Backend/nmap.py
  • Backend/ffuf.py
  • Backend/utils/automated_tools/wpscan.py
  • Backend/utils/automated_tools/env_helper.py (NEW - 45 lines)

FIXED: Nmap Timeout After 60 Seconds

Problem: Nmap scans timing out for large networks.

Solution Applied:

  • Increased timeout from 60s to 300s (5 minutes)
  • Added debug logging for timeout tracking
# In Backend/nmap.py
result = subprocess.run(
    command, 
    capture_output=True, 
    text=True, 
    timeout=300  # Changed from 60
)

FIXED: Manual Container Restart Required After Code Changes

Problem: Had to run docker-compose restart after every code change.

Solution Applied:

Dockerfile.backend:

ENV FLASK_ENV=development
ENV FLASK_DEBUG=1

docker-compose.yml:

services:
  flask-backend:
    volumes:
      - ./Backend:/app  # Hot reload enabled

Backend/main.py:

app.run(host="0.0.0.0", port=5000, debug=True)

Result: Code changes now reload automatically without container restart.


Common Issues

Backend won't start

Error: MongoConnectionError: Cannot connect to MongoDB

Solution:

# Check MongoDB is running in Docker
docker-compose ps

# Check MongoDB logs
docker-compose logs mongo-db

# Restart services
docker-compose down
docker-compose up -d

Error: ImportError: No module named 'flask'

Solution:

# Rebuild Docker images
docker-compose down
docker-compose build --no-cache flask-backend
docker-compose up -d

Celery worker issues

Error: Celery worker not processing tasks

Solution:

# Check Redis is running
docker-compose exec flask-backend redis-cli ping
# Should return "PONG"

# Check Celery worker logs
docker-compose logs -f flask-backend

# Restart Celery (included in flask-backend container)
docker-compose restart flask-backend

Frontend issues

Error: npm ERR! Cannot find module

Solution:

cd Frontend
rm -rf node_modules package-lock.json
npm install
npm run dev

Error: CORS policy blocked

Solution: Verify backend CORS in Backend/main.py:

CORS(app, origins=["http://localhost:3000"], supports_credentials=True)

Scan failures

Error: FFUF results showing "N/A"

Potential Cause: Missing wordlist file.

Solution:

# Check if wordlist exists
docker exec flask-backend ls -la /app/tools/wordlists/

# Download common wordlist
docker exec flask-backend wget \
  https://raw.githubusercontent.com/danielmiessler/SecLists/master/Discovery/Web-Content/common.txt \
  -O /app/tools/wordlists/common.txt

# Or mount your wordlists
# In docker-compose.yml:
# volumes:
#   - ./Backend:/app
#   - ./wordlists:/app/tools/wordlists

Error: Tool not found for custom tools

Solution:

# Verify tool is installed
docker exec flask-backend which <tool-name>

# Check environment detection
docker exec flask-backend python3 -c "from utils.automated_tools.env_helper import is_docker; print(f'Docker: {is_docker()}')"

# Manually install missing tool
docker exec -it flask-backend bash
apt update && apt install <tool-name>

Email not sending

Error: SMTPAuthenticationError

Solution:

  • Verify Gmail App Password (not regular password)
  • Enable 2FA and generate App Password: https://myaccount.google.com/apppasswords
  • Check SMTP settings in Backend/.env:
    MAIL_USERNAME=your-email@gmail.com
    MAIL_PASSWORD=your-app-password
    
  • Verify firewall allows port 587

Debugging Tips

Enable Debug Logging:

Backend:

# Backend/main.py
app.config['DEBUG'] = True

Celery:

celery -A celery_worker.celery_app worker --loglevel=debug

Frontend:

npm run dev -- --debug

Check Logs:

# Docker logs
docker-compose logs -f flask-backend
docker-compose logs -f frontend

# System logs
tail -f Backend/logs/app.log
tail -f /var/log/mongodb/mongod.log

Database Inspection:

# Connect to MongoDB
mongosh
use subdomain_scanner
db.scans.find().pretty()
db.users.find().pretty()

🔒 Security Best Practices

1. Authorization & Legal Compliance

NEVER:

  • Scan targets without written permission
  • Test production systems without approval
  • Share vulnerability findings publicly before disclosure
  • Use the tool for illegal purposes

ALWAYS:

  • Obtain explicit authorization
  • Document permission (contracts, emails)
  • Follow responsible disclosure practices
  • Respect bug bounty program rules
  • Comply with local laws

2. API Key Security

# Rotate keys periodically
# Every 90 days recommended

# Use environment-specific keys
# prod.env, staging.env, dev.env

# Never commit secrets
git add .env  # ❌ NEVER!

# Use secret managers for production
# AWS Secrets Manager, HashiCorp Vault, etc.

3. Rate Limiting

# Implement aggressive rate limiting
RATE_LIMITS = {
    "subdomain_enum": "100/hour",
    "port_scan": "50/hour",
    "vuln_scan": "10/hour",
    "api_requests": "1000/day"
}
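A limit spec like "100/hour" can be enforced with a sliding-window counter. An illustrative sketch (names are ours, not the project's actual implementation):

```python
import time
from collections import defaultdict, deque

PERIODS = {"minute": 60, "hour": 3600, "day": 86400}

def parse_limit(spec: str):
    """'100/hour' -> (100, 3600)"""
    count, period = spec.split("/")
    return int(count), PERIODS[period]

class SlidingWindowLimiter:
    """Minimal sliding-window limiter for specs like the RATE_LIMITS above."""

    def __init__(self, limits: dict):
        self.limits = {name: parse_limit(spec) for name, spec in limits.items()}
        self.events = defaultdict(deque)

    def allow(self, action: str, now=None) -> bool:
        now = time.monotonic() if now is None else now
        max_count, window = self.limits[action]
        q = self.events[action]
        while q and now - q[0] >= window:   # drop events outside the window
            q.popleft()
        if len(q) >= max_count:
            return False
        q.append(now)
        return True
```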

4. Network Security

# Use VPN for scanning
# Prevents IP exposure

# Implement egress filtering
# Block outbound to unintended targets

# Use proxy chains
# Add anonymity layer

5. Data Protection

# Encrypt sensitive data at rest
from cryptography.fernet import Fernet

# Encrypt API keys in database
# Hash passwords (never store plaintext)
# Sanitize user inputs
# Implement access controls
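On the password-hashing point: the project stores bcrypt hashes; this standard-library sketch with PBKDF2 illustrates the same salted, one-way, constant-time-compare principle:

```python
import hashlib
import hmac
import os

ITERATIONS = 600_000  # PBKDF2-SHA256 work factor; tune for your hardware

def hash_password(password: str, salt: bytes = None):
    """Return (salt, digest); store both, never the plaintext."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, expected: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, expected)
```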

6. Audit Logging

# Log all critical operations
import logging

logging.info(f"User {user_id} started scan on {target}")
logging.warning(f"Critical vulnerability found: {vuln}")
logging.error(f"Scan failed: {error}")

7. Incident Response

Have a plan for:

  • Accidental unauthorized scans
  • API key compromise
  • Data breaches
  • Service abuse
  • Legal inquiries

8. Production Deployment

# Use HTTPS only
# Implement WAF
# Regular security updates
# Backup data regularly
# Monitor for anomalies
# Implement intrusion detection

🤝 Contributing

We welcome contributions from the community!

How to Contribute

  1. Fork the Repository

    # Click "Fork" on GitHub
    git clone https://github.com/YOUR_USERNAME/Vulnerability_Scanner.git
  2. Create Feature Branch

    git checkout -b feature/amazing-feature
  3. Make Changes

    • Write clean, documented code
    • Follow existing code style
    • Add tests for new features
    • Update documentation
  4. Test Thoroughly

    # Run tests
    python -m pytest
    npm test
  5. Commit Changes

    git add .
    git commit -m "feat: add amazing feature"
  6. Push to Fork

    git push origin feature/amazing-feature
  7. Create Pull Request

    • Go to original repository
    • Click "New Pull Request"
    • Describe your changes
    • Link related issues

Contribution Guidelines

Code Style:

  • Python: Follow PEP 8
  • TypeScript/React: Follow Airbnb style guide
  • Use meaningful variable names
  • Comment complex logic
  • Write self-documenting code

Commit Messages:

feat: add new feature
fix: fix bug
docs: update documentation
style: format code
refactor: refactor code
test: add tests
chore: update dependencies

Pull Request Template:

## Description
Brief description of changes

## Type of Change
- [ ] Bug fix
- [ ] New feature
- [ ] Breaking change
- [ ] Documentation update

## Testing
Describe testing performed

## Screenshots
If applicable, add screenshots

Areas for Contribution

  • 🐛 Bug fixes
  • ✨ New features
  • 📝 Documentation improvements
  • 🧪 Additional tests
  • 🎨 UI/UX enhancements
  • 🔧 Performance optimizations
  • 🌍 Internationalization
  • 🔌 New tool integrations

Code of Conduct

  • Be respectful and inclusive
  • Provide constructive feedback
  • Focus on the code, not the person
  • Help newcomers learn
  • Follow ethical security practices

📄 License

This project is licensed under the MIT License - see the LICENSE file for details.

MIT License

Copyright (c) 2026 Vulnerability Scanner Project

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

📞 Support & Contact

Getting Help

Community

  • ⭐ Star this repository
  • 🍴 Fork and contribute
  • 📢 Share with security community
  • 🐦 Follow for updates

🙏 Acknowledgments

This project integrates and builds upon amazing open-source tools:

  • OWASP ZAP - Web application security scanner
  • Nmap - Network exploration and security auditing
  • Subfinder - Subdomain discovery by ProjectDiscovery
  • HTTPx - HTTP toolkit by ProjectDiscovery
  • DNSx - DNS toolkit by ProjectDiscovery
  • Knockpy - DNS reconnaissance
  • WhatWeb - Web fingerprinting
  • Nikto - Web vulnerability scanner
  • FFuF - Web fuzzer
  • testssl.sh - TLS/SSL testing
  • WPScan - WordPress security scanner
  • Cohere AI - Language model for intelligent analysis
  • Flask - Python web framework
  • React - Frontend library
  • MongoDB - Database
  • Redis - Message broker
  • Celery - Distributed task queue

Special thanks to the entire security research community!


⚖️ Disclaimer

This tool is provided for educational and authorized security testing purposes only.

Important:

  • 🚫 Unauthorized scanning is illegal
  • ⚠️ Always obtain written permission
  • 📋 Follow responsible disclosure
  • 🔒 Respect privacy and data protection laws
  • ⚖️ Comply with local regulations
  • 🎯 Use ethically and responsibly

The authors and contributors are not responsible for misuse of this tool. Users are solely responsible for ensuring legal compliance.




Made with ❤️ for the security community

Star this repository if you find it helpful!

Last Updated: January 12, 2026


⬆ Back to Top
