
Subdomain Crawler

This program helps you collect the subdomains of a list of given second-level domains (SLDs).
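To illustrate the general idea (this is a minimal sketch, not the tool's actual implementation), a crawler of this kind fetches a page for each listed domain and scans the response for hostnames that end in that domain. The SLD below is hard-coded as an example:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"regexp"
)

func main() {
	// Example SLD, as it might appear in input.txt.
	sld := "tsinghua.edu.cn"

	resp, err := http.Get("https://www." + sld)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}

	// Match hostnames like xxx.tsinghua.edu.cn in the HTML,
	// de-duplicating as we go.
	re := regexp.MustCompile(`[a-zA-Z0-9-]+(?:\.[a-zA-Z0-9-]+)*\.` + regexp.QuoteMeta(sld))
	seen := map[string]bool{}
	for _, m := range re.FindAllString(string(body), -1) {
		if !seen[m] {
			seen[m] = true
			fmt.Println(m)
		}
	}
}
```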

Installation

  • Option 1: Download a prebuilt binary from GitHub Releases (recommended)

  • Option 2: Go Install

    $ go install github.com/WangYihang/Subdomain-Crawler/cmd/subdomain-crawler@latest

Usage

  1. Edit the input file input.txt, one second-level domain per line:
$ head input.txt
tsinghua.edu.cn
pku.edu.cn
fudan.edu.cn
sjtu.edu.cn
zju.edu.cn
  2. Run the program. The -n and -t options set the number of concurrent workers and the per-request timeout; a sketch of that pattern appears after this list.
$ subdomain-crawler --help
Usage:
  subdomain-crawler [OPTIONS]

Application Options:
  -i, --input-file=    The input file (default: input.txt)
  -o, --output-folder= The output folder (default: output)
  -t, --timeout=       Timeout of each HTTP request (in seconds) (default: 4)
  -n, --num-workers=   Number of workers (default: 32)
  -d, --debug          Enable debug mode
  -v, --version        Version

Help Options:
  -h, --help           Show this help message

$ subdomain-crawler
  3. Check out the results in the output/ folder:
$ head output/*
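The -n/--num-workers and -t/--timeout options suggest the common Go pattern of a fixed worker pool sharing an HTTP client with a per-request timeout. Below is a generic sketch of that pattern, not the project's actual code; it reads domains from input.txt as in step 1:

```go
package main

import (
	"bufio"
	"fmt"
	"net/http"
	"os"
	"sync"
	"time"
)

func main() {
	numWorkers := 32                                 // cf. -n, --num-workers
	client := &http.Client{Timeout: 4 * time.Second} // cf. -t, --timeout

	jobs := make(chan string)
	var wg sync.WaitGroup
	for i := 0; i < numWorkers; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for domain := range jobs {
				resp, err := client.Get("https://www." + domain)
				if err != nil {
					continue // timed out or unreachable; skip it
				}
				resp.Body.Close()
				fmt.Println(domain, resp.Status)
			}
		}()
	}

	// Feed domains from input.txt, one per line.
	f, err := os.Open("input.txt")
	if err != nil {
		panic(err)
	}
	defer f.Close()
	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		jobs <- scanner.Text()
	}
	close(jobs)
	wg.Wait()
}
```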
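If you want a single de-duplicated list, you can merge the per-domain result files. The sketch below assumes each file under output/ holds one discovered subdomain per line; that is an assumption, so check your actual output format first:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Assumption: every file under output/ contains one hostname per line.
	files, err := filepath.Glob("output/*")
	if err != nil {
		panic(err)
	}

	seen := map[string]bool{}
	for _, path := range files {
		f, err := os.Open(path)
		if err != nil {
			continue
		}
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			line := sc.Text()
			if line != "" && !seen[line] {
				seen[line] = true
				fmt.Println(line)
			}
		}
		f.Close()
	}
}
```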