

Einar Otto Stangvik


This is a rather naïve link scraper-driven web vulnerability scanner. Use it responsibly.


Usage: vulnscrape.rb [options]
    -u, --url URL                    The url to scan.
    -m, --max count                  Max urls to scrape for.
    -i, --skip count                 Number of scraped urls to skip.
    -c, --scraper REGEX              Scraper restriction.
                                     Only scrape URLs matching REGEX.
    -r, --restriction REGEX          URL restriction.
                                     Only collect URLs matching REGEX.
                                     Typically more restrictive than the scraper restriction.
    -k, --[no-]keep                  Keep duplicate urls.
                                     Enabling this will make the link collector keep urls with the same host and path.
                                     Default: false
    -h, --[no-]header                Include header heuristics. Default: false
    -p, --[no-]split                 Include response splitting heuristics. Default: false
    -n, --[no-]mhtml                 Include MHTML heuristics. Default: false
    -x, --[no-]hash                  Include hash heuristics. Default: false
    -q, --[no-]query                 Include query heuristics. Default: true
    -f, --[no-]fof                   Include 404 page. Default: true
    -s, --[no-]single                Single run. Default: false
        --user USERNAME              Basic auth username
        --pass PASSWORD              Basic auth password
        --cookie COOKIE              Cookie string
        --load FILENAME              Load urls from FILENAME
                                     The scraper can save urls using --save.
        --save FILENAME              Save urls to FILENAME
                                     Saved urls can be reloaded later with --load


A straightforward scan:

./vulnscrape.rb -u -m 50

Scrapes up to 50 URLs, then runs the enabled heuristics on them.
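The capped link collection that -m controls can be pictured with a short sketch. This is illustrative only, not vulnscrape.rb's actual code; the regex-based href extraction and the method name collect_links are assumptions:

```ruby
# Hypothetical sketch of capped link collection (the -m behaviour):
# pull href values out of a page body, drop duplicates, and stop
# once the cap is reached.
def collect_links(html, max)
  links = html.scan(/href=["']([^"']+)["']/i).flatten
  links.uniq.first(max)
end

html = '<a href="/a">a</a> <a href="/b">b</a> <a href="/a">dup</a>'
p collect_links(html, 50)  # => ["/a", "/b"]
```

With -k / --keep enabled, the deduplication step (here a simple uniq) would be skipped, so repeated hosts and paths survive into the scan queue.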

./vulnscrape.rb -u -m 50 -c "https?://services\.mydomain\.com" -r "https?://([^.]*?\.)*?"

Starts scraping at the given URL and only follows (continues scraping) URLs on the services.mydomain.com subdomain. Links from all matching subdomains are still collected and eventually run through the heuristics scanner.
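The interplay between the -c (scraper) and -r (collection) filters can be sketched as two regex passes over the discovered links. The regexes and variable names below are illustrative assumptions, not taken from vulnscrape.rb:

```ruby
# Hypothetical sketch of the two restriction filters:
# -c decides which URLs the crawler keeps following,
# -r decides which URLs are handed to the heuristics.
scraper_restriction = %r{https?://services\.mydomain\.com}
collect_restriction = %r{https?://([^.]*?\.)*?mydomain\.com}

urls = [
  "http://services.mydomain.com/login",
  "http://blog.mydomain.com/post/1",
  "http://other.example.com/"
]

to_scrape  = urls.select { |u| u =~ scraper_restriction }
to_collect = urls.select { |u| u =~ collect_restriction }

p to_scrape   # crawl frontier: only the services subdomain
p to_collect  # heuristics input: every mydomain.com subdomain
```

This mirrors the example above: crawling stays on one subdomain, while any mydomain.com link that turns up along the way is still scanned.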

./vulnscrape.rb -u -h -p -m -x

Includes query string, header, response splitting and hash heuristics, as demonstrated by a few of the XSS vectors on the progphp test site.
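To give a feel for what a heuristic like -p checks, here is a hedged sketch of a response-splitting probe: inject an encoded CRLF plus a marker header into each query parameter, then flag any response whose raw header block echoes the marker. The method names, the marker string, and the probe format are all assumptions for illustration, not vulnscrape.rb's implementation:

```ruby
require "uri"

MARKER = "x-vulnscrape-probe"

# Build one probe URL per query parameter by appending an encoded
# CRLF sequence plus a marker header to that parameter's value.
def split_probes(url)
  uri = URI(url)
  return [] unless uri.query
  params = URI.decode_www_form(uri.query)
  params.each_index.map do |i|
    mutated = params.map.with_index do |(k, v), j|
      j == i ? "#{k}=#{v}%0d%0a#{MARKER}:1" : "#{k}=#{v}"
    end
    uri.dup.tap { |u| u.query = mutated.join("&") }.to_s
  end
end

# A response looks vulnerable if the injected header name shows up
# in the raw header block, i.e. the CRLF was interpreted literally.
def split_vulnerable?(raw_headers)
  raw_headers.downcase.include?("#{MARKER}:")
end

p split_probes("http://example.com/page?id=1&name=a")
```

A real scanner would of course fetch each probe URL and inspect the live response headers; the sketch only shows the probe construction and the detection predicate.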