
Bug Bounty Methodology

0xhelloworld edited this page Apr 12, 2019 · 1 revision

Take the road less travelled: find programs that are not on HackerOne or Bugcrowd. Google: "Responsible Disclosure", "Vulnerability Disclosure", or "responsible disclosure website list"

Hackerone In Scope Domains:

Google Dork:

responsible disclosure "bounty"

"Responsible Disclosure" seems to give the best results.

intext:"Responsible Disclosure Policy"

"responsible disclosure" "private program" 

"responsible disclosure" "private" "program"

Google Dork:

vulnerability disclosure program "bounty" -bugcrowd -hackerone

responsible disclosure "private program"                                         <--- find private hackerone/bugcrowd programs

Google Dorker:

Searching through source code:

If you find a vulnerable JavaScript script and want to find other websites vulnerable through the same script, you can use a source-code search engine to find other websites running that script.

Subdomain Enumeration:

Basic Subdomain Scraping


./amass -active -d <target.com> -o /opt/output/amass.txt


./subfinder -d <target.com> -v -o /opt/output/subfinder.txt        <--- simple scan 
./subfinder -b -w /opt/wordlists/all.txt -d <target.com> -v -o /opt/output/subfinder.txt       <--- in-depth scan (bruteforce)


python sublist3r.py -b -d <target.com> -v -t 40 -o /opt/output/sublist3r.txt
python sublist3r.py -d <target.com> -p 21,22,3389,8080,8181,8000,9443,8443,6900,9200,81


aquatone-discover -d <target.com>

curl -s 'https://crt.sh/?q=%25.<target.com>&output=json' | jq '.[].name_value' | sed 's/\"//g' | sed 's/\*\.//g' | sort -u 

Enumeration with aquatone:

Subdomain bruteforcing:


cd /opt/massdns
python ./scripts/subbrute.py /opt/wordlists/all.txt <target.com> | ./bin/massdns -r lists/resolvers.txt -t A -o S -w canvaMassdns.txt


gobuster -m dns -u <target.com> -t 100 -w /opt/wordlists/all.txt -o /opt/output/gobuster.txt -q 

Don't forget to permutation scan (e.g. dev-api.<target.com>, api-dev.<target.com>), then bruteforce!
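A permutation list can be generated with a few lines of shell before feeding it back into massdns or gobuster. This is a minimal sketch; the seed words, subdomains, and file names are illustrative, not from the wiki:

```shell
# Seed words and known subdomains (illustrative values)
printf '%s\n' admin dev staging > words.txt
printf '%s\n' app.target.com api.target.com > subs.txt

# Combine each word with each known subdomain in two common patterns
while read -r sub; do
  while read -r w; do
    echo "${w}-${sub}"   # e.g. dev-app.target.com
    echo "${w}.${sub}"   # e.g. dev.app.target.com
  done < words.txt
done < subs.txt | sort -u > permutations.txt
```

Tools like altdns automate this with much larger pattern sets.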

Utilize Jason Haddix's all.txt wordlist for subdomain bruteforcing.

Subdomain Analysis:

Subdomain Screenshotting:


cat aquatoneSublist.txt | httprobe -c 50 > livehosts.txt


./EyeWitness.py --prepend-https -f /root/vanillasublister.txt --web --user-agent "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/66.0.3359.181 Safari/537.36" -d targetvanilla

Multithreaded Eyewitness:

./EyeWitness.py --prepend-https -f /root/vanillasublister.txt --web --user-agent "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/66.0.3359.181 Safari/537.36" --threads 35 -d targetvanilla


webscreenshot -i /tmp/adobeurls.txt -o /targets/ -v -w 10

webscreenshot -i /tmp/adobeurls.txt -o /targets/ -v -m -w 10        <--- -m scans both HTTP & HTTPS

epg-prep /root/

node yourname.js


Port Scanning:

Ports to Find: 21,22,3389,8080,8181,8000,9443,8443,6900,9200,81


masscan -p0-65535 --banners -iL targets.txt


nmap -p 21,22,3389,8080,8181,8000,9443,8443,6900,9200,81 -iL targets.txt 


aquatone-scan -d <target.com> -t 30 -p medium  

aquatone-scan -d <target.com> -t 30 -p small        <--- small is ports 80 and 443

Shodan Queries

port:80,443,2376,8000,8080,8443,9443 http.title:Company

Subdomain Misconfigurations

Subdomain Takeover


aquatone-takeover -d <target.com>
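Beyond the tools above, dangling CNAMEs can be triaged by hand: resolve each subdomain's CNAME and flag the ones pointing at services known to allow takeovers. A hedged sketch; the service pattern list is a small illustrative sample, not exhaustive:

```shell
# Flag CNAME targets on commonly takeover-prone services
# (pattern list is a small illustrative sample)
flag_takeover() {
  case "$1" in
    *.s3.amazonaws.com|*.github.io|*.herokudns.com|*.azurewebsites.net)
      echo "candidate: $1" ;;
  esac
}

# Live usage (requires network access):
# while read -r sub; do
#   cname=$(dig +short CNAME "$sub")
#   [ -n "$cname" ] && flag_takeover "${cname%.}"
# done < livehosts.txt
```

A flagged candidate still needs manual confirmation (e.g. an unclaimed-bucket or "no such app" error page) before reporting.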

CyberInt Takeover

CORS Testing:
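A first-pass CORS check is to send an arbitrary Origin header and see whether the server reflects it back in Access-Control-Allow-Origin. A minimal sketch; the target URL and attacker origin are placeholders:

```shell
# Return success if the response headers reflect the given origin
check_cors() {
  printf '%s\n' "$1" | grep -qiF "access-control-allow-origin: $2"
}

# Live usage (requires network access):
# headers=$(curl -s -D - -o /dev/null -H "Origin: https://evil.example" 'https://target.com/api')
# check_cors "$headers" "https://evil.example" && echo "possible CORS misconfig"
```

A reflected origin is most interesting when Access-Control-Allow-Credentials: true is also present.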

Querying Wayback Machine: if a subdomain's name indicates critical data, try looking at it in the Wayback Machine; it may reveal critical data (API keys, usernames/passwords). If the website returns 403, try Google dorking it to see if there are any endpoints you can access.

Endpoint Discovery

Directory Bruteforcing:

Burp Discover Content Engagement Tool
./dirsearch.py -u <https://target.com> -e '*' -r
gobuster -w /opt/wordlists/content_discovery_all.txt -a "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/66.0.3359.181 Safari/537.36" -t 50 -u <https://target.com> -o canvagobuster.txt -fw -r <---- Jason Haddix directory bruteforce list

File extensions via Google: combine site:<target.com> with ext:php, ext:asp, ext:aspx, ext:jsp, ext:jspa, ext:txt, ext:swf

Querying Wayback Machine


curl 'http://web.archive.org/cdx/search/cdx?url=<target.com>*&output=text&fl=original&collapse=urlkey'


Query with

You can query GitHub to discover endpoints as well:

python3 -y 18 -o github_2018.txt

Third party misconfigurations

Github Recon:

inurl:looker "api" "key"
inurl:looker "password"

AWS buckets:
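Candidate bucket names can be guessed from the company name plus common suffixes, then probed anonymously. A sketch; the company name and suffix list are illustrative:

```shell
# Generate common bucket-name guesses (illustrative patterns)
company=acme
for suffix in "" -assets -backup -dev -prod; do
  echo "${company}${suffix}"
done > buckets.txt

# Live check (requires awscli and network access):
# while read -r b; do
#   aws s3 ls "s3://$b" --no-sign-request >/dev/null 2>&1 && echo "listable: $b"
# done < buckets.txt
```

An anonymously listable bucket is only a finding if you can tie it to the target and it exposes non-public data.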

Trello: look for invite links (Slack, Discord, etc.)

Javascript Files:

Link Finder & JSParser

Target Tab > Right Click > Engagement Tools > Find Scripts
Ctrl+A > Copy Selected URLs (paste into linkfinder.txt)
cat linkfinder.txt | grep .js > linkfinder2.txt
      python linkfinder.py -o cli -i <script.js>
      OR copy and paste into JSParser:
         python handler.py (visit localhost:8008)
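For a quick look before running LinkFinder, a rough grep can pull path-like strings out of a downloaded JS file. The regex here is my own heuristic sketch, not LinkFinder's parser, and the sample file is made up:

```shell
# Sample JS file standing in for a downloaded script (illustrative)
cat > sample.js <<'EOF'
fetch("/api/v1/users");
var u = "https://cdn.example.com/static/app.js";
EOF

# Extract quoted absolute paths and full URLs (rough heuristic)
grep -oE '"(/[A-Za-z0-9_./-]+|https?://[^"]+)"' sample.js \
  | tr -d '"' | sort -u > endpoints.txt
```

Each extracted endpoint then becomes a candidate for directory bruteforcing or direct testing.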


Identifying CMS: discover the type of CMS running on the website (e.g. with Wappalyzer or BuiltWith).