Take the road less travelled: find programs that are not on HackerOne or Bugcrowd:
google: "Responsible Disclosure" or "Vulnerability Disclosure" or "responsible disclosure website list"
google: responsible disclosure "bounty"
"Responsible Disclosure" seems to give the best results.
intext:"Responsible Disclosure Policy"
"responsible disclosure" "private program"
"responsible disclosure" "private" "program"
Google Dork:
vulnerability disclosure program "bounty" -bugcrowd -hackerone
responsible disclosure "private program" <--- finds private HackerOne/Bugcrowd programs
Google Dorker:
Subdomain Enumeration:
./amass -active -v -d <domain> OR /root/go/bin/amass -active -v -d <domain>
./subfinder -d <domain> OR /root/go/bin/subfinder -d <domain>
./subfinder -b -w /root/Desktop/jhaddixALL/subdomainsALL.txt -d <domain> -v
python -b -d <domain> -v -t 40 -o example.txt
python -p 21,22,3389,8080,8181,8000,9443,8443,6900
Enumeration with aquatone:
aquatone-discover -d <domain>
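The tools above each emit their own subdomain list; a small sketch (function and variable names are mine, not from these notes) that merges and deduplicates those lists into one target file:

```python
# Hypothetical helper: merge the output of several enumeration tools
# (amass, subfinder, etc.) into one deduplicated, normalized target list.

def merge_subdomains(*result_lists):
    """Lower-case, strip whitespace and trailing dots, and deduplicate
    subdomains collected from multiple tools, preserving first-seen order."""
    seen = set()
    merged = []
    for results in result_lists:
        for line in results:
            sub = line.strip().lower().rstrip(".")
            if sub and sub not in seen:
                seen.add(sub)
                merged.append(sub)
    return merged

# Example usage (illustrative data):
amass_out = ["www.example.com", "API.example.com."]
subfinder_out = ["api.example.com", "dev.example.com"]
targets = merge_subdomains(amass_out, subfinder_out)
```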
Subdomain Analysis:
Subdomain bruteforcing:
./subfinder -d <domain> -b -dL jasonhaddixall.txt OR /root/go/bin/subfinder -d <domain> -b -dL jasonhaddixall.txt
Subdomain Analysis:
./ --prepend-https -f /root/vanillasublister.txt --web --user-agent "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/66.0.3359.181 Safari/537.36" -d targetvanilla
Port Scanning:
nmap -p 21,22,3389,8080,8181,8000,9443,8443,6900 -iL targets.txt
aquatone-scan -d <domain> -t 30 -p medium
aquatone-scan -d <domain> -t 30 -p small (small is ports 80 and 443)
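Adding -oG to the nmap line above gives grepable output, which feeds neatly into the screenshot tools; a sketch (the -oG "Host: ... Ports: ..." format is standard, the helper name is mine) that pulls out the open host:port pairs:

```python
import re

# Parse one line of nmap grepable (-oG) output and return (host, port)
# tuples for ports reported as open.

def open_ports(grepable_line):
    m = re.search(r"Host:\s+(\S+).*Ports:\s+(.*)", grepable_line)
    if not m:
        return []
    host, ports = m.group(1), m.group(2)
    out = []
    for entry in ports.split(","):
        # Each entry looks like: 22/open/tcp//ssh///
        fields = entry.strip().split("/")
        if len(fields) >= 2 and fields[1] == "open":
            out.append((host, int(fields[0])))
    return out
```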
webscreenshot -i /tmp/adobeurls.txt -o /targets/ -v
webscreenshot -i /tmp/adobeurls.txt -o /targets/ -v -m (HTTP & HTTPS)
Serve the screenshots as a web gallery:
epg-prep /root/
node yourname.js
Visit http://yourserverip:3000/photos
Google dork for interesting file types: ext:php,asp,aspx,jsp,jspa,txt,swf
- If a subdomain name indicates critical data, try looking at it in the Wayback Machine; it may show critical data (API keys, user/pass).
- If the website returns 403, try Google dorking the site to see if there are any endpoints you can access.
- Can also try searching the Wayback Machine for endpoints via curl:
curl 'http://web.archive.org/cdx/search/cdx?url=<domain>*&output=text&fl=original&collapse=urlkey'
You can also query GitHub to discover endpoints:
python3 -y 18 -o github_2018.txt
Subdomain Takeover:
aquatone-takeover -d <domain>
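aquatone-takeover works by matching dangling-CNAME responses against known "unclaimed service" fingerprints; a miniature sketch of that idea (the fingerprint strings below are a few well-known examples, not aquatone's actual list):

```python
# Map service name -> string that appears in the response body when the
# CNAME target exists but the service resource is unclaimed.
FINGERPRINTS = {
    "GitHub Pages": "There isn't a GitHub Pages site here",
    "Heroku": "No such app",
    "Amazon S3": "NoSuchBucket",
}

def takeover_candidates(body):
    """Return the services whose takeover fingerprint appears in body."""
    return [svc for svc, sig in FINGERPRINTS.items() if sig in body]
```

Feed it the HTTP response body of each subdomain whose CNAME points at a third-party service.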
CORS Testing:
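No commands are attached to the CORS heading; the basic test is to send a request with an attacker-controlled Origin header and inspect the response. A sketch that evaluates captured response headers (the header names are standard CORS, the function and origin are mine):

```python
# Check a dict of response headers for the two classic CORS misconfigs:
# reflecting an arbitrary Origin, or wildcarding while allowing credentials.

def cors_misconfigured(response_headers, evil_origin="https://evil.example"):
    acao = response_headers.get("Access-Control-Allow-Origin", "")
    creds = response_headers.get("Access-Control-Allow-Credentials", "").lower() == "true"
    if acao == evil_origin:
        return True   # arbitrary origin reflected back (worst with creds)
    if acao == "*" and creds:
        return True   # wildcard + credentials signals a misconfiguration
    return False
```

Send the probe with any HTTP client (e.g. curl -H "Origin: https://evil.example" -v) and pass the response headers in.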
Directory Bruteforcing:
./ -u <url> -e * -r
./ -u <url> -e * -r -w /root/Desktop/jhaddixALL/directoriesjhaddix.txt --plain-text-report=/root/Desktop/report
Finding directories:
./ -u <url> -L /root/jhaddix/jhaddixdirectories.txt <---- Jason Haddix directory bruteforce list
./ -u <url> -e * -r -w /opt/tools/directorywordlists/raft-medium-directories.txt --plain-text-report=/root/Desktop/report <---- bruteforce directories
dirb with 10 threads and the Jason Haddix wordlist
Once new directories are found, find files in those directories:
./ -u <url> -r -w /opt/tools/directorywordlists/raft-medium-files.txt --plain-text-report=/root/Desktop/report <---- bruteforce files
./ -u <url> -e * -r
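The bruteforce commands above boil down to: try each wordlist entry against the base URL and keep the interesting status codes. A minimal sketch (the fetch callback is injected so it stays testable; in practice it would be a requests.get(url).status_code call):

```python
# Toy directory/file bruteforcer: appends each wordlist entry to the base
# URL and records hits whose status code looks interesting.

def brute_dirs(base_url, wordlist, fetch, interesting=(200, 301, 302, 403)):
    hits = []
    for word in wordlist:
        url = base_url.rstrip("/") + "/" + word
        if fetch(url) in interesting:
            hits.append(url)
    return hits
```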
Find scripts: ext:php ext:asp
github recon: inurl:looker "api" "key" inurl:looker "password"
Endpoint Discovery:
Target Tab > Right Click > Save Selected Items
python linkfinder.py -o cli -i burpfile
Link Finder
Target Tab > Right Click > Engagement Tools > Find Scripts
Ctrl A > Copy Selected URLs (Paste to textfile linkfinder.txt)
cat linkfinder.txt | grep "\.js" > linkfinder2.txt
python linkfinder.py -o cli -i linkfinder2.txt
OR copy and paste into JSParser:
python handler.py (then visit localhost:8008)
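LinkFinder and JSParser both work by regexing endpoint-looking strings out of JavaScript; a much-simplified version of that idea (this pattern is a crude stand-in for their real, far more thorough ones):

```python
import re

# Extract quoted, path-looking strings ("/api/...", "/img/...") from
# JavaScript source, deduplicated and sorted.
ENDPOINT_RE = re.compile(r"""["'](/[A-Za-z0-9_./-]+)["']""")

def extract_endpoints(js_source):
    return sorted(set(ENDPOINT_RE.findall(js_source)))
```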
===============================================
Discover the type of CMS running on the website:
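One quick way to fingerprint a CMS (dedicated tools like WhatWeb or Wappalyzer go much further) is the generator meta tag plus a path heuristic; the signatures below are illustrative, not exhaustive:

```python
import re

# Guess the CMS from page HTML: prefer the <meta name="generator"> tag,
# fall back to a telltale WordPress path.

def detect_cms(html):
    m = re.search(r'<meta[^>]+name=["\']generator["\'][^>]+content=["\']([^"\']+)',
                  html, re.I)
    if m:
        return m.group(1)
    if "/wp-content/" in html:
        return "WordPress (path heuristic)"
    return None
```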