regexHunter - Fast website endpoint sensitive data scraper
This tool is written in JavaScript (Node.js), Python, and shell script. Its main objective is to search for sensitive data and API keys in JavaScript files and HTML pages.
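The core idea is regex-based secret scanning: download a page or script, run a set of named patterns over it, and record every hit. A minimal sketch of that idea is below; the pattern names and expressions here are hypothetical illustrations, not the exact regexes that tools.js ships with, and the `src`/`match`/`key` fields mirror the fields the jq one-liner later in this README expects.

```javascript
// Hypothetical example patterns -- tools.js ships its own, larger set.
const PATTERNS = {
  aws_access_key: /AKIA[0-9A-Z]{16}/g,
  google_api_key: /AIza[0-9A-Za-z\-_]{35}/g,
  bearer_token: /[Bb]earer\s+[A-Za-z0-9\-._~+/]+=*/g,
};

// Scan one file's text and return each match tagged with the source
// file and the name of the pattern that found it.
function scanForSecrets(src, text) {
  const findings = [];
  for (const [key, re] of Object.entries(PATTERNS)) {
    for (const m of text.matchAll(re)) {
      findings.push({ src, key, match: m[0] });
    }
  }
  return findings;
}
```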
After cloning, the project directory looks like this (a URL list file is not required at this stage):
├── assets
│ ├── LICENSE.txt
├── package.json
├── package-lock.json
├── README.md
├── run.sh
└── tools.js
- Very fast
- Node.js
- Python
- GoSpider or Katana (either crawler works for collecting URLs)
( )
)\ ) ( /( )
(()/(( ( ( ) )\()) ( ( /( ( (
/(_))\))( ))\( /( ((_)\ ))\ ( )\())))\ )(
(_))((_))\ /((_)\()) _((_)/((_) )\ )(_))//((_|()\
| _ \(()(_|_))((_)\ | || (_))( _(_/(| |_(_)) ((_)
| / _` |/ -_) \ / | __ | || | ' \)) _/ -_)| '_|
|_|_\__, |\___/_\_\ |_||_|\_,_|_||_| \__\___||_|
|___/
Author: @SecurityTalent
join_us: https://t.me/Securi3yTalent
Usage: ./run.sh <inputFilePath> <input_file1> [<input_file2> ...] ** First, the URLs need to be sorted
Usage: ./run.sh [-h] [-t] [-s] [-i] Examples: ./run.sh -t, ./run.sh -s, ./run.sh -i
Options:
-h, --help Display this help message
-t Identify the technology used by websites
-s Find the endpoint sensitive data
-i Convert domains to IP addresses and save to file
First, clone the repo:
git clone https://github.com/securi3ytalent/regexHunter.git
cd regexHunter
Install the dependencies:
npm i
pip install builtwith
apt install jq
Sort the URLs:
./run.sh <inputFilePath/gospiderOutputfile.txt>
example:
./run.sh /root/Desktop/regexHunter/katanaOrGoSpiderList.txt
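The sorting step takes raw GoSpider or Katana output and turns it into a clean URL list. A minimal sketch of that kind of normalization (assumed behavior, not run.sh's exact logic): extract every http(s) URL from the crawler output, deduplicate, and sort.

```javascript
// Extract http(s) URLs from raw crawler output, deduplicate, and sort.
// GoSpider lines carry extra text (e.g. "[href] - https://...");
// Katana usually emits bare URLs -- the regex handles both.
function sortUrls(rawText) {
  const urlRe = /https?:\/\/[^\s"'<>\]]+/g;
  const found = rawText.match(urlRe) || [];
  return [...new Set(found)].sort();
}
```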
Find endpoint sensitive data and API keys:
./run.sh -s
Identify the technology
./run.sh -t
Convert domains to IP addresses
./run.sh -i
One-liner beautifier command:
cat result.txt | jq -r 'select(.src != "" and .match != "" and .key != "") | "Source: \(.src)\nMatch: \(.match)\nKey: \(.key)\n"'
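If jq is not available, the same filtering can be done in Node.js. This assumes result.txt is JSON Lines with `src`, `match`, and `key` fields, as the jq filter above implies; the function name `beautify` is illustrative.

```javascript
// Pretty-print findings without jq: keep only records where src, match,
// and key are all non-empty, and format them as labeled blocks.
function beautify(jsonLines) {
  return jsonLines
    .split('\n')
    .filter(Boolean)
    .map((line) => JSON.parse(line))
    .filter((o) => o.src && o.match && o.key)
    .map((o) => `Source: ${o.src}\nMatch: ${o.match}\nKey: ${o.key}\n`)
    .join('\n');
}
```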
regexHunter is released under the MIT license. See LICENSE.