Recursive Web Crawler is a Python tool that crawls websites recursively and extracts subdomains, links, and JavaScript files. It is intended for web security professionals and developers who want to examine a site's structure and dependencies.
Demo video: Recursive.Web.Crawler.mp4
- Recursively crawl websites to a specified depth.
- Extract subdomains, links, and JavaScript files.
- Configurable depth parameter to control how far the recursion goes (see the sketch after this list).
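
The crawl described above can be pictured as follows. This is a minimal sketch using `requests` and `BeautifulSoup`, with illustrative function names, output format, and subdomain heuristic; it is not the actual `main.py` internals:

```python
from urllib.parse import urljoin, urlparse

import requests
from bs4 import BeautifulSoup


def crawl(url, depth, seen=None):
    """Fetch `url`, report links/JS files/subdomains, recurse until depth runs out."""
    if seen is None:
        seen = set()
    if depth < 0 or url in seen:
        return
    seen.add(url)

    try:
        resp = requests.get(url, timeout=10)
    except requests.RequestException:
        return  # skip unreachable pages

    soup = BeautifulSoup(resp.text, "html.parser")
    base_host = urlparse(url).netloc

    # JavaScript files: <script src=...> tags, resolved to absolute URLs.
    for script in soup.find_all("script", src=True):
        print("[js]  ", urljoin(url, script["src"]))

    # Links: <a href=...> tags. A host ending in ".<base host>" is
    # reported as a subdomain (a simple heuristic for this sketch).
    for anchor in soup.find_all("a", href=True):
        link = urljoin(url, anchor["href"])
        host = urlparse(link).netloc
        print("[link]", link)
        if host and host.endswith("." + base_host):
            print("[sub] ", host)
        # Recurse only into pages on the same host, one level deeper.
        if host == base_host:
            crawl(link, depth - 1, seen)


if __name__ == "__main__":
    crawl("https://tryhackme.com", 2)
```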
Run the Recursive Web Crawler with the following command:
python main.py -u <URL> -d <DEPTH>
Example
python main.py -u "'https://tryhackme.com -d 2
If you'd like to contribute to this project, please open an issue or create a pull request.