Recursive Web Crawler: A Python tool for deep website exploration, finding subdomains, links, and JavaScript files. Ideal for security and web development.

Recursive Web Crawler

Recursive Web Crawler is a Python-based tool for exploring websites recursively and extracting useful information such as subdomains, links, and JavaScript files. This tool is intended for web security professionals and web developers who want to examine the structure and dependencies of websites.

Demo video: Recursive.Web.Crawler.mp4

Features

  • Recursively crawl websites to a specified depth.
  • Extract subdomains, links, and JavaScript files.
  • Flexible depth parameter for customizing the level of recursion.
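The extraction step behind these features can be sketched with the standard library alone; the real tool's internals and dependencies may differ, and all names below are illustrative:

```python
# Minimal sketch of link / subdomain / JavaScript-file extraction,
# using only the standard library. Not the tool's actual implementation.
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse


class LinkExtractor(HTMLParser):
    """Collect href/src URLs from a page and classify them."""

    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.links = set()
        self.js_files = set()
        self.subdomains = set()

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        url = attrs.get("href") or attrs.get("src")
        if not url:
            return
        absolute = urljoin(self.base_url, url)
        if absolute.endswith(".js"):
            self.js_files.add(absolute)
        else:
            self.links.add(absolute)
        # A hostname ending in ".<base host>" counts as a subdomain.
        host = urlparse(absolute).hostname or ""
        base_host = urlparse(self.base_url).hostname or ""
        if base_host and host.endswith("." + base_host):
            self.subdomains.add(host)


def extract(base_url, html):
    """Parse one page's HTML and return the populated extractor."""
    parser = LinkExtractor(base_url)
    parser.feed(html)
    return parser
```

Feeding each fetched page through `extract` and queueing the resulting links is what makes the crawl recursive.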

Usage

Run the Recursive Web Crawler with the following command:

python main.py -u <URL> -d <DEPTH>

Example

python main.py -u "https://tryhackme.com" -d 2

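The `-u` and `-d` flags can be modeled with `argparse` plus a depth-limited recursion driver. Only the flag names come from the usage above; the function names and the `fetch_links` callback are hypothetical stand-ins for the tool's fetching and parsing code:

```python
import argparse


def build_parser():
    """CLI mirroring the -u/-d flags shown above (names are illustrative)."""
    parser = argparse.ArgumentParser(description="Recursive Web Crawler")
    parser.add_argument("-u", "--url", required=True, help="start URL")
    parser.add_argument("-d", "--depth", type=int, default=1,
                        help="maximum recursion depth")
    return parser


def crawl(url, depth, fetch_links, seen=None):
    """Depth-limited recursive crawl: stop when the depth budget is
    exhausted or a URL has already been visited."""
    seen = set() if seen is None else seen
    if depth < 0 or url in seen:
        return seen
    seen.add(url)
    for link in fetch_links(url):  # fetch_links: URL -> iterable of URLs
        crawl(link, depth - 1, fetch_links, seen)
    return seen
```

Here `fetch_links` stands in for the HTTP request plus link extraction; the `seen` set prevents infinite loops on pages that link back to each other.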
Contributing

If you'd like to contribute to this project, please open an issue or create a pull request.
