# Links-Crawler

Simple web crawler for finding endpoints.

Features:

  1. All crawled URLs are organized by page extension.
  2. All parameters seen for the same URL are merged and displayed together.
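The two features above amount to a two-level grouping of crawled URLs. As a rough sketch of the idea (not the actual `Links_Crawler.py` implementation), URLs can be bucketed by their path extension, with query-parameter names merged per path, using only the standard library:

```python
import os
from collections import defaultdict
from urllib.parse import urlparse, parse_qs

def organize_urls(urls):
    """Group URLs by page extension; within each extension,
    merge the query-parameter names seen for the same path."""
    by_ext = defaultdict(dict)
    for url in urls:
        parsed = urlparse(url)
        # Extension of the path component, e.g. ".php"; "(none)" for bare paths
        ext = os.path.splitext(parsed.path)[1] or "(none)"
        base = f"{parsed.scheme}://{parsed.netloc}{parsed.path}"
        params = by_ext[ext].setdefault(base, set())
        params.update(parse_qs(parsed.query).keys())
    return by_ext

grouped = organize_urls([
    "https://example.com/view.php?id=1",
    "https://example.com/view.php?page=2",
    "https://example.com/static/app.js",
])
# grouped[".php"] maps the view.php URL to the merged set {"id", "page"}
```

This is only a minimal illustration of the organization scheme; the crawler itself is built on the `nyawc` library installed below.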

## Running from terminal

*(screenshot: the crawler running in a terminal)*

## Accessing crawled links

*(screenshots: browsing the organized list of crawled links)*

## Installation

```
pip3 install nyawc
git clone https://github.com/rakeshmane/Links-Crawler.git
cd Links-Crawler
python3 Links_Crawler.py
```
