A lightweight Node.js implementation of a web crawler for exploring and analyzing web pages.

markogra/webCrawlerHttp

webcrawlerhttp

Using the jsdom library for HTML parsing, the crawler starts from a specified base URL, recursively follows links, and counts the occurrences of each visited URL. It distinguishes between relative and absolute URLs and normalizes them for accurate tracking, and any errors encountered during crawling are logged. The project is suitable for basic web scraping tasks and exploration of a given domain.
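The normalization and visit-counting steps described above can be sketched as follows. The function names (normalizeURL, addPageVisit) are illustrative rather than taken from the repo, and the jsdom-based HTML parsing step is omitted so the snippet runs with Node built-ins only:

```javascript
// Reduce a URL to host + path so that variants such as
// "https://example.com/path/" and "http://example.com/path"
// map to the same key in the visit counts.
function normalizeURL(urlString) {
  const url = new URL(urlString);
  let fullPath = `${url.hostname}${url.pathname}`;
  if (fullPath.endsWith('/')) {
    fullPath = fullPath.slice(0, -1);
  }
  return fullPath;
}

// Track how many times each normalized URL has been seen.
function addPageVisit(pages, urlString) {
  const key = normalizeURL(urlString);
  pages[key] = (pages[key] || 0) + 1;
  return pages;
}

const pages = {};
addPageVisit(pages, 'https://example.com/about/');
addPageVisit(pages, 'http://example.com/about');
console.log(pages); // { 'example.com/about': 2 }
```

Keying the counts on the normalized form is what lets the crawler treat protocol and trailing-slash variants of a link as the same page.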

I used Jest to make sure the web crawler works correctly by unit-testing its URL-handling functions.
