Using the jsdom library for HTML parsing, the crawler starts from a specified base URL, recursively follows links, and counts how many times each URL is visited. It distinguishes between relative and absolute URLs and normalizes them so that equivalent addresses are tracked as one. Any errors encountered during crawling are logged. The project is suitable for basic web scraping tasks and for exploring a given domain.
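A minimal sketch of how such a crawler can be structured. The function names (`normalizeURL`, `getURLsFromHTML`, `crawlPage`) and the exact normalization rules are assumptions for illustration, not necessarily this project's implementation; it assumes Node 18+ for the built-in `fetch`:

```js
// Sketch of the crawler's core helpers; names and rules are illustrative assumptions.
const { JSDOM } = require('jsdom');

// Strip the protocol and any trailing slash so equivalent URLs count as one.
function normalizeURL(urlString) {
  const urlObj = new URL(urlString);
  let fullPath = `${urlObj.hostname}${urlObj.pathname}`;
  if (fullPath.endsWith('/')) {
    fullPath = fullPath.slice(0, -1);
  }
  return fullPath;
}

// Parse the HTML with jsdom and collect every <a href>, resolving
// relative URLs against the page's base URL.
function getURLsFromHTML(htmlBody, baseURL) {
  const urls = [];
  const dom = new JSDOM(htmlBody);
  for (const anchor of dom.window.document.querySelectorAll('a')) {
    const href = anchor.getAttribute('href');
    if (!href) continue;
    try {
      // new URL(href, baseURL) handles both relative and absolute hrefs.
      urls.push(new URL(href, baseURL).href);
    } catch (err) {
      console.log(`skipping invalid URL "${href}": ${err.message}`);
    }
  }
  return urls;
}

// Recursively follow links within the starting domain, counting visits per
// normalized URL and logging (rather than throwing on) fetch errors.
async function crawlPage(baseURL, currentURL = baseURL, pages = {}) {
  if (new URL(currentURL).hostname !== new URL(baseURL).hostname) return pages;

  const normalized = normalizeURL(currentURL);
  if (pages[normalized] > 0) {
    pages[normalized]++;
    return pages;
  }
  pages[normalized] = 1;

  try {
    const resp = await fetch(currentURL);
    if (resp.status >= 400) {
      console.log(`error fetching ${currentURL}: status ${resp.status}`);
      return pages;
    }
    // Only parse HTML responses for further links.
    if (!resp.headers.get('content-type')?.includes('text/html')) return pages;
    const html = await resp.text();
    for (const nextURL of getURLsFromHTML(html, baseURL)) {
      pages = await crawlPage(baseURL, nextURL, pages);
    }
  } catch (err) {
    console.log(`error fetching ${currentURL}: ${err.message}`);
  }
  return pages;
}

module.exports = { normalizeURL, getURLsFromHTML, crawlPage };
```

Resolving each `href` through `new URL(href, baseURL)` is what lets the crawler treat relative and absolute links uniformly, while normalization keeps the visit counts accurate across URL variants like `https://example.com/path/` and `http://example.com/path`.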
I used Jest to verify that the crawler works correctly by unit-testing its URL-handling functions.
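A sketch of what such Jest tests might look like, assuming the helper names and normalization behavior from the sketch above (the module path `./crawl` is likewise an assumption):

```js
// Illustrative Jest tests against the assumed URL-handling helpers.
const { normalizeURL, getURLsFromHTML } = require('./crawl');

test('normalizeURL strips protocol and trailing slash', () => {
  expect(normalizeURL('https://example.com/path/')).toBe('example.com/path');
  expect(normalizeURL('http://example.com/path')).toBe('example.com/path');
});

test('getURLsFromHTML resolves relative links against the base URL', () => {
  const html = '<html><body><a href="/about">About</a></body></html>';
  expect(getURLsFromHTML(html, 'https://example.com')).toEqual([
    'https://example.com/about',
  ]);
});
```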