A web crawler and scraper for Rust
Updated Nov 15, 2024 - Rust
Dyer is designed for reliable, flexible and fast web crawling, providing some high-level, comprehensive features without compromising speed.
Spider ported to Python
Rust Web Crawler saving pages on Redis
A small library for building fast and highly customizable web crawlers
A simple trap for web crawlers
LinkCollector is a web crawler that recursively collects links from a given host.
🌊 seaward is a crawler that searches for links or a specified word in a website.
Recursively crawls websites. High performance, with a seed DB and storage into an index. Written in Rust.
A CLI tool for inspecting and analyzing web links.
🕷️ Crawls websites for URLs, and stores them in a textfile.
Multi-threaded Web crawler with support for custom fetching and persisting logic
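"Custom fetching and persisting logic" usually means the crawler core is written against small traits rather than a concrete HTTP client or database. A minimal sketch of that design, with hypothetical trait and type names (none taken from the project above) and an in-memory fetcher standing in for real HTTP:

```rust
use std::collections::HashMap;

// Hypothetical plug-in points: how a page is fetched and how it is stored.
trait Fetcher {
    fn fetch(&self, url: &str) -> Option<String>;
}

trait Persister {
    fn persist(&mut self, url: &str, body: &str);
}

// An in-memory fetcher standing in for real HTTP requests.
struct MapFetcher {
    pages: HashMap<String, String>,
}

impl Fetcher for MapFetcher {
    fn fetch(&self, url: &str) -> Option<String> {
        self.pages.get(url).cloned()
    }
}

// A persister that simply records every page it is given.
struct VecPersister {
    saved: Vec<(String, String)>,
}

impl Persister for VecPersister {
    fn persist(&mut self, url: &str, body: &str) {
        self.saved.push((url.to_string(), body.to_string()));
    }
}

// The crawler core depends only on the traits, so fetching and storage
// strategies can be swapped independently (HTTP vs. cache, file vs. DB).
fn crawl<F: Fetcher, P: Persister>(fetcher: &F, persister: &mut P, urls: &[&str]) {
    for url in urls {
        if let Some(body) = fetcher.fetch(url) {
            persister.persist(url, &body);
        }
    }
}

fn main() {
    let mut pages = HashMap::new();
    pages.insert("http://example.com/".to_string(), "hello".to_string());
    let fetcher = MapFetcher { pages };
    let mut persister = VecPersister { saved: Vec::new() };
    crawl(&fetcher, &mut persister, &["http://example.com/", "http://missing/"]);
    // Only the URL present in the fetcher's map is persisted.
    assert_eq!(persister.saved.len(), 1);
    println!("persisted {} page(s)", persister.saved.len());
}
```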
A simple binary that recursively crawls a web page while searching for a keyword; multiple pages are crawled efficiently and concurrently.
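The recursive, concurrent keyword search described above can be sketched with scoped threads from the standard library: each breadth-first level of URLs is fetched in parallel, matching pages are recorded, and newly discovered links form the next level. This is a sketch under stated assumptions, not any listed project's implementation; the "site" is a hypothetical in-memory map of URL to (body, outgoing links) in place of real HTTP fetching.

```rust
use std::collections::{HashMap, HashSet};
use std::thread;

// Breadth-first crawl over an in-memory site, returning the URLs whose
// body contains `keyword`. Each level is fetched concurrently.
fn crawl_for_keyword(
    pages: &HashMap<&str, (&str, Vec<&str>)>,
    start: &str,
    keyword: &str,
) -> Vec<String> {
    let mut visited: HashSet<String> = HashSet::new();
    let mut hits = Vec::new();
    let mut frontier = vec![start.to_string()];

    while !frontier.is_empty() {
        // Fetch the whole frontier concurrently, one scoped thread per URL.
        let results: Vec<(String, Option<(&str, Vec<&str>)>)> = thread::scope(|s| {
            let handles: Vec<_> = frontier
                .iter()
                .map(|url| {
                    let url = url.clone();
                    s.spawn(move || {
                        let page = pages.get(url.as_str()).cloned();
                        (url, page)
                    })
                })
                .collect();
            handles.into_iter().map(|h| h.join().unwrap()).collect()
        });

        let mut next = Vec::new();
        for (url, page) in results {
            // Skip URLs already processed in an earlier level.
            if !visited.insert(url.clone()) {
                continue;
            }
            if let Some((body, links)) = page {
                if body.contains(keyword) {
                    hits.push(url);
                }
                for link in links {
                    if !visited.contains(link) {
                        next.push(link.to_string());
                    }
                }
            }
        }
        frontier = next;
    }
    hits
}

fn main() {
    let mut pages = HashMap::new();
    pages.insert("/", ("home", vec!["/a", "/b"]));
    pages.insert("/a", ("rust crawler", vec!["/b"]));
    pages.insert("/b", ("nothing here", vec!["/"]));
    let hits = crawl_for_keyword(&pages, "/", "crawler");
    assert_eq!(hits, vec!["/a".to_string()]);
    println!("{:?}", hits);
}
```

A real tool would replace the map lookup with an HTTP request and bound the number of in-flight fetches; the visited set is what keeps the recursion from looping on cyclic links.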