Still under construction!
This project demonstrates web scraping in Rust, using concurrent and parallel processing to extract data from websites efficiently.
Rust's speed and memory safety make it well suited to building fast, reliable scrapers. This project combines asynchronous requests with multi-core data processing to accelerate scraping while making efficient use of system resources.
- Concurrent Web Scraping: Utilizes Rust's asynchronous programming features to perform non-blocking web requests.
- Parallel Processing: Leverages multiple CPU cores to process scraped data in parallel, significantly reducing overall scraping time.
- Error Handling: Implements robust error handling to gracefully manage network and parsing errors.
- Interactive and Iterative Search: Uses an internal database to gather multiple data sources by reference and without overlap.
- Configurable: Allows customization of concurrency level, user agent, and timeout settings.