Scrapoxy is a super proxy aggregator that lets you manage all your proxies in one place, rather than spreading them across multiple scrapers.
It also smartly handles traffic routing to minimize bans and increase success rates.
GO TO SCRAPOXY.IO FOR MORE INFORMATION!
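To make the "one place" idea concrete, here is a minimal sketch of routing a scraper's requests through Scrapoxy with the Python `requests` library. The endpoint (localhost:8888) and the USERNAME/PASSWORD project credentials are assumptions based on a default local setup; adjust them to match your own installation (see scrapoxy.io).

```python
# Minimal sketch: send a request through the Scrapoxy proxy endpoint.
# The endpoint and credentials below are assumptions for a default local setup.
import requests

PROXY = "http://USERNAME:PASSWORD@localhost:8888"

response = requests.get(
    "https://example.com",
    proxies={"http": PROXY, "https": PROXY},
    # Scrapoxy intercepts HTTPS traffic with its own certificate, so either
    # trust its CA file or disable upstream verification while testing.
    verify=False,
)
print(response.status_code)
```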
Scrapoxy supports many datacenter providers like AWS, Azure, or GCP.
It installs a proxy image in each datacenter so proxy instances can be launched quickly. Traffic is routed across these instances to provide many IP addresses.
Scrapoxy handles the startup/shutdown of proxy instances to rotate IP addresses effectively.
Scrapoxy supports many proxy services like Rayobyte, IPRoyal or Zyte.
It connects to these services and uses parameters such as country or OS type to create a diverse pool of proxies.
Scrapoxy supports many 4G proxy farm hardware types, such as Proxidize.
It uses their APIs to handle IP rotation on 4G networks.
Scrapoxy supports lists of HTTP/HTTPS proxies and SOCKS4/SOCKS5 proxies.
It tests their connectivity before aggregating them into the proxy pool.
Scrapoxy only routes traffic to online proxies.
This feature is especially useful with residential proxies, which can sometimes be too slow or inactive. Scrapoxy detects these offline nodes and excludes them from the proxy pool.
Scrapoxy automatically changes IP addresses at regular intervals.
Scrapers can have thousands of IP addresses without managing proxy rotation.
Scrapoxy monitors incoming traffic and automatically scales the number of proxies according to your needs.
It also reduces proxy count to minimize your costs.
Scrapoxy can keep the same IP address for a scraping session, even for browsers.
It includes an HTTP request/response interception mechanism that injects a session cookie, ensuring the IP address stays the same throughout the browser session.
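Below is a minimal sketch of pointing a headless browser at Scrapoxy so that the injected session cookie keeps the same outgoing IP for the whole browsing session. Playwright is used purely as an illustration; the endpoint and credentials are assumptions from a default local setup.

```python
# Minimal sketch: run a browser session through Scrapoxy with Playwright.
# Endpoint and credentials are assumptions; adjust to your installation.
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch(
        proxy={
            "server": "http://localhost:8888",
            "username": "USERNAME",
            "password": "PASSWORD",
        }
    )
    # Scrapoxy re-signs HTTPS traffic, so ignore certificate errors or
    # install its CA certificate in the browser profile.
    page = browser.new_page(ignore_https_errors=True)
    page.goto("https://example.com")
    print(page.title())
    browser.close()
```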
Scrapoxy injects the name of the proxy into the HTTP responses.
When a scraper detects that a ban has occurred, it can notify Scrapoxy to remove the proxy from the pool.
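As a sketch of this feedback loop, the snippet below reads the proxy name that Scrapoxy injects into the response and flags a suspected ban. The header name ("x-scrapoxy-proxyname") and the status-code heuristic are assumptions for illustration; the actual removal call goes through the Scrapoxy API, whose exact endpoint is documented on scrapoxy.io.

```python
# Minimal sketch: detect a ban and identify the proxy to remove.
# Header name and ban heuristic are assumptions; check the docs on scrapoxy.io.
import requests

PROXY = "http://USERNAME:PASSWORD@localhost:8888"

response = requests.get(
    "https://example.com",
    proxies={"http": PROXY, "https": PROXY},
    verify=False,
)

proxy_name = response.headers.get("x-scrapoxy-proxyname")

# Treat common anti-bot status codes as a ban signal (illustrative only).
if response.status_code in (403, 429) and proxy_name:
    # Here the scraper would call the Scrapoxy API to remove `proxy_name`
    # from the pool; see the API reference on scrapoxy.io for the endpoint.
    print(f"Ban detected on proxy {proxy_name}, asking Scrapoxy to remove it")
```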
Scrapoxy intercepts HTTP requests and responses to modify headers, keeping your scraping stack consistent. It can add session cookies or specific headers such as User-Agent.
Scrapoxy measures incoming and outgoing traffic to provide an overview of your scraping session.
It tracks metrics such as the number of requests, active proxy count, requests per proxy, and more.
Scrapoxy displays the geographic coverage of your proxies, giving you a clear view of their global distribution.
Scrapoxy is suitable for both beginners and experts.
It can be started in seconds using Docker, or be deployed in a complex, distributed environment with Kubernetes.
And of course, Scrapoxy remains free and open source, under the MIT license.
I simply ask you to give me credit if you redistribute or use it in a project.
A warm thank-you message is appreciated as well.
More information on scrapoxy.io.
Want to contribute? Check out the guide!
Scrapoxy is an open-source project. It is free for users, but it does come with costs for me.
I invest significant time and resources into maintaining and improving this project, covering expenses for hosting, promotion, and more.
If you appreciate the value Scrapoxy provides and wish to support its continued development, discuss new features, access the roadmap, or receive professional support, please consider becoming a sponsor!
Your support would greatly contribute to the project's sustainability and growth.
I would like to thank all the contributors to the project and the open-source community for their support.