Add extra user agents #152
Comments
Maybe you should also try to limit the number of requests you make per second or minute. Since your IP is banned now, no fake user-agent strings will help you with that. If you are using the Scrapy framework, for example, there is a built-in option for this.
Scrapy also has another related option worth looking at. If instead you use your own script without Scrapy, consider adding sleeps to your crawling process.
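The "add sleeps to your crawling process" advice can be sketched as a small throttle helper. `Throttle` is a hypothetical class, not part of any library mentioned here; it simply enforces a minimum delay between consecutive requests, similar in spirit to Scrapy's download-delay setting:

```python
import time

class Throttle:
    """Minimal rate limiter: ensures consecutive calls to wait()
    are at least `delay` seconds apart."""

    def __init__(self, delay):
        self.delay = delay
        self._last = 0.0  # timestamp of the previous call

    def wait(self):
        elapsed = time.monotonic() - self._last
        if elapsed < self.delay:
            time.sleep(self.delay - elapsed)  # sleep off the remainder
        self._last = time.monotonic()

throttle = Throttle(delay=0.1)
start = time.monotonic()
for _ in range(3):
    throttle.wait()  # blocks as needed before each request
    # response = requests.get(url)  # your actual request would go here
elapsed = time.monotonic() - start  # at least ~0.2 s for the two throttled calls
```

A fixed delay like this is the simplest option; randomizing the delay slightly makes the traffic pattern look less mechanical.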
Also, are you using Amazon AWS?
I use the googlesearch Python library, which is based on requests and BeautifulSoup, and I have also used time.sleep. I am trying to prevent IP banning by using fake user agents and a proxy. Thanks for the help.
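Rotating user agents with plain requests can look like the sketch below. The user-agent strings and the proxy address are illustrative placeholders (in practice you might source the strings from the fake-useragent library or a published list), and `random_headers` is a hypothetical helper:

```python
import random

# Small hand-maintained pool of user-agent strings (illustrative values).
USER_AGENTS = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36",
    "Mozilla/5.0 (iPhone; CPU iPhone OS 16_0 like Mac OS X) AppleWebKit/605.1.15",
    "Mozilla/5.0 (Linux; Android 13; Pixel 7) AppleWebKit/537.36",
]

def random_headers():
    """Build request headers with a randomly chosen user agent."""
    return {"User-Agent": random.choice(USER_AGENTS)}

headers = random_headers()
# A request would then pass both the rotated headers and a proxy, e.g.:
# response = requests.get(url, headers=headers,
#                         proxies={"https": "http://proxy:8080"})  # placeholder proxy
```

Note that, as the comment above points out, rotating user agents alone does not help once the IP itself is banned; the proxy and the request rate matter more.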
Hi.
My crawling process includes many requests, and despite using a fake user agent, my IP is still blocked.
Please add more fake user agents, such as user agents for iPhone and Android devices.
For example, look at the fake user agents on this site:
Please also add the ability to delete a fake user agent from the list of fake user agents, so that it is never used again and blocking can be avoided.
Thanks.
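The deletion feature requested above could be sketched as a small pool that supports blacklisting. `UserAgentPool` is a hypothetical class written for this issue, not an existing API of the library:

```python
import random

class UserAgentPool:
    """Hypothetical pool of user-agent strings that supports removing
    (blacklisting) an entry so it is never served again."""

    def __init__(self, agents):
        self._agents = list(agents)

    def get(self):
        """Return a random user agent still in the pool."""
        if not self._agents:
            raise RuntimeError("no user agents left in the pool")
        return random.choice(self._agents)

    def remove(self, agent):
        """Drop an agent, e.g. after requests with it got blocked."""
        if agent in self._agents:
            self._agents.remove(agent)

pool = UserAgentPool(["ua-desktop", "ua-iphone", "ua-android"])
pool.remove("ua-iphone")   # this string will never be returned again
chosen = pool.get()        # only "ua-desktop" or "ua-android" remain
```

Keeping the blacklist on the caller's side like this would let users react to blocks without the library having to know why an agent went stale.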