Add a delay before search_all_tweets calls #1688
I want to +1 this. I get 429s with:

but not with:
I'm aware of this issue, but I haven't determined the best way to resolve it yet.
That makes sense. Maybe just some documentation, then?
Upon consideration, it might not be best to handle this within Tweepy. Some users do processing within the loop that takes a significant amount of time, or even longer than a second, so a blanket 1-second sleep wouldn't be ideal. The alternative would be saving the timestamp of the last request and sleeping until a second has passed since it, but that could end up being almost exactly a second, and there would need to be considerations for adding some jitter. I think you're right, and the simplest way forward right now would be to document it and allow the user to handle it themselves.

P.S. @jdfoote I saw your video 👍 Some notes:
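The timestamp-based alternative described above can be sketched as a small user-side helper. This is a minimal illustration, not Tweepy code; the class name `MinIntervalThrottle` and its parameters are hypothetical, and it only sleeps for the remainder of the interval (plus jitter), so loop bodies that already take longer than a second incur no extra delay:

```python
import random
import time


class MinIntervalThrottle:
    """Keep at least `interval` seconds (plus random jitter)
    between consecutive calls to wait()."""

    def __init__(self, interval=1.0, max_jitter=0.1):
        self.interval = interval
        self.max_jitter = max_jitter
        self._last_request = None  # monotonic timestamp of the last call

    def wait(self):
        now = time.monotonic()
        if self._last_request is not None:
            # Sleep only for whatever part of the interval hasn't
            # already elapsed while the caller was doing other work.
            target = (self._last_request + self.interval
                      + random.uniform(0, self.max_jitter))
            if now < target:
                time.sleep(target - now)
        self._last_request = time.monotonic()


throttle = MinIntervalThrottle(interval=1.0)
# In a pagination loop, a user could call throttle.wait() before each
# request; it is effectively a no-op when processing already took > 1 s.
```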
I've added a FAQ section on this for now.
@Harmon758 This rate limit should be present in at least the
search_all_tweets has a 1 call per second limit. Currently, tweepy quickly makes a few calls, receives a 429, and then waits for nearly 15 minutes. By simply adding a sleep of 1 second per call, you can make 300 calls in those 15 minutes.

It seems like the place to put this may be in the pagination.py file, as the delay is only a problem when making multiple calls? I'd be happy to make a one-line PR, but I'm not sure where the devs would want to put the sleep.
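The "one line of sleep per paginated call" idea could be done entirely on the user's side with a small generator wrapper, without any change to Tweepy. This is a sketch under the assumption that the caller iterates pages via `tweepy.Paginator`; the helper name `throttled_pages` is made up for illustration:

```python
import time


def throttled_pages(pages, delay=1.0):
    """Yield pages from any iterable of responses, sleeping `delay`
    seconds after each one so the next request respects a
    1-request-per-second rate limit."""
    for page in pages:
        yield page
        # The next request is only made when the consumer asks for the
        # next page, so this sleep lands between consecutive requests.
        time.sleep(delay)


# Hypothetical usage with Tweepy's Paginator (requires credentials):
# client = tweepy.Client(bearer_token="...")
# paginator = tweepy.Paginator(client.search_all_tweets, query="...")
# for response in throttled_pages(paginator, delay=1.0):
#     ...  # process response.data
```

Because `Paginator` fetches lazily on each iteration step, sleeping after yielding each page inserts the delay before the next request is issued.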