
respect rate limits during pagination by sleeping #167

Merged: 9 commits merged into slack-ruby:master on Sep 19, 2017

Conversation

@jmanian (Collaborator) commented Sep 18, 2017

resolves #166

Here's an initial attempt at respecting rate limits during pagination. No tests here yet.

@dblock (Collaborator) commented Sep 18, 2017

This is a good start; it needs some tests, a CHANGELOG entry, and such.

Maybe we can also hint a sleep value in the options passed into the function and avoid TooManyRequestsError in most cases?

@jmanian (Collaborator, Author) commented Sep 18, 2017

That's a good idea about hinting a sleep value, though we may find it hard to tune it to avoid the errors. The reason I say this is that I asked some Slack folks about their rate limiting a while back, and they had this to say:

> Our rate limiting is pretty brute. At the same time, we don't consider getting rate limited a ding or demerit. Some devs might just want to burn through as fast as possible until hitting a rate limit, then pausing activity before resuming, theoretically, after following the Retry-after. (source)

> The specific rate limits for the web API aren't actually published and can vary somewhat method to method, with some burst behavior allowed here and there. By virtue of being unpublished, the best recommendation is to make requests until you hit a limit, then backoff until the time the retry-after indicates. (source)

Given this context, I think the best solution is to allow the individual dev to pass in their own default sleep time, so that they can tune to their use case, but not have any default sleep behavior beyond the rate limiting.
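The approach described above can be sketched roughly as follows. This is a minimal illustration, not the gem's actual code: the `each_page` helper, the `client.call` interface, and the `RateLimitedError` class are all assumed names, standing in for the real cursor and `Slack::Web::Api::Errors::TooManyRequestsError`.

```ruby
# Hypothetical error carrying Slack's Retry-After value (illustrative only).
class RateLimitedError < StandardError
  attr_reader :retry_after
  def initialize(retry_after)
    super("429 Too Many Requests")
    @retry_after = retry_after
  end
end

# Walk every page: honor Retry-After on a rate-limit error, and optionally
# sleep a caller-tuned `sleep_interval` between successful requests.
def each_page(client, sleep_interval: nil)
  cursor = nil
  loop do
    begin
      page = client.call(cursor: cursor)
    rescue RateLimitedError => e
      sleep(e.retry_after) # back off exactly as long as Slack asked
      retry
    end
    yield page[:items]
    cursor = page[:next_cursor]
    break if cursor.nil? || cursor.empty?
    sleep(sleep_interval) if sleep_interval # optional proactive pause
  end
end
```

With `sleep_interval` left at `nil` there is no pausing beyond what the rate limiter forces, matching the "no default sleep behavior" position above.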

@dblock (Collaborator) commented Sep 18, 2017

I agree, let's default the hint to zero.

I wonder whether retry behavior could be optionally built into everything? As a client I just don't want to deal with it :) But that can be a later PR/feature - feel free to open an issue for that too.

@jmanian (Collaborator, Author) commented Sep 18, 2017

I think this is what you had in mind for hinting a sleep value. I went with `pause` for the parameter so as not to overload `sleep` with an attribute on top of the method, but I'm happy to change it.

@dblock (Collaborator) left a review

Getting much closer, hang in there. Something should also be added to the README around pagination, at least to document the new `pause` option(s).

CHANGELOG.md Outdated
@@ -1,5 +1,6 @@
### 0.9.2 (Next)

* [#167](https://github.com/slack-ruby/slack-ruby-client/pull/167): Respect rate limits during pagination by sleeping. Also add optional `pause` parameter in order to proactively sleep between each paginated request - [@jmanian](https://github.com/jmanian).

Maybe simply "Added support for pausing between paginated requests that cause Slack rate limiting"?

@@ -7,23 +7,31 @@ class Cursor

attr_reader :client
attr_reader :verb
attr_reader :pause

I feel a bit uncomfortable calling this `pause` because it's unclear whether that's a verb or a noun; maybe `sleep_interval`? I don't feel strongly about this one, though.

response = client.send(verb, query)
rescue Slack::Web::Api::Errors::TooManyRequestsError => e
sleep(e.retry_after.seconds)
next

I also think we should have a max retry count, with some default set fairly high. If Slack blacklists you somehow, you'll have code that never ends, and it will be impossible to debug.

I also think there should be a log line here in DEBUG mode.

jmanian (Author) replied:

Do you think the count should reset after a successful request?

dblock replied:

I think so.
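The retry-count idea discussed in this thread could look roughly like the sketch below. Everything here is an assumption for illustration: the `fetch_with_retries` name, the `max_retries` default, the `RateLimited` error class, and the `warn` call standing in for a proper DEBUG-level logger line.

```ruby
# Hypothetical rate-limit error with a Retry-After value (illustrative).
class RateLimited < StandardError
  attr_reader :retry_after
  def initialize(retry_after = 0)
    super("429 Too Many Requests")
    @retry_after = retry_after
  end
end

class TooManyRetriesError < StandardError; end

# Fetch pages until the block returns nil, giving up after `max_retries`
# consecutive rate-limit errors; the counter resets after each success.
def fetch_with_retries(max_retries: 100)
  retry_count = 0
  results = []
  loop do
    begin
      page = yield # one paginated request
    rescue RateLimited => e
      retry_count += 1
      raise TooManyRetriesError if retry_count > max_retries
      warn "rate limited, sleeping #{e.retry_after}s" # stand-in debug log
      sleep(e.retry_after)
      next
    end
    retry_count = 0 # reset after a successful request
    break if page.nil?
    results << page
  end
  results
end
```

Resetting the counter after a success means only an unbroken run of rate-limit errors can exhaust the budget, which is what distinguishes a temporarily throttled client from one that is effectively blacklisted.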

@dblock (Collaborator) left a review

Last problem with params, then I'm merging. Thanks!

attr_reader :params

def initialize(client, verb, params = {})
@client = client
@verb = verb
@sleep_interval = params.delete(:sleep_interval)

This modifies the incoming params, i.e. it has a side effect; you don't want to do that, so just use `params[:sleep_interval]`.

retry_count += 1
sleep(e.retry_after.seconds)
next
end
yield response

You can move this inside the rescue block and avoid next, right?


Also did I tell you that I was a nitpicky code reviewer? Hang on tight :)

jmanian (Author) replied:

I understand, I'm the same way 😎.

Which part are you suggesting should go in the rescue block? I started out with everything inside the begin block (see first commit b75bfcb), which avoids the `next`. Is this what you mean?

I changed it (8542d00) because I thought it was bad practice to have more lines than necessary inside the begin block, if only for clarity (so that it's clear which line is expected to raise the error). But in this case it's obvious where the error comes from, so I guess it's fine either way.

dblock replied:

Exactly.

dblock added:

Actually, I now think yours is better. Someone could do something inside the yield that raises that exception, and we'd end up with bad side effects, catching an error we shouldn't.
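The yield-placement concern can be demonstrated with a toy example (not the gem's code; the method and error names are made up for illustration): if `yield` sits inside the begin block, an exception raised by the caller's block is swallowed by the same rescue that was meant only for the request.

```ruby
# Toy error standing in for a rate-limit exception (illustrative only).
Limited = Class.new(StandardError)

# Yield inside begin: the caller's exception is caught by our rescue too.
def yield_inside_begin
  begin
    response = :page # pretend this line is the API request
    yield response   # the caller's error also lands in the rescue below
  rescue Limited
    :retried         # wrongly treats the caller's error as a rate limit
  end
end

# Yield outside begin: only the request is guarded, so the caller's
# exception propagates normally instead of triggering a retry.
def yield_outside_begin
  response =
    begin
      :page # pretend this line is the API request
    rescue Limited
      :retried
    end
  yield response
end
```

Keeping `yield` outside the begin block narrows the rescue to the request itself, which is the "yours is better" point above.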

@dblock dblock merged commit 201c751 into slack-ruby:master Sep 19, 2017
@dblock (Collaborator) commented Sep 19, 2017

Merged. Thanks!

@dblock (Collaborator) commented Sep 19, 2017

I made a small change to dup the params and use delete in cdb697b.
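The difference between the two approaches can be shown in a few lines. This is a sketch, not the actual commit: the method names are hypothetical, and `:sleep_interval` is just the option under discussion.

```ruby
# Destructive: Hash#delete removes the key from the caller's own hash.
def init_mutating(params)
  params.delete(:sleep_interval) # side effect: caller's hash loses the key
end

# Safe: duplicate first, then delete from the copy, so the option is
# stripped from what gets forwarded while the caller's hash is untouched.
def init_safe(params)
  params = params.dup
  interval = params.delete(:sleep_interval)
  [interval, params]
end
```

Note that `dup` is a shallow copy, which is enough here because only top-level keys are removed.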

@jmanian (Collaborator, Author) commented Sep 19, 2017

I was debating just leaving those parameters in, because I think all the API methods ignore extraneous parameters, but it felt more correct to remove them.

@dblock (Collaborator) commented Sep 19, 2017

I opened #168 - interested in giving that one a try?

@jmanian jmanian deleted the pagination_rate_limiting branch September 19, 2017 19:37
@icybin commented Sep 20, 2017

@dblock Is this problem related to the login/authentication stage? I noticed that when I deployed the bot (ruby-slack-bot) on my laptop, the bot could connect to the Slack API (well, my laptop's network is slow).

However, on an AWS EC2 instance where the network is extremely fast, the first two requests to Slack happen within one second (I can see this when I turn on DEBUG mode), and then Slack simply blocks the third request. The bot is never able to authenticate to the service.

@dblock (Collaborator) commented Sep 20, 2017

@kyanh-huynh #168 would mitigate that as well.
