
Rate-Limiting is not handled gracefully #52

Closed
hmmwhatsthisdo opened this issue Feb 15, 2018 · 2 comments · Fixed by #53

Comments

@hmmwhatsthisdo
Contributor

Slack's API has a rate-limit of one message per second. PowerShell/PSSlack overruns even the (unspecified) burst-limit rather quickly, and doesn't make any attempt at waiting/re-trying the request.

Even more concerning is that Slack appears to threaten revoking API tokens if an application continues to send messages while being rate-limited.

It would likely be best to augment Send-SlackApi to be mindful of the burst/average rate-limit, and gracefully recover from HTTP 429 API responses.
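
A minimal sketch of what 429 handling inside Send-SlackApi could look like. This is an assumption-laden illustration, not the module's current code: $Uri, $Body, and $MaxRetries are placeholder names, and the error shape assumes Windows PowerShell's WebException from Invoke-RestMethod.

```powershell
# Hedged sketch: retry a Slack Web API call when the service returns HTTP 429.
$MaxRetries = 3
for ($Attempt = 0; $Attempt -le $MaxRetries; $Attempt++) {
    try {
        $Response = Invoke-RestMethod -Uri $Uri -Method Post -Body $Body
        break
    }
    catch {
        $StatusCode = $_.Exception.Response.StatusCode.value__
        if ($StatusCode -eq 429 -and $Attempt -lt $MaxRetries) {
            # Slack includes a Retry-After header (in seconds) on 429 responses.
            $RetryAfter = $_.Exception.Response.Headers['Retry-After']
            if (-not $RetryAfter) { $RetryAfter = 1 }
            Start-Sleep -Seconds ([int]$RetryAfter)
        }
        else {
            throw
        }
    }
}
```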

@RamblingCookieMonster
Owner

RamblingCookieMonster commented Feb 15, 2018

Hiyo!

Good catch! Completely agreed!

There might be some complexity, though. A few initial thoughts:

  • Rate limiting might need to take place at a higher level, although presumably we can write a function to do this for us. For example, unless we store persistent data on what is happening in the module, logic like this would bypass any limits in Send-SlackApi. Doing it in Send-SlackMessage or other higher-level functions would work... but again, it could be bypassed by a user-level function calling Send-SlackMessage quickly (I certainly do this, oops!). A module-scoped throttle along these lines is sketched after this list.
  • A generic pipeline rate-limiter that we could instruct users to use in cases of high volume might be worth considering. Maybe.
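
For example, a module-scoped throttle could make the limit stick no matter which higher-level function ends up calling Send-SlackApi. A minimal sketch, assuming a script-scoped hashtable and a helper name (Wait-SlackRateLimit) that don't exist in PSSlack today:

```powershell
# Hedged sketch of a module-scoped throttle; names are illustrative.
$Script:LastSlackCall = @{}

function Wait-SlackRateLimit {
    param(
        [Parameter(Mandatory)][string]$Token,
        [double]$MinIntervalSeconds = 1
    )
    if ($Script:LastSlackCall.ContainsKey($Token)) {
        $Elapsed = ((Get-Date) - $Script:LastSlackCall[$Token]).TotalSeconds
        if ($Elapsed -lt $MinIntervalSeconds) {
            # Sleep just long enough to keep calls at least $MinIntervalSeconds apart.
            Start-Sleep -Milliseconds ([int](($MinIntervalSeconds - $Elapsed) * 1000))
        }
    }
    $Script:LastSlackCall[$Token] = Get-Date
}
```

Because the state lives at script (module) scope, every call that funnels through Send-SlackApi shares the same throttle, regardless of which higher-level function invoked it.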

Anyhow! Totally open to ideas and PRs :D This would definitely be handy.

Cheers!

@hmmwhatsthisdo
Contributor Author

hmmwhatsthisdo commented Feb 15, 2018

So, a little bit of empirical testing suggests that Slack's rate-limiting uses an algorithm similar to the leaky bucket algorithm, with a bucket size of 25 requests and a leak rate of 1 request per second. (I can reliably get around 30 requests through over the course of 5 seconds, which lines up with a 25-request burst plus roughly 1 leaked request per second.)

My thoughts on how to handle rate-limiting on the client side are as follows:

  • Implement a leaky bucket of similar dimensions inside the module. This would likely entail a dictionary mapping strings (API tokens/URLs) to integers (current bucket volume) and DateTime objects (last drip time). If the bucket is too full, have Send-SlackApi block until 1 second after the last "drip" time.
  • Wrap Invoke-RestMethod in a try/catch to handle HTTP 429 responses from the Slack API. If we receive a 429, block for some amount of time (500ms/1s/etc.) and try again.
  • Some combination of 1 and 2 (a rough sketch of that combination is below).
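
A rough sketch of option 3, combining a per-token leaky bucket (capacity 25, leak rate 1/second, per the numbers above) with the 429 retry as a backstop. Wait-SlackBucket and $Script:SlackBuckets are hypothetical names for illustration only, not existing PSSlack functions:

```powershell
# Hedged sketch of a per-token leaky bucket consulted before each API call.
$Script:SlackBuckets = @{}

function Wait-SlackBucket {
    param(
        [Parameter(Mandatory)][string]$Token,
        [int]$Capacity = 25,
        [double]$LeakPerSecond = 1
    )
    if (-not $Script:SlackBuckets.ContainsKey($Token)) {
        $Script:SlackBuckets[$Token] = @{ Volume = 0; LastDrip = Get-Date }
    }
    $Bucket = $Script:SlackBuckets[$Token]

    # Drain the bucket by however many whole requests have leaked since the last drip.
    $Leaked = [math]::Floor(((Get-Date) - $Bucket.LastDrip).TotalSeconds * $LeakPerSecond)
    if ($Leaked -gt 0) {
        $Bucket.Volume   = [math]::Max(0, $Bucket.Volume - $Leaked)
        $Bucket.LastDrip = Get-Date
    }

    # If the bucket is full, block until roughly one more request has leaked out.
    if ($Bucket.Volume -ge $Capacity) {
        Start-Sleep -Seconds ([int][math]::Ceiling(1 / $LeakPerSecond))
        $Bucket.Volume = [math]::Max(0, $Bucket.Volume - 1)
    }

    $Bucket.Volume++
}
```

Send-SlackApi would call Wait-SlackBucket with the token (or URL) before Invoke-RestMethod, and keep the try/catch around the request so any stray 429 still gets retried.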
