Rate Limiting #61

Open
grantcarthew opened this issue Apr 27, 2017 · 0 comments

Description

This issue contains my initial thoughts on implementing a distributed job process rate limiting feature within rethinkdb-job-queue. There may be errors in my thought process; however, rate limiting has been on my mind recently and I wanted to get my thoughts down.

References

Is this really needed?

A distributed rate limiting feature does not need to be built into the rethinkdb-job-queue project. It can be achieved with a parent/child queue relationship, as described in one of the comments on Kue issue 441 (ref 4 above).
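As a rough sketch of that parent/child idea (requiring no changes to rethinkdb-job-queue itself), a single-concurrency parent queue could release work to a child queue at a fixed pace. The queue names, the `doTheWork` function, and the 10-per-second pacing below are made up for illustration; the basic Queue, createJob, addJob, and process calls are used as in the project documentation.

```js
const Queue = require('rethinkdb-job-queue')

// Hypothetical queue names; connection options trimmed for brevity.
const parentQ = new Queue({ db: 'JobQueue' }, { name: 'RateLimitParent', concurrency: 1 })
const childQ = new Queue({ db: 'JobQueue' }, { name: 'RateLimitChild' })

// The single-concurrency parent releases jobs to the child queue at roughly 10 per second.
parentQ.process((job, next) => {
  const childJob = childQ.createJob()
  childJob.payload = job.payload
  childQ.addJob(childJob)
    .then(() => setTimeout(() => next(null, 'released'), 100))
    .catch(next)
})

// The child queue does the real work at whatever concurrency its workers allow.
childQ.process((job, next) => {
  doTheWork(job.payload) // hypothetical work function returning a Promise
    .then(result => next(null, result))
    .catch(next)
})
```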

With this in mind, adding this feature could be considered a move away from the KISS principle, adding complexity. However, if the feature proved popular, the added complexity would be worth it.

Implementation Thoughts

A limiting algorithm could be implemented in a bursty or non-bursty way. Thanks to the Queue concurrency option, bursting can already be limited on the worker node. Because of this Queue feature, and with additional API options like those in refs 5 and 6, a bursty algorithm could be smoothed out. The token bucket algorithm (ref 1) would be ideal in this case.
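For reference, here is a minimal in-process sketch of the token bucket algorithm (ref 1). It is not tied to the queue or to RethinkDB, and the capacity and refill numbers are arbitrary; it just shows the bursty-but-bounded behaviour being discussed.

```js
// Minimal in-process token bucket (ref 1). Numbers are arbitrary.
class TokenBucket {
  constructor (capacity, refillPerSecond) {
    this.capacity = capacity
    this.tokens = capacity
    this.refillPerSecond = refillPerSecond
    this.lastRefill = Date.now()
  }

  refill () {
    const elapsedSeconds = (Date.now() - this.lastRefill) / 1000
    this.tokens = Math.min(this.capacity, this.tokens + elapsedSeconds * this.refillPerSecond)
    this.lastRefill = Date.now()
  }

  // Try to take `count` tokens; returns true if the work may proceed now.
  take (count = 1) {
    this.refill()
    if (this.tokens >= count) {
      this.tokens -= count
      return true
    }
    return false
  }
}

const bucket = new TokenBucket(100, 10) // burst up to 100, refill 10 tokens per second
if (bucket.take()) {
  // process one job
}
```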

The token bucket itself could be stored in the State Document as an extra field or group of fields and options.

Global rate limiting could be enabled in a single location: the Master Queue. When the Master Queue is instantiated it would update the State Document with the limiting options and the initial token bucket size.
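A sketch of what that might look like, using rethinkdbdash directly. The table name, the State Document id, and every field name under `rateLimit` are hypothetical placeholders, not existing rethinkdb-job-queue fields.

```js
const r = require('rethinkdbdash')({ db: 'JobQueue' })

const stateDocId = 'state-document-id' // placeholder for whatever id the queue uses

// Called by the Master Queue during instantiation to enable global rate limiting.
function enableGlobalRateLimit (options) {
  return r.table('rjqJobList').get(stateDocId).update({
    rateLimit: {
      enabled: true,
      bucketSize: options.bucketSize,       // maximum tokens (burst size)
      tokens: options.bucketSize,           // current tokens, starts full
      refillAmount: options.refillAmount,   // tokens added back per interval
      refillIntervalMs: options.refillIntervalMs
    }
  }).run()
}

// Example: allow roughly 500 jobs per minute across the whole queue.
enableGlobalRateLimit({ bucketSize: 500, refillAmount: 500, refillIntervalMs: 60000 })
```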

When a worker node is instantiated, part of its initialization could be to check the State Document to see if global rate limiting is enabled. If it is, the worker node decrements the token bucket by its concurrency value or by the remaining tokens, whichever is smaller.
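The decrement needs to be atomic so that concurrent workers cannot over-draw the bucket. Here is a sketch that computes min(concurrency, remaining) inside a single ReQL update, using the same hypothetical table, document id, and field names as above:

```js
// Returns the number of tokens actually granted (0 if the bucket is empty).
function takeTokens (concurrency) {
  return r.table('rjqJobList').get(stateDocId).update(doc => {
    const remaining = doc('rateLimit')('tokens')
    const grant = r.branch(remaining.gt(concurrency), concurrency, remaining)
    return { rateLimit: { tokens: remaining.sub(grant) } }
  }, { returnChanges: true }).run().then(result => {
    if (result.changes.length < 1) return 0 // bucket was already empty, nothing changed
    const before = result.changes[0].old_val.rateLimit.tokens
    const after = result.changes[0].new_val.rateLimit.tokens
    return before - after // how many jobs this worker may start right now
  })
}

// Example: a worker with concurrency 10 asks for up to 10 tokens.
takeTokens(10).then(granted => console.log('may start', granted, 'jobs'))
```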

The token bucket could be refilled by the Master Queue on an interval. Using multiple Master Queue objects would provide fault tolerance here.
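A sketch of the refill, again with the hypothetical fields from above. Because the cap is applied inside the update itself, running more than one Master Queue is safe: an extra refill simply tops the bucket back up to `bucketSize` rather than overflowing it.

```js
// Started by the Master Queue; adds refillAmount tokens each interval, capped at bucketSize.
function startRefill (refillIntervalMs) {
  return setInterval(() => {
    r.table('rjqJobList').get(stateDocId).update(doc => {
      const limit = doc('rateLimit')
      const topped = limit('tokens').add(limit('refillAmount'))
      return {
        rateLimit: {
          tokens: r.branch(topped.gt(limit('bucketSize')), limit('bucketSize'), topped)
        }
      }
    }).run().catch(err => console.error('token bucket refill failed', err))
  }, refillIntervalMs)
}

const timer = startRefill(60000) // clearInterval(timer) when the Master Queue shuts down
```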

The APIs used by the Node modules in refs 5 and 6 above could be used as a starting point for the rate limiting functions and options. The parent/child token bucket options in ref 6 are a good example here.
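Purely as a discussion starter, the user-facing options might end up looking something like the sketch below, loosely modelled on the parent/child token bucket style of ref 6. None of the `rateLimit` option names exist in rethinkdb-job-queue today.

```js
const Queue = require('rethinkdb-job-queue')

// Hypothetical options shape; only name and concurrency are real Queue options today.
const q = new Queue({ db: 'JobQueue' }, {
  name: 'SendEmail',
  concurrency: 10,
  rateLimit: {
    bucketSize: 500,          // parent bucket: at most 500 jobs in any burst
    refillAmount: 500,
    refillIntervalMs: 60000,  // roughly 500 jobs per minute, queue-wide
    child: {                  // optional child bucket to smooth bursts further
      bucketSize: 20,
      refillAmount: 20,
      refillIntervalMs: 1000
    }
  }
})
```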

Edits

I may edit this comment to add references or make changes. I will list edits below.

  • No edits yet!