
Leaky bucket (feature request) #159

Open
vpctorr opened this issue Jul 27, 2023 · 6 comments

Comments

vpctorr commented Jul 27, 2023

Hey there, love the library, it's incredibly useful!

By any chance, would you consider adding some kind of "leaky bucket" rate limiter?

Essentially, it would be very close to the current rateLimit decorator, except that when the limit is exceeded, instead of throwing or calling exceedHandler, it would simply wait for the remaining time before calling the decorated method.

Please let me know if that could be worked out, thanks!
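
For context, this is roughly how I'm using rateLimit today (a sketch only; the exceedHandler body is illustrative):

import { rateLimit } from 'utils-decorators';

class TwitterClient {
  // current behaviour: the 11th call within the same second is not delayed,
  // it goes to exceedHandler instead
  @rateLimit({
    timeSpanMs: 1000,
    allowedCalls: 10,
    exceedHandler: () => {
      throw new Error('rate limit exceeded');
    },
  })
  getTwitterData(): Promise<Response> {
    return fetch('https://api.twitter.com/tweets/20');
  }
}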

vlio20 (Owner) commented Jul 27, 2023

Hi, isn't this exactly what throttleAsync is for?

vpctorr (Author) commented Jul 27, 2023

The use case is rather to limit calls to a remote API which allows for, say, 10 calls per second.

With throttleAsync there is no way to set the desired throughput: setting the limit to 10 only caps concurrency, not rate, so as soon as the calls complete in under a second the API starts rejecting requests. For example, if each call resolves in 100 ms, a concurrency limit of 10 sustains roughly 100 calls per second.

vlio20 (Owner) commented Jul 27, 2023

Maybe I am missing something. For a given limit, the throttleAsync decorator invokes the decorated method at most the allowed number of times concurrently. Once one of the promises resolves, the next call in the queue is invoked, so there basically won't be any rejections. Once the number of in-flight calls reaches the limit, new calls are queued.

https://vlio20.github.io/utils-decorators/#throttleAsync
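
For example (usage sketch only; the URL and limit are illustrative):

import { throttleAsync } from 'utils-decorators';

class DataService {
  // at most 2 invocations run concurrently; additional calls are queued
  // and start as soon as one of the in-flight promises settles
  @throttleAsync(2)
  getData(id: string): Promise<Response> {
    return fetch(`https://example.com/data/${id}`);
  }
}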

vpctorr (Author) commented Jul 27, 2023

Apologies if that wasn't clear 😅 Here's some pseudocode which might help a bit:

// an API endpoint that allows 10 calls per second;
// each call resolves in just a few milliseconds
async getTwitterData(): Promise<Response> {
  return fetch("https://api.twitter.com/tweets/20");
}

// this ensures that at most 10 calls per second are made, so every call should return 200 OK
@leakyBucket({
  timeSpanMs: 1000,
  allowedCalls: 10,
})
getTwitterData(): Promise<Response> {}

// this does not prevent more than 10 calls per second,
// so excess calls will be rejected by the API with 429 Too Many Requests
@throttleAsync(10)
getTwitterData(): Promise<Response> {}
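
In case it helps, here's a rough sketch of how leakyBucket could work internally, written as a plain wrapper function (purely hypothetical, not the library's implementation; a real decorator would wrap the method descriptor like the existing decorators do):

// hypothetical sketch: keep a sliding window of call timestamps;
// when the window is full, wait until the oldest call ages out
// instead of throwing
function leakyBucketify<A extends unknown[], R>(
  fn: (...args: A) => Promise<R>,
  { timeSpanMs, allowedCalls }: { timeSpanMs: number; allowedCalls: number },
): (...args: A) => Promise<R> {
  const timestamps: number[] = [];
  return async (...args: A): Promise<R> => {
    for (;;) {
      const now = Date.now();
      // drop timestamps that have left the window
      while (timestamps.length > 0 && timestamps[0] <= now - timeSpanMs) {
        timestamps.shift();
      }
      if (timestamps.length < allowedCalls) {
        timestamps.push(now);
        return fn(...args);
      }
      // window full: wait for the remaining time, then re-check
      const waitMs = timestamps[0] + timeSpanMs - now;
      await new Promise((resolve) => setTimeout(resolve, waitMs));
    }
  };
}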

vlio20 (Owner) commented Jul 27, 2023

Good point. throttleAsync prevents the number of concurrent calls from exceeding the specified limit, but it totally ignores the time perspective.

Great use-case. I would love to add it to the library.

vlio20 (Owner) commented Aug 1, 2023

@vpctorr would you like to submit a PR?
