
proposal: x/time/rate: add tokens apart of time #53204

Open

clwluvw opened this issue Jun 2, 2022 · 13 comments

clwluvw commented Jun 2, 2022

What did you do?

I'm trying to add tokens to the bucket independently of time. This would help with distributed rate limiting: when a token is consumed in one process, another process could deduct it from its own limiter as well.

What did you expect to see?

There could be a function called AddTokens that lets you adjust the bucket's tokens independently of time. Because usage scenarios differ (for example, calling AllowN and not waiting for the bucket to refill), there should also be a function called Tokens that returns the current token count independently of time, so callers can decide, based on the current count, how many tokens to deduct.

```go
// AddTokens records that n tokens were consumed elsewhere by
// deducting n from the limiter's remaining tokens.
func (lim *Limiter) AddTokens(n float64) {
	lim.mu.Lock()
	defer lim.mu.Unlock()
	lim.tokens -= n
}

// Tokens returns the limiter's current token count, independent of time.
func (lim *Limiter) Tokens() float64 {
	lim.mu.Lock()
	defer lim.mu.Unlock()
	return lim.tokens
}

// AddToken deducts a single token.
func (lim *Limiter) AddToken() {
	lim.AddTokens(1)
}
```

If this makes sense, I can send a change for it. I've tested it locally and it achieves my goal!
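To illustrate the intended semantics, here is a minimal, self-contained sketch. toyLimiter is hypothetical (it is not the real rate.Limiter); it only demonstrates how one limiter could mirror consumption that happened on a peer by calling the proposed AddTokens:

```go
package main

import (
	"fmt"
	"sync"
)

// toyLimiter is a stripped-down, illustrative token bucket.
// It stands in for the proposed rate.Limiter additions.
type toyLimiter struct {
	mu     sync.Mutex
	tokens float64
}

// AddTokens records that n tokens were consumed elsewhere
// by deducting n from this bucket (the proposed semantics).
func (l *toyLimiter) AddTokens(n float64) {
	l.mu.Lock()
	defer l.mu.Unlock()
	l.tokens -= n
}

// Tokens reports the tokens currently left in the bucket.
func (l *toyLimiter) Tokens() float64 {
	l.mu.Lock()
	defer l.mu.Unlock()
	return l.tokens
}

func main() {
	// Two servers, each with its own limiter and 10 tokens.
	a := &toyLimiter{tokens: 10}
	b := &toyLimiter{tokens: 10}

	// Server A consumes 3 tokens locally, then replicates
	// that consumption to server B's limiter.
	a.AddTokens(3)
	b.AddTokens(3)

	fmt.Printf("a=%.0f b=%.0f\n", a.Tokens(), b.Tokens())
}
```

Both limiters end up with the same remaining tokens, regardless of which server the requests actually hit.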

@clwluvw clwluvw added the pkgsite label Jun 2, 2022
@gopherbot gopherbot added this to the Unreleased milestone Jun 2, 2022
@seankhliao seankhliao changed the title x/time/rate: add tokens apart of time proposal: x/time/rate: add tokens apart of time Jun 2, 2022
@seankhliao seankhliao removed the pkgsite label Jun 2, 2022
@seankhliao seankhliao modified the milestones: Unreleased, Proposal Jun 2, 2022
seankhliao (Member) commented:

cc @Sajmani

Sajmani (Contributor) commented Jun 3, 2022

I'm sorry, but I don't understand this proposal. There is an accepted proposal to expose Tokens, but not one to change the number of tokens directly:
#50035 (comment)

clwluvw (Author) commented Jun 3, 2022

@Sajmani As I described above, I want to add tokens to the bucket (limiter) independently of time. For example, if two threads use different limiters, each can update the other by adding tokens to the other's bucket (limiter).
Do you have a better idea for adjusting the limiter's tokens independently of time? AllowN and the other functions are time-based, so they don't help when another thread wants to add tokens.

A good example: assume you have two HTTP servers and you want to rate-limit requests. Both are active and exposed to the user, so they should share the same view of the current req/s; otherwise the user can hit the API at double the intended rate.
With this proposal, we can add tokens to the buckets from other threads/processes, which helps solve this problem.


Sajmani commented Jun 3, 2022

Is the goal to enable balancing of rates across multiple instances of rate.Limiter (possibly in multiple processes)?


clwluvw commented Jun 3, 2022

@Sajmani Actually, the goal is to replicate the rates across multiple rate.Limiter instances by injecting synthetic token consumption, so a limiter can account for tokens consumed elsewhere without calling AllowN/ReserveN, which only update the tokens based on elapsed time.


Sajmani commented Jun 3, 2022

Are you replicating the rates or the rate limits? SetLimit and SetBurst allow you to sync the rate limits across multiple rate limiters.


clwluvw commented Jun 3, 2022

@Sajmani I'm not trying to sync the limits and bursts across the rate.Limiter instances. I want to replicate/aggregate the current tokens (rate) between the rate.Limiters, so that when a token is consumed from one rate.Limiter it is deducted from the others as well.
That way, no matter which rate.Limiter the user hits, all of them see the same consumed tokens (rate).


Sajmani commented Jun 3, 2022

Ah, got it! So you have per-user rate limiters on multiple servers, and you want to deduct tokens on all servers whenever the user consumes tokens on any of the servers.
In theory if the user's requests were distributed evenly across the servers, then you could just set the per-user rate at each server to limit/(number of servers) and not do any synchronization.
But if you're using a load-balancing policy that's not evenly distributed (or intentionally keeps users on a single server for improved locality, cache use, DB client reuse, etc), then they will only consume tokens at the servers they're using.
If I'm understanding this correctly, the problem mainly occurs when the user's requests are distributed among a subset of K servers, is that right?
If so, wouldn't it be more efficient to update the rate at each of the servers that the users' requests are hitting to limit/K? This should involve a lot less synchronization than updating the tokens at each server, and you can use the existing SetLimit functionality.
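The split Sajmani describes can be sketched with a small helper. perServerLimit is an illustrative name (not part of x/time/rate); each server would feed its result into the real SetLimit method, e.g. lim.SetLimit(rate.Limit(perServerLimit(total, k))):

```go
package main

import "fmt"

// perServerLimit divides a global per-user rate limit evenly
// across the K servers currently receiving that user's traffic.
// The result is what each server would pass to Limiter.SetLimit.
func perServerLimit(total float64, k int) float64 {
	if k < 1 {
		k = 1 // guard against a bogus server count
	}
	return total / float64(k)
}

func main() {
	// 10k req/s global limit spread over 3 load balancers,
	// matching the example later in this thread.
	fmt.Printf("per-server limit: %.1f req/s\n", perServerLimit(10000, 3))
}
```

Whenever the set of servers handling the user changes, each server recomputes its share and calls SetLimit again; no per-token synchronization is needed.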


clwluvw commented Jun 3, 2022

> the problem mainly occurs when the user's requests are distributed among a subset of K servers, is that right?

That is correct.

> If so, wouldn't it be more efficient to update the rate at each of the servers that the users' requests are hitting to limit/K?

@Sajmani I didn't quite follow what you mean. If you mean pinning a user to a server: for scaling and HA I'm exposing 3 (n) load balancers, so a single client (source IP) doing, say, 10k req/s is distributed (via some DNS tricks; let's skip that part =) ) across all of my load balancers (~3.3k req/s each). All of my load balancers should be aware that the client (source IP) is doing 10k req/s in total, not 3.3k req/s each (or, in some scenarios, 1k on one server, 3k on another, and 6k on the third).
How is that possible with SetLimit? In my implementation I'm setting the limit to 10k, so how does setting a new limit help here? Could you provide an example, please?


clwluvw commented Jun 8, 2022

ping @Sajmani =)

@ianlancetaylor ianlancetaylor added this to Incoming in Proposals (old) Jun 8, 2022

blevz commented Aug 9, 2022

Any updates here? We have some use cases for getting the current number of tokens, for observability reasons.


Sajmani commented Aug 12, 2022

@blevz I think that's covered in #50035


Sajmani commented Aug 13, 2022

@clwluvw Unfortunately, I really don't understand your use case. The approved proposal #50035 will add the Tokens accessor, but AddTokens does not seem like an appropriate addition. If you need it, please consider forking this package and modifying it yourself.
