Fix no slack option for int64 based option #95
Conversation
Codecov Report
@@            Coverage Diff            @@
##              main       #95   +/-   ##
=========================================
  Coverage   100.00%   100.00%
=========================================
  Files            4         4
  Lines           97        95    -2
=========================================
- Hits            97        95    -2
Continue to review full report at Codecov.
I'm really sorry about delaying this.
This is pretty high on my priority list; I'll really try to get to this soon.
(I will need to understand the testing/time changes too.)
limiter_atomic_int64.go
Outdated
@@ -66,10 +66,10 @@ func (t *atomicInt64Limiter) Take() time.Time {
    timeOfNextPermissionIssue := atomic.LoadInt64(&t.state)

    switch {
    case timeOfNextPermissionIssue == 0:
        // If this is our first request, then we allow it.
    case t.maxSlack == 0 && now-timeOfNextPermissionIssue > int64(t.perRequest):
I still need to grok the code, but briefly looking at this, I wonder if we should do:
if t.maxSlack == 0 {
    t.maxSlack = t.perRequest
}
during the initialization to get rid of one of the cases.
Unfortunately, no, because this will allow for one additional permission to accumulate.
In general, I think we can simplify this and the other implementations if we change how we initialize the rate limiter state, but that is a subject for a separate PR, I think.
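To make the "one additional permission" concern concrete, here is a small sketch with assumed numbers. It is not the library code: `nextIssue` is a hypothetical helper that only models the pre-fix schedule arithmetic. With "no slack" emulated as `maxSlack = perRequest`, an idle limiter hands out two immediate permissions; resetting the schedule to `now` hands out only one.

```go
package main

import (
	"fmt"
	"time"
)

// nextIssue is a hypothetical helper modeling only the pre-fix schedule
// arithmetic of the int64 limiter: given the stored time of the next
// permission issue (prev), the current time, the per-request interval and
// the slack, it returns the new next-issue time. All values are nanoseconds.
func nextIssue(prev, now, perRequest, maxSlack int64) int64 {
	switch {
	case prev == 0:
		return now // first request is always allowed
	case now-prev > maxSlack:
		return now - maxSlack // cap the accumulated slack
	default:
		return prev + perRequest
	}
}

func main() {
	per := int64(100 * time.Millisecond)

	// The limiter has been idle: the stored issue time is far in the past.
	prev := int64(100 * time.Millisecond)
	now := int64(1000 * time.Millisecond)

	// Variant A (the suggestion): emulate "no slack" with maxSlack = perRequest.
	a1 := nextIssue(prev, now, per, per) // now - 100ms: this Take is free
	a2 := nextIssue(a1, now, per, per)   // now: the next Take is also free

	// Variant B (the extra case in this PR): reset the schedule to now instead.
	b1 := now                        // applied by hand; nextIssue does not model this case
	b2 := nextIssue(b1, now, per, 0) // now + 100ms: the next Take has to wait

	fmt.Println("A: second Take waits", time.Duration(a2-now)) // 0s -> one extra permission
	fmt.Println("B: second Take waits", time.Duration(b2-now)) // 100ms
}
```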
Hm. I will need to really think about this more.
In your current proposal though, are we not changing the behavior for the very first request?
I might add tests for the first request only, to formalize the behavior.
> In your current proposal though, are we not changing the behavior for the very first request?
Yup, I just mentioned a potential future simplification change that is not in the scope of this fix.
I'm less worried about implementation simplification than about any unspecified behavior - I'd like to make sure that all implementations behave identically.
Tried to address the initial few "Takes" in #97 - WDYT?
I might add one or two more tests before looking at the "time" changes.
> I'm less worried about implementation simplification than about any unspecified behavior - I'd like to make sure that all implementations behave identically.

This fix behaves functionally identically to the previous behavior. It just applies the same behavior to the case when no one has used the rate limiter for a long time, so it has accumulated many permissions. But this limiter has no slack, so we should not allow permissions to accumulate; we need to move `newTimeOfNextPermissionIssue` to `now`, as if the rate limiter was created right now.
Force-pushed from e4cf9ea to 22f95cb.
* Add a test verifying the initial startup sequence. See #95 (comment). From that discussion I wasn't sure what the proposed initial startup sequence of the limiter was - i.e. whether at startup we always block, or always allow. Since we didn't seem to have that codified (perhaps apart from `example_test.go`), this PR adds a test to verify it. The test is still slightly (2/1000) flaky, but I think that's good enough to add - it should be valuable anyway.
* channels are great
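For illustration only, a rough sketch of what such a startup test could look like. This is not the actual test added in #97; the rate and the timing thresholds are assumptions, and a wall-clock check like this is inherently a little flaky.

```go
package ratelimit_test

import (
	"testing"
	"time"

	"go.uber.org/ratelimit"
)

// Sketch only: with no slack, the first Take should return immediately and
// the second should wait roughly one interval (100ms at 10 per second).
func TestStartupSequenceSketch(t *testing.T) {
	rl := ratelimit.New(10, ratelimit.WithoutSlack)

	durations := make(chan time.Duration, 2)
	start := time.Now()
	go func() {
		rl.Take()
		durations <- time.Since(start)
		rl.Take()
		durations <- time.Since(start)
	}()

	first, second := <-durations, <-durations
	if first > 50*time.Millisecond {
		t.Errorf("expected the first Take to be immediate, took %v", first)
	}
	if second < 50*time.Millisecond {
		t.Errorf("expected the second Take to wait ~100ms, took %v", second)
	}
}
```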
Force-pushed from dc66526 to b658cc5.
@rabbbit
Force-pushed from b658cc5 to ef02693.
@rabbbit
        newTimeOfNextPermissionIssue = now
    case now-timeOfNextPermissionIssue > int64(t.maxSlack):
    case t.maxSlack > 0 && now-timeOfNextPermissionIssue > int64(t.maxSlack):
Suggested change:
    case t.maxSlack > 0 && now-timeOfNextPermissionIssue > int64(t.maxSlack):
    case now-timeOfNextPermissionIssue > int64(t.maxSlack):
I think the maxSlack check here is unnecessary - the previous if should have covered that.
Wait, no, this is wrong - `> int64(t.maxSlack)` vs `> int64(t.perRequest)`.
I clearly need to understand this code more.
I think there can be a case where `t.maxSlack == 0` is true but `now-timeOfNextPermissionIssue > int64(t.perRequest)` is false; in that case we would go on to evaluate `now-timeOfNextPermissionIssue > int64(t.maxSlack)`, and it can be true, so we would end up in the wrong branch. So we need the `t.maxSlack > 0` check here to step into the `default` branch.
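A quick worked example with assumed numbers of why the guard matters: with `maxSlack == 0`, `perRequest = 100ms`, and only 50ms elapsed since the last permission, the no-slack reset case does not fire, and without the `t.maxSlack > 0` guard the slack case would (50ms > 0), issuing the permission immediately instead of falling through to `default` and waiting 50ms.

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	perRequest := int64(100 * time.Millisecond) // 10 permissions per second
	maxSlack := int64(0)                        // limiter configured without slack
	elapsed := int64(50 * time.Millisecond)     // now - timeOfNextPermissionIssue

	// The no-slack reset case from this PR: only fires after idling for
	// longer than one interval. Not the case here.
	resetToNow := maxSlack == 0 && elapsed > perRequest // false

	// The slack case without the guard: 50ms > 0 is true, so the limiter
	// would wrongly issue the permission immediately.
	unguarded := elapsed > maxSlack // true -> wrong branch

	// With the guard the slack case is skipped and we reach default, which
	// schedules timeOfNextPermissionIssue + perRequest (a 50ms wait).
	guarded := maxSlack > 0 && elapsed > maxSlack // false -> default branch

	fmt.Println(resetToNow, unguarded, guarded) // false true false
}
```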
Since this is not actually the "main" limiter yet, I'll merge this in and potentially post-review later on.
This PR fixes the issue with the int64-based implementation found by @twelsh-aw in #90.
Our tests did not detect this issue, so there is a separate PR, #93, that enhances our testing approach to detect potential errors better.
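For context, a minimal usage sketch of the behavior this fix is about (the rate and sleep durations are assumptions): with the fix, a no-slack limiter keeps permissions roughly one interval apart even after an idle period, instead of letting them accumulate.

```go
package main

import (
	"fmt"
	"time"

	"go.uber.org/ratelimit"
)

func main() {
	// 10 permissions per second, no slack: permissions should stay ~100ms
	// apart even after the limiter has been idle for a while.
	rl := ratelimit.New(10, ratelimit.WithoutSlack)

	rl.Take()                          // first permission is immediate
	time.Sleep(500 * time.Millisecond) // idle period: no permissions may accumulate

	prev := rl.Take() // schedule restarts "now" instead of bursting
	next := rl.Take() // still has to wait one interval

	fmt.Println(next.Sub(prev)) // expected: roughly 100ms
}
```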