rate limit: add local rate limit network filter #9354
Conversation
Allows local rate limiting via a token bucket.

Signed-off-by: Matt Klein <mklein@lyft.com>
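For context, a configuration for this filter might look roughly like the following. This is an illustrative sketch only: the filter name, type URL, and field values are assumptions, and only stat_prefix, token_bucket, and fill_interval are confirmed by the diff discussed in this PR.

```yaml
filter_chains:
- filters:
  - name: envoy.filters.network.local_ratelimit   # assumed registered filter name
    typed_config:
      "@type": type.googleapis.com/envoy.config.filter.network.local_rate_limit.v2alpha.LocalRateLimit
      stat_prefix: tcp_rate_limiter
      token_bucket:
        max_tokens: 10        # bucket capacity
        fill_interval: 1s     # how often the bucket is refilled
```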
[Resolved review threads: test/extensions/filters/network/local_ratelimit/local_ratelimit_integration_test.cc; source/extensions/filters/network/local_ratelimit/local_ratelimit.cc (×2, outdated)]
You wouldn't underflow, because the only place you'd decrement is in the refill handler. Otherwise the counter only increases, so you can safely subtract whatever you observed in the refill handler.

This is all moot if you're going with the CAS loop. I like that approach better, and it's impossible to continuously spin since the token count stops changing once it hits 0.
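The no-spin property described above can be illustrated with a minimal sketch. This is a hypothetical standalone class for illustration, not the PR's actual code: once the count hits 0, the consume path bails out before attempting any CAS, so nothing loops while the bucket is empty.

```cpp
#include <atomic>
#include <cstdint>

// Illustrative token bucket consume path using a CAS loop. The loop only
// retries when a concurrent consume or refill races us; once tokens_ reaches
// 0 the first check returns false before any CAS, so there is no spinning
// until the next refill changes the count.
class TokenBucket {
public:
  explicit TokenBucket(uint32_t max_tokens) : tokens_(max_tokens) {}

  // Try to take one token. Returns false (rate limited) when the bucket is empty.
  bool tryConsume() {
    uint32_t expected = tokens_.load(std::memory_order_relaxed);
    do {
      if (expected == 0) {
        return false; // Empty bucket: bail out, no CAS attempted, no spin.
      }
      // compare_exchange_weak reloads `expected` on failure, so each retry
      // re-checks the freshly observed count.
    } while (!tokens_.compare_exchange_weak(expected, expected - 1,
                                            std::memory_order_relaxed));
    return true;
  }

  uint32_t tokens() const { return tokens_.load(std::memory_order_relaxed); }

private:
  std::atomic<uint32_t> tokens_;
};
```

With a bucket of two tokens, the first two calls succeed and the third is rejected without looping.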
On Sun, Dec 15, 2019, 8:55 AM, Matt Klein commented on source/extensions/filters/network/local_ratelimit/local_ratelimit.cc in #9354:
> + fill_interval_(PROTOBUF_GET_MS_REQUIRED(proto_config.token_bucket(), fill_interval)),
+ enabled_(proto_config.enabled(), runtime),
+ stats_(generateStats(proto_config.stat_prefix(), scope)), tokens_(max_tokens_) {
+ fill_timer_->enableTimer(fill_interval_);
+}
+
+LocalRateLimitStats Config::generateStats(const std::string& prefix, Stats::Scope& scope) {
+ const std::string final_prefix = "local_rate_limit." + prefix;
+ return {ALL_LOCAL_RATE_LIMIT_STATS(POOL_COUNTER_PREFIX(scope, final_prefix))};
+}
+
+void Config::onFillTimer() {
+ uint32_t new_tokens_value;
+ {
+ // TODO(mattklein123): Consider replacing with an atomic CAS loop.
+ absl::MutexLock lock(&mutex_);
Interesting idea. Still, you would need a CAS loop in the fill timer callback to avoid underflow. I also think that we should be exact on the rate limit.

TBH the CAS implementation is simple; I can just do it now. I didn't because I thought people would complain about premature optimization.
Yes, I suppose you could refill up to what you observed having been used, though it would be very easy to miss a fill window that way. In any case, I updated to a full CAS implementation, which does not have this issue.
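The "full CAS implementation" itself is not shown in this excerpt. A hedged sketch of what a CAS-based fill-timer refill could look like (names and structure are assumptions, not the PR's actual code): rather than blindly adding tokens, the loop computes a capped new value from the freshly observed count, so repeated fills never push the count past the bucket size and the limit stays exact.

```cpp
#include <algorithm>
#include <atomic>
#include <cstdint>

// Illustrative CAS-based refill, as might run in a periodic fill timer.
// The new value is recomputed from the observed count on every retry and
// capped at max_tokens_, so concurrent consumers cannot cause the bucket
// to overshoot its capacity.
struct Bucket {
  Bucket(uint32_t tokens, uint32_t max_tokens, uint32_t tokens_per_fill)
      : tokens_(tokens), max_tokens_(max_tokens),
        tokens_per_fill_(tokens_per_fill) {}

  void onFillTimer() {
    uint32_t expected = tokens_.load(std::memory_order_relaxed);
    uint32_t new_value;
    do {
      // Widen to 64 bits before adding to rule out overflow, then cap.
      new_value = static_cast<uint32_t>(std::min<uint64_t>(
          static_cast<uint64_t>(expected) + tokens_per_fill_, max_tokens_));
    } while (!tokens_.compare_exchange_weak(expected, new_value,
                                            std::memory_order_relaxed));
  }

  std::atomic<uint32_t> tokens_;
  const uint32_t max_tokens_;
  const uint32_t tokens_per_fill_;
};
```

Starting empty with a capacity of 10 and 4 tokens per fill, three fills land at 4, 8, and then 10 (capped), never 12.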
Assigning over to @junr03 and @tonya11en for review. @alyssawilk, if you feel like taking a look as well that would be great, but it's not required if you don't have time. :)
Looks great overall!
[Resolved review threads (outdated): api/envoy/config/filter/network/local_rate_limit/v2alpha/local_rate_limit.proto; docs/root/configuration/listeners/network_filters/local_rate_limit_filter.rst (×2); test/extensions/filters/network/local_ratelimit/local_ratelimit_test.cc]
I am glad you and @tonya11en went with the CAS loop solution; it is pretty nice. Just a few comments and an ask in the testing code.
[Resolved review threads: docs/root/configuration/listeners/network_filters/local_rate_limit_filter.rst (×2, one outdated)]
[Resolved review thread (outdated): test/extensions/filters/network/local_ratelimit/local_ratelimit_test.cc]
All comments addressed other than the additional test comments, which I am working on.
@tonya11en @junr03 @alyssawilk updated to cover the remaining comments. PTAL.
[Resolved review thread (outdated): api/envoy/config/filter/network/local_rate_limit/v2alpha/local_rate_limit.proto]
Note also that I think I have a cleaner way of doing the thread synchronizer stuff, so I'm going to update that as well.
/lgtm api
Looks great; I left a few minor comments. Feel free to apply them in a follow-up if you agree.
/retest
🔨 rebuilding
Allows local rate limiting via a token bucket.

Risk Level: None, new feature
Testing: Integration/UT
Docs Changes: Added
Release Notes: Added

Signed-off-by: Matt Klein <mklein@lyft.com>
Signed-off-by: Prakhar <prakhar_au@yahoo.com>