Refactor ActionController::RateLimiting to use AS::Cache
#50781
Conversation
Make sure they all increment the counter but don't update the TTL.
Given that the limiter implementation provided by Kredis is a simple increment with a limit, all `ActiveSupport::Cache` stores already provide that same capability, with a wide range of backing stores, not just Redis. This even allows using SolidCache as a backend if you so desire. If we feel particularly fancy, we could also accept a more generic limiter interface to let users swap the implementation for better algorithms such as leaky bucket, etc.
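To make the "simple increment with a limit" idea concrete, here is a minimal illustrative sketch in plain Ruby. It is not the actual Rails or Kredis implementation; the class and key format are hypothetical, and it only mirrors the shape of `ActiveSupport::Cache#increment` with an `expires_in:` option. It also demonstrates the point raised above: the TTL is set once when the counter is created, and subsequent increments do not extend the window.

```ruby
# Illustrative sketch only (not the Rails implementation): a tiny counter
# store whose increment sets the TTL on first write and never refreshes it.
class TinyCounterStore
  Entry = Struct.new(:value, :expires_at)

  def initialize
    @data = {}
  end

  # Mirrors the shape of ActiveSupport::Cache#increment(name, amount, options).
  def increment(key, amount = 1, expires_in: nil)
    entry = @data[key]
    if entry.nil? || (entry.expires_at && Time.now >= entry.expires_at)
      # First hit (or expired window): start a fresh counter, set the TTL once.
      expires_at = expires_in ? Time.now + expires_in : nil
      @data[key] = Entry.new(amount, expires_at)
      amount
    else
      # Subsequent hits: bump the counter WITHOUT touching the TTL.
      entry.value += amount
    end
  end
end

store = TinyCounterStore.new
3.times { store.increment("rate-limit:posts:1.2.3.4", 1, expires_in: 2) }
count = store.increment("rate-limit:posts:1.2.3.4", 1, expires_in: 2)
puts count # => 4, all hits landed within the same 2-second window
```

A rate limiter then only needs to compare the returned count against the configured limit; any cache backend that supports atomic increments with expiry can back it.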
Force-pushed 2a3701b to d839ddb
Beautiful! 👌
Awesome! ❤️
# Rate limiting relies on a backing <tt>ActiveSupport::Cache</tt> store and defaults to <tt>config.action_controller.cache_store</tt>, which
# itself default to the global `config.cache_store`. If you don't want to store rate limits in the same datastore than your general caches
# you can pass a custom store in the <tt>store</tt> parameter.
Suggested change:

# Rate limiting relies on a backing ActiveSupport::Cache store and defaults to <tt>config.action_controller.cache_store</tt>, which
# itself defaults to the global <tt>config.cache_store</tt>. If you don't want to store rate limits in the same datastore as your general caches,
# you can pass a custom store in the <tt>store</tt> parameter.
class RateLimitedController < ActionController::Base
  self.cache_store = ActiveSupport::Cache::MemoryStore.new
  rate_limit to: 2, within: 2.seconds, by: -> { Thread.current[:redis_test_seggregation] }, only: :limited_to_two
Since we're no longer using a shared store, I think we can drop `:by` here and on line 13 (and the assignment in `setup`).
I think it's fine to test the `by:` behavior.
As far as I can see, it's not actually testing the `:by` behavior. These args were added in 0868698 simply to prevent flaky CI. Also, by specifying `:by` here, we do not exercise the default value of `:by`.
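For readers following along, a tiny plain-Ruby sketch of what a `by:` discriminator does (the key format and lambda are hypothetical, not Rails code): it partitions the rate-limit counters into separate buckets per client, so overriding it in a test means the default bucketing is never exercised.

```ruby
# Illustrative only: a by-style discriminator splits counters into
# per-client buckets. Key format and lambda shape are hypothetical.
counters = Hash.new(0)
by = ->(request_ip) { request_ip } # e.g. one bucket per client IP

%w[1.1.1.1 1.1.1.1 2.2.2.2].each do |ip|
  counters["rate-limit:#{by.call(ip)}"] += 1
end

p counters # each client accumulates its own count
```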
file_name = normalize_key(name, options)
options = merged_options(options)
key = normalize_key(name, options)
Can reuse `file_name` here. Or perhaps better: assign `file_name = key` afterward.
Hum, no, because we use both: `lock_file(file_name)` and `read_entry(key, ...)`.
By "reuse" I meant set `key = file_name`, because they are both `normalize_key(name, options)` (and `normalize_key` will handle `merge_options` internally).

But I actually think it would be nicer to set `file_name = key` afterward:
options = merged_options(options)
key = normalize_key(name, options)
version = normalize_version(name, options)
file_name = key
amount = Integer(amount)
🤦 OK, I think I'm tired tonight. I'll set a reminder to clean this up tomorrow because clearly I'm done for today.
Followup to rails#50781: some small issues I failed to address in the original PR.
Problem with this in testing is that we default to
Right. Didn't think of that because we tend to have a properly configured cache in test. In your specific case, since you wanted Redis anyway, you can just explicitly pass it a Redis store. But your point is valid for the broader community.

My first answer would be that we could detect when it's

I'll think about it and try to come up with something.
Yeah, I'm almost wondering whether we should swap that default cache backend for something like MemoryStore and then just reset it instead? As the default?
I think it's a better default, yes. I always set up a cache in test because I also always have code depending on it in some way. But it does indeed require a call to

So this might open some kind of can of worms, but would probably be an improvement.
I think we should default to MemoryStore, but make it a dedicated test instance, and then also ensure #clear is called by default. |
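A hedged sketch of what that could look like, using standard Rails configuration conventions (the exact wiring Rails would ship is not specified here, and the test-helper hook placement is an assumption):

```ruby
# config/environments/test.rb — sketch only: give tests their own
# in-memory cache instance instead of the null store.
config.cache_store = ActiveSupport::Cache::MemoryStore.new

# test/test_helper.rb — sketch only: reset it between tests so
# rate-limit counters (and other cached state) don't leak across cases.
class ActiveSupport::TestCase
  setup { Rails.cache.clear }
end
```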
FYI @dhh @rafaelfranca