[question][guidance] bulk increment with ttl #315
@xward hi! Sorry for the slow response, busy month. I took a look at your code and there's not much you can do about the per-key calls, but you can wrap the whole loop in a single `Cachex.execute/3` transaction-style block:

```elixir
Cachex.execute(@store, fn cache ->
  Enum.reduce(checkers, %{}, fn {id, key, period, allowed, decision_if_above}, acc ->
    {:ok, n} = Cachex.incr(cache, key)
    # first write for this key? set the TTL to start the window
    if n == 1, do: Cachex.expire(cache, key, period)
    Map.put(acc, (n > allowed && decision_if_above) || :pass, id)
  end)
end)
```

This will skip an ETS table lookup on each call to your cache, which is actually most of the overhead involved here. If you try this out, you should see a fair amount of speedup - it might be enough for your use case. Let me know! Happy to discuss further if you need more improvements; maybe we can figure out something in the API that might help.
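For readers unfamiliar with the pattern, the core idea above (increment a per-key counter, attach a TTL only on the first write, then compare against a threshold) can be sketched language-neutrally in Python. `FakeCache` is a hypothetical in-memory stand-in for the cache, not the Cachex API:

```python
import time

class FakeCache:
    """Hypothetical in-memory stand-in for a TTL cache (illustration only)."""
    def __init__(self):
        self.data = {}    # key -> counter
        self.expiry = {}  # key -> absolute expiry timestamp

    def _evict_if_expired(self, key):
        if key in self.expiry and time.monotonic() >= self.expiry[key]:
            self.data.pop(key, None)
            self.expiry.pop(key, None)

    def incr(self, key):
        self._evict_if_expired(key)
        self.data[key] = self.data.get(key, 0) + 1
        return self.data[key]

    def expire(self, key, period_s):
        self.expiry[key] = time.monotonic() + period_s

def check(cache, checkers):
    """Mirror of the Enum.reduce above: returns {decision: checker_id}."""
    acc = {}
    for id_, key, period, allowed, decision_if_above in checkers:
        n = cache.incr(key)
        if n == 1:  # first hit in this window? start the TTL
            cache.expire(key, period)
        acc[decision_if_above if n > allowed else "pass"] = id_
    return acc

cache = FakeCache()
checkers = [("c1", "ip:1.2.3.4", 60, 2, "block")]
print(check(cache, checkers))  # counter is 1, under the limit of 2
print(check(cache, checkers))  # counter is 2, still not above the limit
print(check(cache, checkers))  # counter is 3 > 2, the decision applies
```

The TTL is only set when the counter is fresh, so the window starts at the first hit and the counter disappears once the window ends.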
It indeed improved performance, thanks! ~25% with 500 checkers (cf. benchmarks). I guess there is no way around calling incr/2 for each checker I have.
@xward there's a chance that something like this might also be a little faster. I haven't been able to test it as I'm not on my work machine, but hopefully you get the idea. Let me know if it helps! (Note the result of `get_and_update` has to be bound so `n` is available for the decision.)

```elixir
Cachex.execute(@store, fn cache ->
  Enum.reduce(checkers, %{}, fn {id, key, period, allowed, decision_if_above}, acc ->
    # combine the increment and the TTL handling into one cache call
    {:commit, n} =
      Cachex.get_and_update(cache, key, fn
        nil -> {:commit, 1, ttl: period}
        val -> {:commit, val + 1}
      end)

    Map.put(acc, (n > allowed && decision_if_above) || :pass, id)
  end)
end)
```

The idea is that it combines everything into a single cache call rather than multiple, although you still need one call per key.
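The consolidation can be sketched generically too: a single get_and_update-style read-modify-write that initializes with a TTL on a miss and increments on a hit, so only one cache operation runs per key. `FakeStore` is a hypothetical illustration, not the Cachex API:

```python
import time

class FakeStore:
    """Hypothetical stand-in showing one read-modify-write per key."""
    def __init__(self):
        self.data = {}  # key -> (value, absolute expiry or None)

    def get_and_update(self, key, updater):
        # one combined lookup + write, like the get_and_update call above
        now = time.monotonic()
        entry = self.data.get(key)
        if entry is not None and entry[1] is not None and now >= entry[1]:
            entry = None  # expired: treat as a miss
        current = entry[0] if entry else None
        new_value, ttl_s = updater(current)
        # a fresh TTL replaces the expiry; otherwise keep the existing one
        expiry = now + ttl_s if ttl_s is not None else (entry[1] if entry else None)
        self.data[key] = (new_value, expiry)
        return new_value

def bump(period_s):
    """Updater mirroring the two clauses of the Elixir fn above."""
    def updater(val):
        if val is None:
            return 1, period_s  # miss: initialize and set the TTL
        return val + 1, None    # hit: increment, keep the existing TTL
    return updater

store = FakeStore()
print(store.get_and_update("k", bump(60)))  # 1
print(store.get_and_update("k", bump(60)))  # 2
```

Whether this beats an incr + conditional expire depends on how the real store implements the read-modify-write, so it's worth benchmarking both.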
Hello!
I want to know if there is a way to perform a bulk increment. Currently my code does something like this:
If I have 500 of them it hurts a little, and if I switch to Redis to share data across nodes, I imagine performance would suffer too much from the many requests it would make. I would love to do this in one call.
I'm not even sure my statement above is true; maybe it would perform well with Redis (i.e. not making 1000 queries to increment 500 keys with a TTL).
The real code sample is here
Best regards,