
[question][guidance] bulk increment with ttl #315

Closed
xward opened this issue Oct 17, 2023 · 3 comments

Comments

@xward

xward commented Oct 17, 2023

Hello !

I want to know if there is a way to perform bulk increments. Currently my code does something like this:

Cachex.execute!(@store, fn cache ->
  {:ok, n} = Cachex.incr(cache, :a)
  # new? set a TTL
  if n == 1, do: Cachex.expire(cache, :a, 10_000)

  {:ok, n} = Cachex.incr(cache, :b)
  # new? set a TTL
  if n == 1, do: Cachex.expire(cache, :b, 15_000)

  {:ok, n} = Cachex.incr(cache, :c)
  # new? set a TTL
  if n == 1, do: Cachex.expire(cache, :c, 12_000)
end)

If I have 500 of them it hurts a little, and if I switch to Redis to share data across nodes, I imagine performance would suffer even more because of the many requests it would make. I would love to do this in one call.

I'm not even sure my statement above is true; maybe it would perform well with Redis (i.e. not issuing 1,000 queries to increment 500 keys with a TTL).
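For what it's worth, on the Redis side a client that supports pipelining can batch all the INCR/EXPIRE pairs into a single round trip. A minimal sketch, assuming the Redix client and illustrative key/TTL pairs (none of this is from the project itself):

```elixir
# Sketch only: assumes the Redix client; keys and TTLs are illustrative.
{:ok, conn} = Redix.start_link(host: "localhost", port: 6379)

pairs = [{"a", 10}, {"b", 15}, {"c", 12}]

# Build an INCR + EXPIRE command pair per key.
commands =
  Enum.flat_map(pairs, fn {key, ttl_seconds} ->
    [["INCR", key], ["EXPIRE", key, Integer.to_string(ttl_seconds)]]
  end)

# One network round trip for all commands, instead of one per command.
{:ok, _results} = Redix.pipeline(conn, commands)
```

Note this sketch resets the TTL on every increment, unlike the Cachex code above, which only sets it when the key is new; on Redis 7.0+ the `NX` option of EXPIRE ("set only when no expiry exists") would match that behavior.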

The real code sample is here

Best regards,

@whitfin
Owner

whitfin commented Nov 15, 2023

@xward hi! Sorry for the slow response, busy month.

I took a look at your code and there's not much you can do about the incr calls themselves; however, you can speed things up a little by reusing a static cache state:

Cachex.execute(@store, fn cache ->
  Enum.reduce(checkers, %{}, fn {id, key, period, allowed, decision_if_above}, acc ->
    {:ok, n} = Cachex.incr(cache, key)

    # new? set a TTL
    if n == 1, do: Cachex.expire(cache, key, period)

    Map.put(acc, (n > allowed && decision_if_above) || :pass, id)
  end)
end)

This skips an ETS table lookup on each call to your cache, which is actually most of the overhead involved here. If you try this out, you should see a fair amount of speed-up - it might be enough for your use case.

Let me know! Happy to discuss further if you need more improvements; maybe we can figure out something in the API that might help.

@xward
Author

xward commented Nov 20, 2023

It did indeed improve performance, thanks! ~25% with 500 checkers (cf. benchmarks)

I guess there is no way around calling incr/2 for each checker I have.

@xward xward closed this as completed Nov 20, 2023
@whitfin
Owner

whitfin commented Nov 20, 2023

@xward there's a chance that something like this might also be a little faster.

I haven't been able to test this since I'm not on my work machine - but hopefully you get the idea. Let me know if it helps!

Cachex.execute(@store, fn cache ->
  Enum.reduce(checkers, %{}, fn {id, key, period, allowed, decision_if_above}, acc ->
    {_status, n} =
      Cachex.get_and_update(cache, key, fn
        nil -> {:commit, 1, ttl: period}
        val -> {:commit, val + 1}
      end)

    Map.put(acc, (n > allowed && decision_if_above) || :pass, id)
  end)
end)

The idea is that this combines the increment and the TTL into a single cache call rather than two, although you still need one call per key.
