
fetch feature request #94

Closed
Lukom opened this issue Dec 20, 2016 · 8 comments

Comments


Lukom commented Dec 20, 2016

It would be very helpful to have a function like this:

def fetch(cache_name, key, fun) do
  case Cachex.get(cache_name, key) do
    {:ok, value} ->
      # cache hit: return the stored value
      value
    {:missing, _} ->
      # cache miss: compute the value, store it, then return it
      res = fun.()
      Cachex.set(cache_name, key, res)
      res
  end
end

And use it like this:

Cachex.fetch(:my_cache, "key", fn -> "..." end)

whitfin commented Dec 20, 2016

Hi @Lukom!

I might be misunderstanding, but I think you're asking for fallbacks.

Can you please try this out and see if it does what you need?

Cachex.get(cache_name, key, fallback: fn _ -> fun.() end)

Please let me know if this works for you!
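For reference, here is how that might read with a concrete key, assuming the v2.x behaviour where the fallback receives the missing key as its only argument and its return value is written to the cache (MyApi.load/1 is a hypothetical loader, not part of Cachex):

# the fallback only runs on a cache miss; whatever it returns is
# stored under "user:1" and handed back to the caller
Cachex.get(:my_cache, "user:1", fallback: fn key ->
  MyApi.load(key)
end)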


Lukom commented Dec 20, 2016

Just tried – it works as I need, thanks!

But I think there are still some issues with this:

  1. Naming. When I see a function named get, I don't expect it to set a value.

  2. I think fetch has a different meaning than fallback. E.g. it would be possible to do fetch with a fallback like this:

    Cachex.start_link(:redis_memory_layer, [ default_ttl: :timer.minutes(5), fallback: &RedisClient.get/1 ])
    Cachex.fetch(:redis_memory_layer, "my_key", fn -> "..." end)


whitfin commented Dec 20, 2016

I disagree.

  1. From a caching perspective, think of it as multiple layers of caches. You get a value, and if it's not in the top layer, it's fetched from the next layer down (your fallback).

  2. I'm not sure why this example is any different from just doing this:

Cachex.set(:redis_memory_layer, "my_key", "...")

Can you please clarify on the difference there?

@ananthakumaran

@zackehh I have a couple of questions regarding the fallback param. My use case is to cache the response returned by a service.

  1. How are errors handled by the fallback? I would not like to cache a 503/etc. response from the service. My guess is that whatever is returned by the fallback function is stored in the cache.

  2. How are in-flight requests handled? If I make two successive get calls for the same key, will the second one wait, or will the fallback be fired twice?


whitfin commented Dec 23, 2016

Hi @ananthakumaran!

  1. Error handling should be done in your fallback logic. You are correct that anything returned is stored in the cache; however, v2.x introduced a change (which I appear to have forgotten to document) that allows you to return { :ignore, value }, which will return value from your cache call but will not persist it inside the cache. You can use this to safely exit error states (there's a small sketch after this list).

  2. Both will execute the fallback, and the last one in wins. I considered working on this but put it on hold, as this seems the most logical approach (no matter how lame it might actually be). The following is my reasoning for not changing this (unless someone can think of an elegant way):

  • Unless your fallback is slow (e.g. a remote call) and it's in a hot code path, it's unlikely to ever see two at once.
  • It's fairly easy to work around:
    • You can have a seeding process which is solely responsible for writing that cache entry on a schedule.
    • Alternatively, if you drop your cache call into a transaction, it will also ensure that only one is occurring at once.
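
To illustrate point 1 above, here's a rough sketch of a fallback that avoids caching error responses by using the { :ignore, value } return (ServiceClient.fetch/1 stands in for a hypothetical remote call, not part of Cachex):

Cachex.get(:my_cache, "key", fallback: fn key ->
  case ServiceClient.fetch(key) do
    # successful responses are persisted in the cache and returned
    {:ok, body} -> body
    # error responses (e.g. a 503) are returned to the caller but never cached
    {:error, reason} -> {:ignore, {:error, reason}}
  end
end)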

Do these answers clarify your questions?

I'm interested in your thoughts on #2; do you think my conclusions are reasonable? The only solution I have at the moment would be to queue the fallback calls - but it would still be last one in wins; I don't believe there's an efficient way to "cancel" the remaining calls once the key has been set by the first fallback.

@ananthakumaran

{:ignore, x} is exactly what I want

> Unless your fallback is slow (e.g. a remote call) and it's in a hot code path, it's unlikely to ever see two at once.

I have exactly that use case. Also, I think it's reasonable to assume the fallback will make long-running remote calls.

> It's fairly easy to work around:
> • You can have a seeding process which is solely responsible for writing that cache entry on a schedule.
> • Alternatively if you drop your cache call into a transaction, it will also ensure that only one is occurring at once.

Prefetching might not be possible in my case. Cachex.transaction! might solve my use case (BTW, the example in the README should use the reference state instead of the worker inside the block). I should have read the README properly. Thanks.
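
For completeness, here's a rough sketch of the transaction-based workaround discussed above, assuming the v2.x style where the transaction block receives a cache reference that is passed to the nested calls (ServiceClient.fetch/1 is again a hypothetical remote call):

Cachex.transaction(:my_cache, fn(cache) ->
  case Cachex.get(cache, "key") do
    {:ok, value} ->
      value
    {:missing, _} ->
      # the transaction serialises access, so concurrent misses
      # won't all hit the remote service
      value = ServiceClient.fetch("key")
      Cachex.set(cache, "key", value)
      value
  end
end)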


whitfin commented Dec 23, 2016

Aloha @ananthakumaran

Glad to hear it. I have filed #95 to better document the { :ignore, x } syntax in the README. You can currently find some notes on it at https://hexdocs.pm/cachex/Cachex.html#start_link/3 under the :fallback option. I'll also address the transaction docs issue you mentioned in that ticket.

I have also filed #96 to take another look at how fallbacks work to see if it can be made a little less stressful for backend systems. I threw in #97 as a bonus which should also reduce pressure in fallbacks.

Would you mind taking a quick look over both of the latter tickets and dropping your thoughts in them?


whitfin commented Dec 29, 2016

I'm going to close this issue as I feel everything is covered by a) existing features or b) those in the comment above. If anyone has any concerns, please open a new issue specifically for your concern (as this thread got a little off topic).

whitfin closed this as completed Dec 29, 2016