Improve cache logic for prefetch #51
I don't think it works that way. To me
Probably @Kirill89 and I did not understand correctly what happens. Could you explain these 2 lines https://github.com/medikoo/memoizee/blob/v0.3.9/ext/max-age.js#L49-L50 ? It looks like when prefetch starts, it immediately drops the cache data with
This is called after the value is accessed. To prefetch the value we call the memoized function again; however, this can't work properly without first deleting the cached value (if we didn't delete it, the memoized function would return the cached value). As I look at it now, it's indeed not ideal: when we ask for a result while the new value is being retrieved, then instead of returning the old value, the result is postponed until the refreshed value arrives. So it's as you say; sorry, at first I misunderstood and thought you were saying that in general values are dropped right after being memoized. The right fix for that would be to call not the memoized function but the underlying function, and replace the result after having it. Unfortunately this is not that straightforward with various extensions (e.g. resolvers) or different methods of obtaining results (e.g. async, promise). Still, it should be addressed.
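The "call the underlying function and replace the result afterwards" fix described above could be sketched roughly like this. This is a minimal, hypothetical illustration, not memoizee's actual internals or API: `memoizeWithPrefetch`, the injectable `now` clock, and the preFetch semantics (refresh once the entry enters the last `preFetch` fraction of its lifetime) are all assumptions made for the sketch.

```javascript
// Hypothetical sketch only -- NOT memoizee's implementation or API.
function memoizeWithPrefetch(fn, { maxAge, preFetch = 0.33, now = Date.now }) {
  const cache = new Map(); // key -> { value, expiresAt }

  return function (key) {
    const t = now();
    const entry = cache.get(key);

    if (entry && t < entry.expiresAt) {
      // Inside the prefetch window: recompute by calling the *underlying*
      // function (not the memoized wrapper), then replace the cached value.
      // The old value is never dropped before the new one exists.
      if (t >= entry.expiresAt - maxAge * preFetch) {
        const value = fn(key);
        cache.set(key, { value, expiresAt: now() + maxAge });
        return value;
      }
      return entry.value; // plain cache hit
    }

    // Miss or expired entry: compute and cache.
    const value = fn(key);
    cache.set(key, { value, expiresAt: t + maxAge });
    return value;
  };
}
```

With a synchronous `fn` the refreshed value can simply be returned; the point of the fix matters most in the async case, where the old value keeps being served while the replacement computation is in flight.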
Hey Medikoo, I encountered the need for the enhancement stated here. My use-case: |
@ilaif it's unfortunately not easy to address with the current internal design, which doesn't really allow refreshing the value without invalidating the cache upfront, while at the same time keeping that operation unproblematic and transparent to any eventual extensions (the internals try to remain agnostic to them). I plan to refactor the internals so handling the cache that way is possible and straightforward (this should happen in Q1 of next year). Otherwise, trying to implement some workaround on the existing implementation, while probably possible, is for sure challenging and may raise a few headaches. Anyway, I'm open to PRs :)
Summary:
The current prefetch implementation drops the cached value immediately after the prefetch starts. That's not correct behaviour for big loads. The cache should be dropped only if it has timed out, and should not be affected by internal failures.
Potential problem:
If a prefetch fails, it will be called again on the next memoizee call (no throttling). That can be considered acceptable behaviour, because:
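The behaviour requested here — eviction only on a real timeout, with internal failures leaving the cached value intact — could be sketched for a promise-returning function like this. Again a hypothetical illustration, not memoizee's API; the names, the injectable `now` clock, and the `refreshing` flag are assumptions for the sketch.

```javascript
// Hypothetical sketch, not memoizee's API: the cache entry is evicted only
// once maxAge has truly elapsed, and a failed prefetch simply leaves the old
// value in place (retried on the next access, with no throttling, as noted).
function memoizeAsyncWithPrefetch(fn, { maxAge, preFetch = 0.33, now = Date.now }) {
  const cache = new Map(); // key -> { value, expiresAt, refreshing }

  return function (key) {
    const t = now();
    const entry = cache.get(key);

    if (entry && t < entry.expiresAt) {
      const inPrefetchWindow = t >= entry.expiresAt - maxAge * preFetch;
      if (inPrefetchWindow && !entry.refreshing) {
        entry.refreshing = true;
        Promise.resolve(fn(key)).then(
          (value) => cache.set(key, { value, expiresAt: now() + maxAge, refreshing: false }),
          () => { entry.refreshing = false; } // internal failure: keep the stale value
        );
      }
      return Promise.resolve(entry.value); // never block on the background refresh
    }

    // Miss or genuine timeout: the only path that replaces an entry outright.
    return Promise.resolve(fn(key)).then((value) => {
      cache.set(key, { value, expiresAt: t + maxAge, refreshing: false });
      return value;
    });
  };
}
```

Note the asymmetry this buys: a rejected prefetch promise only resets the `refreshing` flag, so callers keep getting the old value until `maxAge` genuinely expires.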