Provide the possibility to invalidate a cached Uni #231
Interesting, would you have a small snippet that would benefit from the feature? Also how about cacheWhile(Predicate) as an alternative name?
Yeah, it's all about naming ;) My concrete use-case is fetching JWT tokens from an auth-service.
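The JWT use-case can be illustrated without Mutiny at all. The sketch below (all names are invented for illustration) memoizes a token supplier "indefinitely", the analogue of the current cache() behaviour, and shows how an already-expired token keeps being served:

```java
import java.time.Instant;
import java.util.function.Supplier;

// Illustrative only: a token with an expiry, fetched through a supplier
// that we memoize naively and forever, mirroring Uni.cache().
public class StaleTokenDemo {
    record Token(String value, Instant expiresAt) {
        boolean isValid() { return Instant.now().isBefore(expiresAt); }
    }

    static int fetches = 0;

    // Pretend auth-service call; it returns a token that is already expired.
    static Token fetchToken() {
        fetches++;
        return new Token("jwt-" + fetches, Instant.now().minusSeconds(1));
    }

    // Naive "cache indefinitely": compute once, keep the item forever.
    static <T> Supplier<T> cacheIndefinitely(Supplier<T> upstream) {
        return new Supplier<>() {
            T cached;
            public synchronized T get() {
                if (cached == null) cached = upstream.get();
                return cached;
            }
        };
    }

    public static void main(String[] args) {
        Supplier<Token> cached = cacheIndefinitely(StaleTokenDemo::fetchToken);
        Token first = cached.get();
        Token second = cached.get();
        // The expired token is served again: this is the problem discussed here.
        System.out.println(second == first);  // true
        System.out.println(second.isValid()); // false
    }
}
```

Without a way to invalidate, the second subscriber receives the stale token; the feature request is about letting a predicate trigger a re-fetch instead.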
Interesting.
@heubeck we're more or less all on vacation or about to be, so don't be surprised if you don't hear anything back for the next 2-3 weeks. Meanwhile, if you have time and fancy looking at what's under the cover, feel free to hack and possibly open a pull request 😉
Already thought about doing this; I hope to free up some time. Have a good time, and thank you @jponge.
@jponge suggested cacheWhile(Predicate) as an alternative name. Wouldn't that pollute the Uni API? If breaking changes are accepted, I could try to refactor to a cache config similar to what you provide for most other API branches (please help me name it):
// current behaviour would become:
Uni.cache().indefinitely()
// Positive cache validity checking:
Uni<T>.cache().asLong(Predicate<T>)
// The other way around:
Uni<T>.cache().until(Predicate<T>)
If you do not want to enhance the caching stuff further, then I'm happy with Uni.cacheWhile(Predicate) side by side with Uni.cache(). Thank you for reading my thoughts; looking forward to implementing a proposal.
I like the idea of having a cache group.
`asLong` should be `atMost` to be homogeneous and receive a Duration.
We could still keep the current cache method, which would redirect to cache().indefinitely().
On Sat, Aug 1, 2020 at 00:21, Florian Heubeck <notifications@github.com> wrote:
… @jponge <https://github.com/jponge> suggested
Interesting, would you have a small snippet that would benefit from the
feature?
Also how about cacheWhile(Predicate) as an alternative name?
wouldn't that pollute the Uni API?
If breaking changes are accepted, I could try to refactor to a cache config
similar to what you provide for most other API branches (please help me name it):
// current behaviour would become:
Uni.cache().indefinitely()
// Positive cache validity checking:
Uni<T>.cache().asLong(Predicate<T>)
// The other way around:
Uni<T>.cache().until(Predicate<T>)
If you do not want to enhance the caching stuff further, then I'm happy
with Uni.cacheWhile(Predicate) side by side with Uni.cache().
Thank you for reading my thoughts, looking forward to implement a proposal.
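The cache-group shape floated above (indefinitely / atMost(Duration) / until(Predicate)) could be modelled outside Mutiny with plain suppliers. This is only an illustrative sketch of the proposed semantics under the names discussed in this thread; none of it is the actual Mutiny API:

```java
import java.time.Duration;
import java.time.Instant;
import java.util.function.Predicate;
import java.util.function.Supplier;

// Illustrative sketch only: maps the proposed cache().indefinitely() /
// atMost(Duration) / until(Predicate) shapes onto plain Suppliers.
public class CacheGroupSketch<T> {
    private final Supplier<T> upstream;
    private T cached;
    private boolean present;
    private Instant cachedAt;

    private CacheGroupSketch(Supplier<T> upstream) { this.upstream = upstream; }

    public static <T> CacheGroupSketch<T> cache(Supplier<T> upstream) {
        return new CacheGroupSketch<>(upstream);
    }

    // Current Uni.cache() behaviour: compute once, keep the item forever.
    public Supplier<T> indefinitely() {
        return until(item -> false); // never invalidated
    }

    // Time-based variant ("atMost" taking a Duration, as suggested above).
    public Supplier<T> atMost(Duration validity) {
        return supplierWith(() -> cachedAt.plus(validity).isBefore(Instant.now()));
    }

    // Predicate-based variant: recompute once the cached item tests invalid.
    public Supplier<T> until(Predicate<T> invalid) {
        return supplierWith(() -> invalid.test(cached));
    }

    private Supplier<T> supplierWith(Supplier<Boolean> shouldInvalidate) {
        return () -> {
            synchronized (this) {
                // Recompute on first use or when invalidation triggers.
                if (!present || shouldInvalidate.get()) {
                    cached = upstream.get();
                    cachedAt = Instant.now();
                    present = true;
                }
                return cached;
            }
        };
    }
}
```

In this reading, until(Predicate) drops the cached item and recomputes as soon as the predicate flags it invalid on a new subscription, which matches the "other way around" variant in the proposal.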
Thank you @cescoffier. Currently I have no idea how to provide a …
Looking into it, there is indeed an opportunity to explore another design. It's a more complex state machine if re-subscription happens. It's on my radar.
I've kept your PR open as a draft for reference, @heubeck. I've been exploring some designs with … Thinking out loud: on the other hand, I'm wondering if we are not trying to ask too much for what …
@jponge Don't think of it as "caching". A cache is a complex beast with explicit invalidation and timeouts and LRU eviction, etc. Think of it as memoization.
@gavinking spot on, memoization is a more appropriate term here.
@gavinking Out of curiosity, do you have examples where this would be useful in your usage of …
@jponge I have not found a use for it yet, but remember that so far we have not progressed much beyond just writing simple test cases for HR.
@gavinking thank you for introducing a better term. I'd like to recall my original case that raised the invalidation requirement: fetching JWT tokens from an auth-service. But I'm also with @jponge; maybe that's over-engineering within Uni. I liked the idea of handling simple caching cases directly, but if it doesn't fit the design, it can be left out and handled by the application itself. Please do not waste time on this if you are not confident about its usefulness. I had fun trying to solve it, and there will surely be handier challenges to implement.
@heubeck I'm still willing to explore and see if we can come up with a good solution here. Right now …
@jponge just a little idea: if solving the race conditions just for this case is too complicated or causes high synchronization overhead, it's probably not worth it.
@heubeck I pushed an early sketch / draft in #313. It provides … Note that in this design invalidation is checked on new subscriptions, which looks quite sane to reason about IMHO. If we also start checking when an item or failure is ready to be propagated downstream, then we may have starvation / infinite loops, especially in the …
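A hypothetical model of "invalidation checked on new subscriptions", including the race condition the previous comment worries about, can be built around an AtomicReference holding the in-flight computation: each subscriber tests the cached result first and, if it is invalid, races via compareAndSet to install a fresh computation, so concurrent subscribers trigger at most one recomputation. This is not the code in #313, just a sketch of the idea:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.atomic.AtomicReference;
import java.util.function.Predicate;
import java.util.function.Supplier;

// Illustrative model: memoize the upstream result until isValid rejects it,
// re-checking only when a new "subscription" (call to get) arrives.
public class MemoizeUntil<T> {
    private final Supplier<T> upstream;
    private final Predicate<T> isValid;
    private final AtomicReference<CompletableFuture<T>> ref = new AtomicReference<>();

    public MemoizeUntil(Supplier<T> upstream, Predicate<T> isValid) {
        this.upstream = upstream;
        this.isValid = isValid;
    }

    public T get() {
        while (true) {
            CompletableFuture<T> current = ref.get();
            // Reuse an in-flight computation, or a completed one that is still valid.
            // (A failed computation simply propagates its exception via join().)
            if (current != null && (!current.isDone() || isValid.test(current.join()))) {
                return current.join();
            }
            CompletableFuture<T> fresh = new CompletableFuture<>();
            if (ref.compareAndSet(current, fresh)) {
                try {
                    fresh.complete(upstream.get()); // we won the race: compute once
                } catch (RuntimeException e) {
                    fresh.completeExceptionally(e);
                    throw e;
                }
                return fresh.join();
            }
            // Lost the race: loop and piggyback on the winner's computation.
        }
    }
}
```

Checking validity only at subscription time keeps the state machine simple; checking again when the item is about to be emitted downstream is where the starvation / infinite-loop risk mentioned above would come from.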
Hello Mutiny team,
it would be great if a Uni created by Uni.cache() could be invalidated to cause a resubscription on the original Uni. I can imagine an overloaded version Uni.cache(Predicate<T> isValid) that is called on every subscription on the cached Uni; as soon as it returns false, the cached item is thrown away in favour of computing a new one. Thank you and keep on making our JVM world reactive 🚀