Cache Stampeding #162
Not really, but I guess the technique of acquiring a lock, even though it sounds a bit orthogonal to the purpose of reactive programming, could be applied at the point where the … Do you have any pseudocode (or real code, even) to illustrate what you mean by the additional operation and the enabling of putIfAbsent?
Here is an example of a writer function whose output would feed the value that ends up being returned. Currently `Mono.fromRunnable` is used; I have switched this to `Mono.fromCallable` with the intent that the caller can re-use this signal rather than the originally generated signal. This still doesn't solve the stampede issue.
Caffeine has the ability to compute on a get iff the value does not exist. Could something like this be achieved without a block? Maybe a block is to be expected in this case.
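For reference, the JDK offers the same compute-on-get-iff-absent semantics via `ConcurrentHashMap.computeIfAbsent` (Caffeine's `Cache.get(key, mappingFunction)` behaves similarly). A minimal sketch of the blocking variant, with a made-up `expensiveLoad` standing in for the real lookup:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ComputeIfAbsentDemo {
    static int calls = 0;

    // Stand-in for an expensive load (e.g. a remote call).
    static String expensiveLoad(String key) {
        calls++;
        return "value-for-" + key;
    }

    public static void main(String[] args) {
        Map<String, String> cache = new ConcurrentHashMap<>();
        // computeIfAbsent runs the mapping function atomically: concurrent
        // callers for the same key wait, but only one of them computes.
        String v1 = cache.computeIfAbsent("foo", ComputeIfAbsentDemo::expensiveLoad);
        String v2 = cache.computeIfAbsent("foo", ComputeIfAbsentDemo::expensiveLoad);
        System.out.println(v1 + " " + v2 + " calls=" + calls);
    }
}
```

The second `computeIfAbsent` never invokes the loader, so `calls` stays at 1; the cost is that the mapping function runs while holding the bin lock, i.e. it does block.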
Thanks. For writers that support atomic operations along the lines of …
@dave-fl if the …
For the secondary write with something like this, you end up guaranteeing that the first value to enter is the one shared among all callers, but you can still stampede. I guess the question is: does it make sense to use block in this case if the underlying Cache can guarantee that it will insert iff the value does not exist? Is there a way to make this generic?
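One generic way to avoid the stampede entirely, sketched here with JDK-only types as an analogue (not the CacheMono API; `lookup` and `load` are illustrative names): store the in-flight computation itself in the map, so the atomic insert picks a single winner and every other caller shares its result.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

public class NoStampedeCache {
    static final ConcurrentHashMap<String, CompletableFuture<String>> cache =
            new ConcurrentHashMap<>();
    static final AtomicInteger loads = new AtomicInteger();

    // The map stores the in-flight computation, not the finished value.
    // computeIfAbsent atomically picks a single winner; every other caller
    // for the same key shares the winner's future instead of loading again.
    static CompletableFuture<String> lookup(String key) {
        return cache.computeIfAbsent(key,
                k -> CompletableFuture.supplyAsync(() -> load(k)));
    }

    // Stand-in for an expensive load; counts how often it actually runs.
    static String load(String key) {
        loads.incrementAndGet();
        return "value-for-" + key;
    }

    public static void main(String[] args) {
        for (int i = 0; i < 5; i++) {
            lookup("foo"); // 5 lookups, but at most one real load
        }
        System.out.println(lookup("foo").join() + " loads=" + loads.get());
    }
}
```

The same shape works with a `Mono` in place of the `CompletableFuture`, provided the stored `Mono` replays its result to late subscribers (e.g. via `cache()`), which is exactly the idea discussed below.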
Generic lookup and write, no stampede.
@simonbasle If the concern here is that subsequently the … E.g. all elements come in with the same key: A5, A4, A3, A2, A1, grouped by key. A1 is in progress; A2 cannot go until A1 completes. A2 checks whether the cache has been populated before proceeding; if it has, then return the cache.
@dave-fl the problem here is that there are 2 levels: the one where you obtain a … Seems that starts to point us toward a locking mechanism where the … In the light of all this, I wonder if the following would work:

```java
Cache<String, Mono<T>> underlyingCache; //pseudocode for cache

public Mono<T> toBeCached(String key) {
    return WebClient.get("/" + key).retrieve().bodyToMono(T.class);
}

public Mono<T> cachedFooCall() {
    String key = "foo";
    return underlyingCache.computeIfAbsent(key, k -> toBeCached(k).cache());
}
```
@smaldini any thoughts?
@simonbasle I'm not sure why it was decided to use Signals instead of the Mono, but this looks a lot simpler than the locking code I was experimenting with. One thing to point out: if a Mono results in an error or completes empty rather than emitting a next, the user might not want to cache that value and will want to invalidate the key. During the brief window between when the invalid Mono was stored and when its key was invalidated, it would be possible to get a cached invalid Mono. It might be desirable to automatically detect these conditions and try to compute the Mono again (ignoring the cache).
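One way to get that "don't cache errors" behavior, again sketched with JDK types as an analogue (the `lookup`/`load` names and the fail-once flag are made up for illustration): after a failed computation, conditionally remove exactly the failed entry so the next caller recomputes, while a concurrently-installed newer entry is left untouched.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;

public class InvalidateOnError {
    static final ConcurrentHashMap<String, CompletableFuture<String>> cache =
            new ConcurrentHashMap<>();
    static boolean failFirst = true; // makes the first load fail, for the demo

    static String lookup(String key) {
        CompletableFuture<String> f = cache.computeIfAbsent(key,
                k -> CompletableFuture.supplyAsync(() -> load(k)));
        try {
            return f.join();
        } catch (RuntimeException e) {
            // Evict exactly this failed entry so the next caller retries;
            // remove(key, f) will not clobber a newer successful entry.
            cache.remove(key, f);
            throw e;
        }
    }

    static String load(String key) {
        if (failFirst) {
            failFirst = false;
            throw new RuntimeException("boom");
        }
        return "value-for-" + key;
    }

    public static void main(String[] args) {
        try {
            lookup("foo");
        } catch (RuntimeException e) {
            System.out.println("first attempt failed");
        }
        System.out.println(lookup("foo"));
    }
}
```

Note the window you describe still exists (between the failure and the `remove`), but callers in that window share the failed future and retry on their next call, rather than replaying a stale error forever.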
Just answering my own question: a valid reason to use a Signal instead of a Mono might be that there is time-sensitive information, e.g. a token that could become invalid by the time the Mono is subscribed to. Using the Signal forces the item to have completed before being inserted. I could see both scenarios being valid.
@dave-fl see the discussion in #131 (I needed to re-update my brain with that discussion BTW 😄). The base rationale for … For positive results only, the approach I described above is good enough (and has the added benefit of better handling stampedes?).
superseded by #237
With the current Cache Mono implementation, is there any way to avoid cache stampeding?
Additionally, when the cache is updated by the writer, shouldn't this value be passed on rather than using the original value that is to be inserted from the supplier? This would allow for putIfAbsent.
Perhaps an additional lookup operation needs to be added, which could also take a parameter to add a signal iff it does not exist, as an atomic operation.
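For illustration, this is the putIfAbsent contract as found in the JDK's `ConcurrentMap` (used here as an analogue for whatever the writer's backing store offers): the caller must propagate the value that actually won the race, not necessarily the one it supplied.

```java
import java.util.concurrent.ConcurrentHashMap;

public class PutIfAbsentDemo {
    public static void main(String[] args) {
        ConcurrentHashMap<String, String> cache = new ConcurrentHashMap<>();
        // putIfAbsent returns the previous value, or null if the key was free.
        String prev = cache.putIfAbsent("foo", "first");  // null: our value won
        String lost = cache.putIfAbsent("foo", "second"); // "first": we lost
        // The value to pass downstream is the one actually stored, not
        // necessarily the one this caller supplied:
        String winner = (lost != null) ? lost : "second";
        System.out.println(prev + " / " + winner + " / " + cache.get("foo"));
        // prints "null / first / first"
    }
}
```

This is exactly why the writer's stored value, rather than the supplier's original value, should feed what gets returned to the caller.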