In many cases, @Cacheable-annotated methods read a given resource (such as a database table), and @CacheEvict-annotated methods update that resource. Unfortunately, there is a race condition that can leave outdated contents in the cache. Consider the following schedule, with thread A reading the resource and thread B updating it:
A: entering @Cacheable
A: reading resource in state 1
B: entering @CacheEvict
B: updating resource to state 2
B: leaving @CacheEvict, invalidating the cache
A: leaving @Cacheable, writing state 1 to the cache
Now the resource is in state 2 while the cache still holds state 1, yet is considered valid.
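The interleaving above can be reproduced deterministically with a plain-Java sketch (all names hypothetical; a HashMap stands in for the cache and an int array for the resource):

```java
import java.util.HashMap;
import java.util.Map;

public class StaleCacheRace {
    public static void main(String[] args) {
        Map<String, Integer> cache = new HashMap<>();
        int[] resource = {1}; // resource starts in state 1

        // A: entering @Cacheable, reading resource in state 1
        int valueReadByA = resource[0];

        // B: entering @CacheEvict, updating resource to state 2
        resource[0] = 2;

        // B: leaving @CacheEvict, invalidating the cache
        cache.remove("key");

        // A: leaving @Cacheable, writing the stale state 1 to the cache
        cache.put("key", valueReadByA);

        // Resource is in state 2, but the cache holds state 1 and looks valid
        System.out.println("resource=" + resource[0] + " cache=" + cache.get("key"));
    }
}
```

Running this prints `resource=2 cache=1`: the eviction happened between A's read and A's cache write, so A's stale value survives the eviction.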
It would be great if there was some (optional) synchronization between the two cache annotations, that prevents such a race condition, if possible even in multi-node configurations.
Closely related to #13892, which deals with concurrency issues for multiple calls to the same @Cacheable method. This issue deals with concurrent access across related @Cacheable / @CacheEvict methods.
Discussing this with Mario, I think the crucial scenario is slightly different: an invocation of an @CacheEvict method essentially has to be considered a "cache value calculating" method, since it clears the cache for at least one particular id. It should therefore block concurrent access to cached methods of the same cache and key, which would then trigger recalculation of the value.
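That semantics could be realized with a per-key lock shared between the "cacheable" read and the "evicting" update, so an in-flight read can never publish a value after the evict completes. A minimal plain-Java sketch, with hypothetical names standing in for what the abstraction would do:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantLock;
import java.util.function.Supplier;

// Hypothetical sketch: per-key locks shared by "cacheable" reads
// and "evicting" writes of the same cache and key.
public class PerKeyLockingCache<K, V> {
    private final Map<K, V> cache = new ConcurrentHashMap<>();
    private final Map<K, ReentrantLock> locks = new ConcurrentHashMap<>();

    private ReentrantLock lockFor(K key) {
        return locks.computeIfAbsent(key, k -> new ReentrantLock());
    }

    // Corresponds to an @Cacheable method: load and cache under the key's lock.
    public V get(K key, Supplier<V> loader) {
        ReentrantLock lock = lockFor(key);
        lock.lock();
        try {
            return cache.computeIfAbsent(key, k -> loader.get());
        } finally {
            lock.unlock();
        }
    }

    // Corresponds to an @CacheEvict method: update the resource and
    // invalidate while holding the same lock, so no stale read of this
    // key can be published concurrently.
    public void evict(K key, Runnable resourceUpdate) {
        ReentrantLock lock = lockFor(key);
        lock.lock();
        try {
            resourceUpdate.run();
            cache.remove(key);
        } finally {
            lock.unlock();
        }
    }

    public static void main(String[] args) {
        int[] resource = {1};
        PerKeyLockingCache<String, Integer> c = new PerKeyLockingCache<>();
        System.out.println(c.get("k", () -> resource[0]));
        c.evict("k", () -> resource[0] = 2);
        System.out.println(c.get("k", () -> resource[0]));
    }
}
```

This is an in-process sketch only; a multi-node setup would need an equivalent distributed per-key lock, which is exactly the part no surveyed provider seems to offer out of the box.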
From my perspective this is a pretty major issue, because it means that using @Cacheable in a multi-threaded situation is not reliable. Here is a scenario where I want to use @Cacheable but can't because of this issue.
A user logs into my system and I need to pull back all the permissions they have. This is a very expensive operation requiring multiple DB queries, so I want to use @Cacheable to cache it. If a user gets a new permission, I need to invalidate the cache; if a user has a permission revoked, I need to invalidate the cache. But the problem exposed in this issue means that @Cacheable is not usable in a situation where I don't want to evict an item from the cache but rather replace it, and I can't afford a stale view of the data. I hope this gets fixed soon.
We looked at this thoroughly and discussed internally what we could do at the abstraction level.
A general mechanism for synchronizing evict and put operations would probably ruin performance. Besides, I haven't found a cache provider that offers such a feature.
Do you have concrete proposals for implementing this efficiently against specific cache providers? In particular, how would you reduce the locking to just the specific cache entry, as opposed to blocking access to the entire cache region?
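For an in-process provider, one way to get entry-scoped rather than region-scoped blocking is to let the map itself do the locking: `ConcurrentHashMap.compute` and `computeIfAbsent` serialize operations on a single entry while leaving other keys fully concurrent. A hedged sketch (hypothetical class, not a proposal for the abstraction's API):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

// Hypothetical sketch: ConcurrentHashMap.compute serializes operations
// per entry, not per cache region, so only same-key accesses block each other.
public class EntryScopedCache<K, V> {
    private final ConcurrentHashMap<K, V> cache = new ConcurrentHashMap<>();

    public V getOrLoad(K key, Supplier<V> loader) {
        // Locks only this key's entry; other keys stay fully concurrent.
        return cache.computeIfAbsent(key, k -> loader.get());
    }

    public void evict(K key, Runnable resourceUpdate) {
        // compute() holds the entry lock while the resource is updated, so a
        // concurrent getOrLoad for the same key waits and then reloads fresh.
        cache.compute(key, (k, v) -> {
            resourceUpdate.run();
            return null; // returning null removes the mapping
        });
    }

    public static void main(String[] args) {
        int[] resource = {1};
        EntryScopedCache<String, Integer> c = new EntryScopedCache<>();
        System.out.println(c.getOrLoad("k", () -> resource[0]));
        c.evict("k", () -> resource[0] = 2);
        System.out.println(c.getOrLoad("k", () -> resource[0]));
    }
}
```

The caveats are real, though: a slow resource update runs while the entry (and possibly its hash bin) is locked, and none of this extends to distributed providers, which would need a comparable per-key primitive on their side.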