
Provide means to configure multithreaded access for cache abstraction [SPR-9254] #13892

Closed
spring-projects-issues opened this issue Mar 21, 2012 · 10 comments
Labels: has: votes-jira (migrated from JIRA with more than 10 votes at the time of import), in: core (core modules: aop, beans, core, context, expression), type: enhancement


spring-projects-issues commented Mar 21, 2012

Oliver Drotbohm opened SPR-9254 and commented

Imagine the following usage of the cache abstraction:

class MyComponent implements Component {

  @Cacheable("myCache")
  public Object myMethod(String key) {
    // expensive, long-running computation goes here
    return null; // placeholder
  }
}

Now assume the execution of myMethod(...) takes a huge amount of time (hence the caching in the first place), and further assume we access the method concurrently from two threads. The current implementation of the cache interceptor works as follows:

  • Thread 1 triggers the interceptor; we don't find a value for the key in the cache and thus invoke the method.
  • While the method invocation in thread 1 runs, thread 2 enters the interceptor with the same key; we don't find a value either and re-trigger the method call.

This is unfortunate, as the second invocation is entirely redundant: the second thread could simply wait for the first invocation to finish and use the value it returns. Worse, while the first thread calculates the value, pretty much every thread invoking the method with the same key re-triggers the calculation. So the longer the value calculation takes, the more likely we are to run into unnecessary duplicate invocations of the method.

So what I am suggesting is that we keep track of in-flight method invocations by key, block all concurrent accesses to that cache value until the first thread calculating it has finished, and then let the other threads use that value. The CacheBuilder API of the Google Guava project implements this out of the box.
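The requested per-key blocking semantics can be sketched with plain JDK means (this is an illustrative model, not Spring or Guava API; on Java 8+, ConcurrentHashMap.computeIfAbsent gives exactly this behaviour):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of the proposed behaviour: concurrent callers for the same key
// block until the first computation finishes, so the expensive method
// runs only once per key. Names here are illustrative.
public class BlockingCacheSketch {
    static final Map<String, Object> cache = new ConcurrentHashMap<>();
    static final AtomicInteger invocations = new AtomicInteger();

    static Object myMethod(String key) {
        // simulate a long-running computation
        try { Thread.sleep(100); } catch (InterruptedException e) { throw new RuntimeException(e); }
        invocations.incrementAndGet();
        return "value-for-" + key;
    }

    public static void main(String[] args) throws InterruptedException {
        // both threads request the same key; the second blocks on the first
        Runnable task = () -> cache.computeIfAbsent("key", BlockingCacheSketch::myMethod);
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println("invocations=" + invocations.get()); // the method ran only once
    }
}
```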


Affects: 3.1.1

Issue Links:

Referenced from: commits spring-projects/spring-data-redis@1f97623, spring-projects/spring-data-redis@ceccafb, spring-projects/spring-data-redis@5cc5da9, spring-projects/spring-data-redis@a03c444

17 votes, 25 watchers


spring-projects-issues commented Mar 26, 2012

Chris Beams commented

Ollie,

This makes a lot of sense. @Costin (now cc'd) - is this something you considered when developing the first cut of the caching work? Just asking in case there's anything we should watch out for that you might have already worked through, or perhaps there's a reason this wasn't initially implemented.

Slating for 3.1 M1 as this change will generally align well with the JCache (0.5) support work that's already planned there (#13417).

@Juergen (also cc'd) - feel free to reassign this to yourself since you're doing the JCache integration anyway.


spring-projects-issues commented Feb 7, 2013

Juergen Hoeller commented

From my perspective, @Cacheable's current behavior is a direct equivalent of the typical get-if-not-found-then-proceed-and-put-eventually code blocks found with programmatic cache interaction, so I don't think there is anything fundamentally wrong with it. I've seen plenty of caching code in the wild that works exactly like this.
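The get-if-not-found-then-proceed-and-put-eventually pattern Juergen describes can be modelled in a few lines; run concurrently, both threads typically miss and compute, which is the race this issue is about (illustrative names, not Spring internals):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

// A minimal model of @Cacheable's default, non-blocking behaviour:
// get, proceed on a miss, put eventually. Two simultaneous misses
// both invoke the expensive method.
public class RacyCacheSketch {
    static final Map<String, Object> cache = new ConcurrentHashMap<>();
    static final AtomicInteger invocations = new AtomicInteger();

    static Object cachedCall(String key) {
        Object value = cache.get(key);      // 1. get
        if (value == null) {                // 2. if not found, proceed
            value = expensiveMethod(key);
            cache.put(key, value);          // 3. put eventually
        }
        return value;
    }

    static Object expensiveMethod(String key) {
        invocations.incrementAndGet();
        try { Thread.sleep(100); } catch (InterruptedException e) { throw new RuntimeException(e); }
        return "value-for-" + key;
    }

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(() -> cachedCall("key"));
        Thread t2 = new Thread(() -> cachedCall("key"));
        t1.start(); t2.start();
        t1.join(); t2.join();
        // usually 2: both threads missed before either could put
        System.out.println("invocations=" + invocations.get());
    }
}
```

Nothing here is wrong per se, as Juergen notes; the cost is duplicated work, not incorrect results.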

That said, blocking until a value has been calculated would be a worthwhile option, trading extended locks (and lock checks, i.e. additional access to shared state) against recalculation of the return value. For the time being, I'm putting this into the 4.0 backlog, with further research to happen before making an eventual decision.

Note that this is a tradeoff, since locking for an extended period of time, spanning resource operations, is usually to be avoided at any cost. For example, the resource operation could hang for the initial invocation for any reason, while the same operation could succeed immediately for another thread, depending on which connection it got or any other subtle parameters of the original request.

Juergen


spring-projects-issues commented Jun 26, 2014

Stéphane Nicoll commented

The problem with this issue is that we are targeting the cache abstraction here, and locking in the abstraction sounds like a really bad idea. CacheInterceptor has an extended set of features, and bringing locks in there is far from ideal.

That being said, Ollie has a point: Guava supports this feature, but we can't use it because of the way the Guava cache is currently used within the abstraction. I am now looking into this, checking whether we could update our own abstraction to benefit from this feature when Guava is used.


spring-projects-issues commented Jul 1, 2014

Stéphane Nicoll commented

4.1 RC is around the corner and I am afraid this issue still needs more thought. It looks like the JSR-107 implementation could easily benefit from a Cache.get(key, Callable) construct, but our own CacheInterceptor supports more use cases and would be harder to port. Still, this remains an important caching feature that we'll probably revisit for 4.2.


spring-projects-issues commented Mar 3, 2015

drekbour commented

Quote from #16165:

How do you treat @Cacheable#unless while using a blocking cache?
If a key misses and the result is not supposed to be put, the other thread will be deadlocked.

We have a common use case:
@Cacheable(value = "myCache", unless = "#result==null")
We wanted the blocking behaviour described herein and tried it with Ehcache's BlockingCache, but it doesn't work.
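The interaction can be illustrated with ConcurrentHashMap (an illustrative model, not Ehcache's BlockingCache): when the computed result must not be stored, later callers miss again and recompute. ConcurrentHashMap merely repeats the work, whereas a naive lock-per-key blocking cache that never releases its "in progress" marker for a filtered-out result would leave the waiters deadlocked:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

// Why "unless = '#result==null'" interacts badly with blocking caches:
// a null result is not stored, so every subsequent lookup misses again.
public class UnlessNullSketch {
    static final ConcurrentHashMap<String, Object> cache = new ConcurrentHashMap<>();
    static final AtomicInteger invocations = new AtomicInteger();

    static Object lookup(String key) {
        // returning null from the mapping function means "store nothing"
        return cache.computeIfAbsent(key, k -> {
            invocations.incrementAndGet();
            return null; // result filtered out, as with unless = "#result==null"
        });
    }

    public static void main(String[] args) {
        lookup("missing");
        lookup("missing"); // misses again: nothing was cached the first time
        System.out.println("invocations=" + invocations.get());
    }
}
```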


spring-projects-issues commented Mar 3, 2015

Stéphane Nicoll commented

No, it won't. I think there's a fairly lengthy conversation in this issue that explains why such corner cases will not work. BlockingCache is very specific on its own, and supporting it behind the scenes in Spring's abstraction is not going to happen in its current form.


spring-projects-issues commented Dec 20, 2015

Ben Manes commented

Perhaps with 5.0 targeting Java 8, the complexity of this feature will diminish. The Cache interface could gain a default computeIfAbsent method, similar to the Collections Framework's. This would make it an optional feature of the implementation instead of placing the burden on the abstraction. That would definitely work for ConcurrentMapCache by using ConcurrentHashMap's atomic compute support.
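Ben's suggestion can be sketched as follows (a hypothetical interface, not Spring's actual org.springframework.cache.Cache; the point is the default method with an atomic override):

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ConcurrentHashMap;

// A default computeIfAbsent on the cache interface: implementations that
// can do it atomically override it; others inherit a non-atomic fallback.
public class ComputeIfAbsentSketch {

    interface SimpleCache {
        Object get(Object key);
        void put(Object key, Object value);

        // Non-atomic fallback: the racy get-compute-put sequence.
        default Object computeIfAbsent(Object key, Callable<Object> loader) {
            Object value = get(key);
            if (value == null) {
                try { value = loader.call(); } catch (Exception e) { throw new IllegalStateException(e); }
                put(key, value);
            }
            return value;
        }
    }

    static class ConcurrentMapSimpleCache implements SimpleCache {
        private final ConcurrentHashMap<Object, Object> store = new ConcurrentHashMap<>();
        public Object get(Object key) { return store.get(key); }
        public void put(Object key, Object value) { store.put(key, value); }

        // Atomic override: other callers of the same key block while loading.
        @Override
        public Object computeIfAbsent(Object key, Callable<Object> loader) {
            return store.computeIfAbsent(key, k -> {
                try { return loader.call(); } catch (Exception e) { throw new IllegalStateException(e); }
            });
        }
    }

    public static void main(String[] args) {
        SimpleCache cache = new ConcurrentMapSimpleCache();
        System.out.println(cache.computeIfAbsent("k", () -> "computed"));
        System.out.println(cache.get("k")); // the value was stored
    }
}
```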

Many cache implementations perform a pre-screening read to minimize lock contention, so that a hit performs like a plain lock-free read. Guava, Caffeine, and Ehcache 2's BlockingCache do this (and probably others, but I haven't checked). Ehcache 3 does not, and JCache cannot due to the contract of invoke(EntryProcessor). ConcurrentHashMap does not, because pre-screening is not a net win on 32+ core machines, but it's a recommended optimization to consider when using it as a cache.

Oddly, JCache's @CacheResult is contractually required not to be atomic and must use the racy get-compute-put approach. It's not entirely clear from the public discussions why this hardline view was taken.


spring-projects-issues commented Dec 21, 2015

Stéphane Nicoll commented

Ben, that is already implemented and about to be merged for 4.3. For the record, Ehcache 2 has programmatic locking support that we use, so you are not forced to use BlockingCache.


spring-projects-issues commented Dec 21, 2015

Stéphane Nicoll commented

This is now implemented via an opt-in sync attribute on the @Cacheable annotation. Behind the scenes, a new operation on Cache has been added to provide those semantics.

That new operation is implemented for our supported cache stores:
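Assuming the sync attribute as released in Spring Framework 4.3, the opt-in usage looks like this (a fragment, not standalone-runnable; it requires Spring on the classpath, and the class names are illustrative):

```java
import org.springframework.cache.annotation.Cacheable;

class MyComponent implements Component {

  // sync = true: concurrent callers for the same key block while the
  // first one computes the value, instead of each invoking the method.
  @Cacheable(cacheNames = "myCache", sync = true)
  public Object myMethod(String key) {
    // expensive, long-running computation goes here
    return null; // placeholder
  }
}
```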


spring-projects-issues commented Dec 22, 2015

Stéphane Nicoll commented

PR for Hazelcast has been submitted.
