Recursive cache loading never terminates #89
Unfortunately, a different map implementation would either be deadlock-prone, use non-atomic computes, or require a single exclusive lock. There is also the case of (A) => (B) and (B) => (A), so at best only an ordered set of computations could complete. As noted in your link, I worked with Doug to detect and fail fast, because it's an easy mistake to make. The cache's JavaDoc does warn that recursive computations are prohibited. A workaround for ordered computations is to use …
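The fail-fast behaviour mentioned above can be seen with the JDK's own `ConcurrentHashMap` (which backs Caffeine). A minimal stdlib sketch, not Caffeine's API: a compute function that re-enters the map for the same key is detected and rejected rather than left to deadlock.

```java
import java.util.concurrent.ConcurrentHashMap;

// Sketch of the detect-and-fail-fast behaviour: ConcurrentHashMap
// notices a computeIfAbsent that re-enters the mapping it is currently
// establishing and throws IllegalStateException ("Recursive update").
public class RecursiveComputeDemo {
    static boolean detectsRecursion() {
        ConcurrentHashMap<String, Integer> map = new ConcurrentHashMap<>();
        try {
            // The "loader" re-enters the map for the key being computed.
            map.computeIfAbsent("a", k -> map.computeIfAbsent("a", k2 -> 1) + 1);
            return false; // recursion went unnoticed (not expected)
        } catch (IllegalStateException expected) {
            return true;  // recursive update detected, failed fast
        }
    }

    public static void main(String[] args) {
        System.out.println("fail-fast: " + detectsRecursion());
    }
}
```

Note that detection only covers same-mapping re-entry; recursive computes on different keys can still livelock or deadlock, which is why the JavaDoc forbids them outright.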
As you pointed out, this kind of thing only really makes sense for ordered computations, such as a tree structure that shares subtrees. I didn't spot the JavaDoc; it could be worth adding your comments about ordered computations there, to help point anyone else who stumbles across this kind of problem in the right direction. Your tip about using …
Yes, it should be deadlock-free when ordered. In the regular case, an unordered computation might deadlock due to the computation itself. The ordered case would also be prone to deadlock due to the hash table resizing during the operations. The latter case is why …

In the asynchronous case, the map operations are immediate, with no dependencies. The futures may deadlock on each other when unordered, but if ordered then it is a linear dependency chain and shouldn't cause a problem. The …
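The decoupling described here can be sketched with plain `CompletableFuture` and an unbounded pool (this is an illustration of the idea, not Caffeine's implementation; the `AsyncMemoizer` name and the Fibonacci example are hypothetical). The future is inserted into the map first, and the computation is submitted to the executor afterwards, so the map operation never runs user code. Because each key depends only on smaller keys, the dependency chain is ordered and acyclic, and the unbounded pool guarantees no thread starvation.

```java
import java.util.Map;
import java.util.concurrent.*;

// Sketch: insert the future, then execute the computation, so the map
// operation completes before any recursive lookups begin.
public class AsyncMemoizer {
    static final Map<Integer, CompletableFuture<Long>> cache = new ConcurrentHashMap<>();
    static final ExecutorService pool = Executors.newCachedThreadPool(); // unbounded

    static CompletableFuture<Long> fib(int n) {
        CompletableFuture<Long> fresh = new CompletableFuture<>();
        CompletableFuture<Long> prior = cache.putIfAbsent(n, fresh);
        if (prior != null) return prior;     // another caller is computing it
        // The map operation above has already finished; only now is the
        // computation scheduled, so recursion happens outside the map.
        pool.execute(() -> {
            if (n <= 1) { fresh.complete((long) n); return; }
            fib(n - 1).thenCombine(fib(n - 2), Long::sum)
                      .whenComplete((v, e) -> {
                          if (e != null) fresh.completeExceptionally(e);
                          else fresh.complete(v);
                      });
        });
        return fresh;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(fib(30).get());
        pool.shutdown();
    }
}
```

A caller-runs policy would break this: the computation would execute on the inserting thread during the lookup chain, reintroducing the coupling the unbounded pool exists to avoid.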
Great, thanks for your comments, it's nice to see a library maintained by someone who clearly knows their domain very well!
I just encountered this problem when looking for a replacement because of google/guava#1881 |
I agree it shouldn't result in an infinite loop, but it is not something I can fix locally without a penalty. It is a bug within … I brought this issue up and had it fixed in the 2011 preview and, after discovering it had regressed, worked with Doug to fix it in 2014. Unfortunately that was never brought into a Java 8 release, so we have to wait for 9. Until then I'd need to embed the corrected hash table, which I've been holding off on. (Note that I also fixed recursion in Guava's loading and reported it as regressed in their new compute() support.)

All of the computing methods are documented; e.g. CacheLoader states, "Warning: loading must not attempt to update any mappings of this cache directly."

Yes, an unbounded pool is ideal for async recursion. It relies on decoupling the map operation from the computation, so that the future is inserted and then executed, rather than executed during the insert. This means that a caller-runs policy cannot be used (e.g. a same-thread executor), which is ensured by an unbounded pool.
This reverts commit 00fc15a.

With Caffeine, accessing the cache from the cache loader is causing an infinite loop [1]. On stable-2.14, the only circular dependency was removed, but it looks like there is another one on stable-2.15. Revert the change until the circular dependency is removed or a solution is found to access the cache from the cache loader.

The test going into an infinite loop is RevisionDiffIT.rebaseHunksOneLineApartFromRegularHunkAreIdentified.

[1] ben-manes/caffeine#89
Bug: Issue 8464
Change-Id: If65560b4a9bfcf0a03decaedd83ad000c6b28f4f
This reverts commit 00fc15a.

With Caffeine, accessing the cache from the cache loader is causing an infinite loop [1]. Merging that change up to stable-2.15 exposed that there is still at least one cache loader (PatchListLoader) accessing its cache (PatchListCacheImpl). The test going into an infinite loop on stable-2.15 is RevisionDiffIT.rebaseHunksOneLineApartFromRegularHunkAreIdentified. Revert the change until the circular dependency is removed or a solution is found to access the cache from the cache loader.

[1] ben-manes/caffeine#89
Bug: Issue 8464
Change-Id: If65560b4a9bfcf0a03decaedd83ad000c6b28f4f
Caffeine seems to exhibit the same kind of behaviour as described in this post, I guess because it uses ConcurrentHashMap under the hood. Is there any way to specify an alternative backing map other than ConcurrentHashMap to work around this?