loadAll being called once for each element of getAll #1697
Exactly the same problem for me. Can you advise us on a possible fix date?
I was able to reproduce it easily with the following code. We will now look for a fix.

```java
@Test
public void test() {
  CacheLoaderWriter<Integer, String> loaderWriter = new CacheLoaderWriter<Integer, String>() {
    @Override
    public String load(Integer integer) throws Exception {
      return integer.toString();
    }

    @Override
    public Map<Integer, String> loadAll(Iterable<? extends Integer> iterable) {
      System.out.println("loadAll called");
      return StreamSupport.stream(iterable.spliterator(), false)
          .peek(i -> System.out.println("Id: " + i))
          .map(e -> new AbstractMap.SimpleImmutableEntry<>(e, e.toString()))
          .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue));
    }

    @Override
    public void write(Integer integer, String s) {}

    @Override
    public void writeAll(Iterable<? extends Map.Entry<? extends Integer, ? extends String>> iterable) {}

    @Override
    public void delete(Integer integer) throws Exception {}

    @Override
    public void deleteAll(Iterable<? extends Integer> iterable) {}
  };

  CacheManagerBuilder cacheManagerBuilder = newCacheManagerBuilder();
  try (CacheManager cacheManager = cacheManagerBuilder.build(true)) {
    Cache<Integer, String> cache = cacheManager.createCache("test",
        CacheConfigurationBuilder.newCacheConfigurationBuilder(Integer.class, String.class,
            ResourcePoolsBuilder.heap(10)).withLoaderWriter(loaderWriter).build());
    cache.getAll(IntStream.range(1, 10).mapToObj(Integer::valueOf).collect(Collectors.toSet()));
  }
}
```
Hi Henry, do you have any clue about how to solve this challenge?
I've started to look at it. However, it is on hold right now. The issue is a bit complicated due to the way Stores are working. I need to reimplement `getAll` entirely. Do you need a workaround?
@henri-tremblay Any update on this? Would be interested in a workaround too, if available.
Still complicated, and it was put on ice for now. The best workaround I can see is to do the `loadAll` externally: you load everything you need in one batch yourself and then put it into the cache.
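To make that workaround concrete, here is a minimal plain-Java sketch (not Ehcache API; a `Map` stands in for the cache, and `fetchAll` is a hypothetical batch query against the system of record). It shows the shape of "load externally, then put": one round trip for all missing keys instead of one per key.

```java
import java.util.*;
import java.util.stream.Collectors;

public class ExternalLoadAllWorkaround {

    // Hypothetical stand-in for a batch query against the system of record:
    // one round trip for the whole key set instead of one per key.
    static Map<Integer, String> fetchAll(Set<Integer> keys) {
        return keys.stream().collect(Collectors.toMap(k -> k, String::valueOf));
    }

    // Find the keys the cache doesn't hold, fetch them in one batch,
    // install them, then answer the whole request from the cache.
    static Map<Integer, String> getAllThrough(Map<Integer, String> cache, Set<Integer> keys) {
        Set<Integer> missing = keys.stream()
                .filter(k -> !cache.containsKey(k))
                .collect(Collectors.toSet());
        if (!missing.isEmpty()) {
            cache.putAll(fetchAll(missing)); // with Ehcache: Cache#putAll / Cache#put
        }
        return keys.stream().collect(Collectors.toMap(k -> k, cache::get));
    }

    public static void main(String[] args) {
        Map<Integer, String> cache = new HashMap<>();
        cache.put(1, "1"); // already cached: must not be fetched again
        Map<Integer, String> result = getAllThrough(cache, new HashSet<>(Arrays.asList(1, 2, 3)));
        System.out.println(result); // {1=1, 2=2, 3=3}
    }
}
```

Note the tradeoff: unlike the built-in loader path, this gives up the per-key single-loader guarantee, so two threads racing on the same missing key may both hit the database.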
Actually this isn't a bug, but rather is a feature… by design. Possibly a non-desired feature though... In the meantime, as I share some responsibility in this design, let me try to explain why the behavior is as such.

### What the CacheLoader API aims at solving

Besides the problems around consistency a cache-through pattern addresses, the loader API enables readers to not have to synchronize around populating the cache. The loader will make sure only a single thread will populate the cache for a given key, while other threads accessing the same key will just block until the value is installed.

### The idea behind the design

The locking, and therefore the loading, happens per key. That's why, in the example above, you only get to have an `Iterable` of size 1 passed to each `loadAll` invocation.

### Simplicity

The current design enables entries being loaded to be made available as quickly as possible. It could be something else makes this impossible currently (I haven't read the code in a long while now), but this enables two threads each populating one of two keys being loaded.

```java
t1: cache.getAll(a, b);
t2: cache.get(b);
```

The first thread (`t1`) can be loading `a` while the second thread (`t2`) loads `b`, so neither has to wait on the other's key.

### Ordering

Things can get somewhat more tricky to deal with as well; imagine the case below:

```java
t1: cache.getAll(a, b);
t2: cache.getAll(b, a);
```

These two threads can now deadlock each other. Obviously ordering the lock acquisitions would be the way to solve this, but there is no constraint on the type of the keys that would make them orderable.

Now this can all be mitigated with other locking strategies, like installing some lock object in the mapping, so that threads interested in a key being loaded wait on that entry rather than on each other.

### So... what?

All this to say that the route chosen was to optimize for latency of the cache. I hope that the examples above show how that approach makes access to the cache somewhat fairer to all... While other options exist, they come at a higher price in terms of runtime because of the additional coordination requirements. Now maybe there are better ways of going about this though... I haven't given this any further thought for years, to be honest. Nonetheless, that's a vague attempt at explaining why things are the way they are; what challenges need to be accounted for when trying to "address this issue"; and, most importantly, what tradeoffs they may bring...
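For illustration, here is a self-contained sketch of the lock-ordering idea mentioned above. It is not Ehcache code, and it only works because the example keys are `Comparable` — exactly the constraint the explanation says the general API cannot assume. A `ConcurrentHashMap` of per-key locks stands in for the store:

```java
import java.util.*;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantLock;

public class OrderedLockSketch {

    // One lock per key; the map stands in for the store's internal structure.
    static final ConcurrentHashMap<Integer, ReentrantLock> LOCKS = new ConcurrentHashMap<>();

    // Acquire locks for all requested keys in their natural order, so that
    // getAll(a, b) and getAll(b, a) always lock in the same sequence and
    // therefore cannot deadlock each other.
    static List<ReentrantLock> lockAllOrdered(Collection<Integer> keys) {
        List<ReentrantLock> held = new ArrayList<>();
        for (Integer k : new TreeSet<>(keys)) { // sorted, deduplicated
            ReentrantLock lock = LOCKS.computeIfAbsent(k, x -> new ReentrantLock());
            lock.lock();
            held.add(lock);
        }
        return held;
    }

    static void unlockAll(List<ReentrantLock> held) {
        // Release in reverse acquisition order.
        for (int i = held.size() - 1; i >= 0; i--) {
            held.get(i).unlock();
        }
    }

    public static void main(String[] args) {
        List<ReentrantLock> held = lockAllOrdered(Arrays.asList(2, 1)); // locks 1 then 2
        unlockAll(held);
        System.out.println("acquired " + held.size() + " locks in order");
    }
}
```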
Highly interesting explanation. I'll take that into consideration. The problem is really the loader/writer: doing tons of queries to the database instead of one to load a cache isn't efficient at all. So we will have to find a way.
Also, note that `loadAll` never was intended to warm the cache up... If you'd want to do that (effectively), there'd be other means. We used to have a utility for that (including one that'd snapshot the current hot keyset at a given interval/shutdown and use that information to reload the cache upon restart). That _could_ use the same CacheLoaderWriter API.

Anyways, a matter of tradeoffs... I can understand that in case your SoR enables "batch loads", you may want to leverage that. One idea, thinking out loud here, is to support both approaches based on config: loader/writer optimized for either the cache or the SoR.

In all cases, best of luck :)
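The keyset-snapshot idea described above can be sketched in plain Java (again not Ehcache API: a `Map` stands in for the cache, and `fetchAll` is a hypothetical batch query against the SoR). At shutdown the hot keys are written to a file; at startup they are read back and fetched in a single batch:

```java
import java.io.IOException;
import java.nio.file.*;
import java.util.*;
import java.util.stream.Collectors;

public class KeysetWarmup {

    // At shutdown: snapshot the currently cached keys to disk.
    static void snapshotKeys(Set<Integer> hotKeys, Path file) throws IOException {
        Files.write(file, hotKeys.stream().map(String::valueOf).collect(Collectors.toList()));
    }

    // At startup: read the snapshot back, batch-load those keys in one go,
    // and install them in the cache before traffic arrives.
    static void warmUp(Map<Integer, String> cache, Path file) throws IOException {
        Set<Integer> keys = Files.readAllLines(file).stream()
                .map(Integer::valueOf)
                .collect(Collectors.toSet());
        cache.putAll(fetchAll(keys)); // one batch round trip to the SoR
    }

    // Hypothetical batch query against the system of record.
    static Map<Integer, String> fetchAll(Set<Integer> keys) {
        return keys.stream().collect(Collectors.toMap(k -> k, String::valueOf));
    }

    public static void main(String[] args) throws IOException {
        Path file = Files.createTempFile("hot-keys", ".txt");
        snapshotKeys(new HashSet<>(Arrays.asList(1, 2, 3)), file);

        Map<Integer, String> cache = new HashMap<>();
        warmUp(cache, file);
        System.out.println(cache.size()); // 3
    }
}
```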
Any news on this issue?
When using Ehcache3 (3.1.1) with an on-heap store, calls to getAll with a key set > 1 on a cold cache result in loadAll being called for each key separately. The iterable being passed to loadAll in CacheLoaderWriter always appears to have a size of 1, regardless of the number of keys in getAll.
Here is my cache configuration:
```java
cache = cacheManager.createCache(name,
    CacheConfigurationBuilder.newCacheConfigurationBuilder(key, value, ResourcePoolsBuilder.heap(entries))
        .withLoaderWriter(loader)
        .build());
```
See the comments on Stack Overflow.