Conditional/Delayed Eviction #147
Hi. Sorry for the late reply, and thank you for the request. Your idea seems feasible to me. Before we talk about it, let me explain how Moka's cache works today. When the cache's capacity is exceeded, there are two points at which an entry can be evicted:
Please see the TinyLFU diagram in this section of the wiki page. The TinyLFU policy may not work very well for certain access patterns, and I wonder if you are hitting this problem. The access pattern looks like the following:
While the cache has enough capacity, everything will be okay because new entries are admitted unconditionally. After admission they will be accessed, so they will build up popularity. Once the cache becomes full, however, this becomes a problem: new entries will not be admitted because their keys have never been accessed before, so they are less popular than the old ones, and they will be evicted immediately after insertion.

In the future, we will upgrade TinyLFU to W-TinyLFU, which has an admission LRU window in front of the LFU filter. W-TinyLFU will work better for the access pattern above because the LRU window helps new entries build up popularity. If you are hitting this problem, W-TinyLFU will mitigate it, and it will work automatically for all users with no need to set a callback.

So I wonder which one we should implement first: W-TinyLFU, or your conditional eviction idea? One could argue that W-TinyLFU still does not guarantee that an entry will not be evicted while it is alive (e.g. an entry with a very long lifetime).

If we implement your idea, we could add two pending eviction queues to the cache: one for 1., and another for 2. If the callback returned

Also, there are other cases in which an entry will be removed from the cache, so we need to decide what to do in those cases:
Hi! First, thanks so much for building moka! I recently encountered a situation where, when storing large cached values in `Arc`, the cache would evict an entry while it was still shared, so getting it again would result in a cache miss even though the object was still alive elsewhere. A rough idea I had for countering this is to allow setting a conditional eviction callback in the cache builder with the signature `Fn(&K, &T) -> bool`. When an entry is selected for eviction, the callback is invoked, and if it returns false the entry is kept in the cache. (In the common `Arc` case, this would be `|_, v| Arc::strong_count(v) == 1`, i.e. evict only when the cache holds the sole reference.) Entries that skip eviction could be added to a separate list that is revisited each time an item is evicted from the cache, with the potential to make it adaptive (e.g. items that are skipped repeatedly aren't checked on every iteration, saving overhead for long-lived entries). This is all a very rough sketch, but it could be helpful in high-performance scenarios.
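For concreteness, the proposed callback could be exercised like this. A std-only sketch: `can_evict`, the string key, and the `Arc<Vec<u8>>` value type are illustrative assumptions, not part of moka's API:

```rust
use std::sync::Arc;

// Sketch of the proposed conditional-eviction callback
// (`Fn(&K, &T) -> bool`): return true when the entry may be evicted,
// false to keep it. `can_evict` is a hypothetical name.
fn can_evict<K>(_key: &K, value: &Arc<Vec<u8>>) -> bool {
    // The cache's own Arc accounts for one strong reference, so a
    // count of 1 means no consumer is still holding the value.
    Arc::strong_count(value) == 1
}

fn main() {
    let in_cache = Arc::new(vec![0u8; 1024]);
    assert!(can_evict(&"key", &in_cache)); // sole reference: evictable

    let shared = Arc::clone(&in_cache); // a consumer still holds the value
    assert!(!can_evict(&"key", &in_cache)); // keep while shared
    drop(shared);
    assert!(can_evict(&"key", &in_cache)); // evictable again
}
```

Note that `Arc::strong_count` is only a momentary snapshot; another thread could clone or drop a reference right after the check, which is something a real design would have to account for.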