Original issue created by crai...@microsoft.com on 2014-09-23 at 04:57 PM
Presently, size-based eviction divides maxWeight by the number of segments (the concurrency level), and each segment then checks individually on writes whether it needs to evict entries.
This leads to underutilization of RAM when the size of the data in each segment varies a lot. One can pick a larger maxWeight, of course, but then if the distribution of segment sizes changes, one is in danger of RAM genuinely being overcommitted. One might also pick a better hash function, but it is not clear that would solve this consistently.
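For reference, this is roughly the configuration in question. With Guava's CacheBuilder, maximumWeight is divided across segments, so each segment enforces only its share of the budget (the numbers below are illustrative):

```java
import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;

class EvictionExample {
  static final Cache<String, byte[]> CACHE = CacheBuilder.newBuilder()
      .maximumWeight(1_000_000)   // budget for the whole cache (illustrative number)
      .concurrencyLevel(16)       // table is split into up to 16 segments
      .weigher((String key, byte[] value) -> value.length)
      .build();
  // Internally each segment enforces roughly 1_000_000 / 16 = 62_500 on its own,
  // so a segment that happens to receive larger-than-average values begins
  // evicting while the cache as a whole is still far below 1_000_000.
}
```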
A better solution is to coordinate eviction across the segments so that the cache as a whole stays at or below maxWeight while individual segments are allowed to grow larger when they are imbalanced. The additional requirement is that when one segment is below its max and yet the cache as a whole is still over weight, one needs to poke at least one other segment to evict.
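A minimal sketch of what such coordination might look like, assuming a shared weight counter and a hypothetical Segment interface (none of these names are Guava internals or the actual code in the fork):

```java
import java.util.concurrent.atomic.AtomicLong;

/**
 * Hypothetical sketch of cross-segment eviction coordination; the Segment
 * interface and all names here are illustrative only.
 */
final class CoordinatedEviction {
  interface Segment {
    long weight();    // current total weight held by this segment
    long evictOne();  // evict one entry and return its weight (0 if empty)
  }

  private final long maxWeight;                             // budget for the whole cache
  private final AtomicLong totalWeight = new AtomicLong();  // shared across segments
  private final Segment[] segments;

  CoordinatedEviction(long maxWeight, Segment[] segments) {
    this.maxWeight = maxWeight;
    this.segments = segments;
  }

  /** Called by any segment after a write that adds {@code delta} weight. */
  void recordWrite(long delta) {
    long total = totalWeight.addAndGet(delta);
    // The writing segment may itself be under its old per-segment budget, so
    // evict from whichever segment is currently heaviest ("poking" it) until
    // the cache as a whole is back under the global budget.
    while (total > maxWeight) {
      long freed = heaviest().evictOne();
      if (freed == 0) break;                 // nothing left to evict anywhere
      total = totalWeight.addAndGet(-freed);
    }
  }

  private Segment heaviest() {
    Segment best = segments[0];
    for (Segment s : segments) {
      if (s.weight() > best.weight()) best = s;
    }
    return best;
  }
}
```

The design trade-off is that writes now touch shared state (the global weight counter) and may evict from segments other than the one being written, which costs some of the lock independence that segmenting was meant to provide.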
I have coded a solution to this in https://code.google.com/r/craigwi-guava/. The tests don't pass yet, and I'm not sure what to do about them; it would make sense to change the eviction unit tests to account for this change, but that hasn't been done yet.