Search before asking
- I had searched in the issues and found no similar issues.
Description
The current release() behavior throws away the just-released item when cache usage is larger than the capacity, which is the opposite of the LRU policy: hot data may be phased out while cold entries stay in the cache.
void LRUCache::release(Cache::Handle* handle) {
    ...
    LRUHandle* e = reinterpret_cast<LRUHandle*>(handle);
    {
        std::lock_guard l(_mutex);
        last_ref = _unref(e);
        if (last_ref) {
            _usage -= e->total_size;
        } else if (e->in_cache && e->refs == 1) {
            if (_usage > _capacity) {
                // throw away the just used item directly here when cache is full
                // I think it disobeys the LRU policy
                bool removed = _table.remove(e);
                DCHECK(removed);
                e->in_cache = false;
                _unref(e);
                _usage -= e->total_size;
                last_ref = true;
            } else {
                ...
            }
        }
    }
    ...
    // free handle out of mutex
    if (last_ref) {
        e->free();
    }
}
Solution
Instead, call _evict_from_lru to start an LRU eviction, or simply tolerate the small overflow with a dynamic threshold. Either way, the just-released hot item should be kept and treated as the most recently used entry in the LRU list.
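A minimal sketch of the first suggestion, applied to the excerpt above. The _evict_from_lru(size_t charge, LRUHandle** to_remove_head) helper and the _lru_append call are assumptions about the cache internals (a LevelDB-style eviction that unlinks cold entries from the LRU tail and collects them for freeing outside the mutex); the actual names and signatures in the code base may differ:

void LRUCache::release(Cache::Handle* handle) {
    ...
    LRUHandle* e = reinterpret_cast<LRUHandle*>(handle);
    LRUHandle* to_remove_head = nullptr;  // victims collected under the lock
    {
        std::lock_guard l(_mutex);
        last_ref = _unref(e);
        if (last_ref) {
            _usage -= e->total_size;
        } else if (e->in_cache && e->refs == 1) {
            if (_usage > _capacity) {
                // Keep the just-released (hot) entry; evict cold entries from
                // the LRU tail instead. _evict_from_lru is an assumed helper
                // that unlinks victims until usage fits the capacity and
                // chains them onto to_remove_head.
                _evict_from_lru(/*charge=*/0, &to_remove_head);
            }
            // Re-insert the hot entry at the most-recently-used end of the
            // LRU list (assumed list helper; the real name may differ).
            _lru_append(&_lru, e);
        }
    }
    // free the evicted victims and the handle out of the mutex
    while (to_remove_head != nullptr) {
        LRUHandle* next = to_remove_head->next;
        to_remove_head->free();
        to_remove_head = next;
    }
    if (last_ref) {
        e->free();
    }
}

Collecting the victims under the lock and freeing them afterwards follows the same pattern as the existing "free handle out of mutex" comment, so the eviction work does not lengthen the critical section.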
Are you willing to submit PR?
- Yes I am willing to submit a PR!
Code of Conduct
- I agree to follow this project's Code of Conduct