
[Enhancement] better release logic in LRU cache #18047

@Hastyshell


Search before asking

  • I had searched in the issues and found no similar issues.

Description

The current release behavior throws the item away when cache usage exceeds capacity, which is the opposite of the LRU policy. As a result, hot data can be evicted as soon as the cache is full.

void LRUCache::release(Cache::Handle* handle) {
    ...
    LRUHandle* e = reinterpret_cast<LRUHandle*>(handle);
    {
        std::lock_guard l(_mutex);
        last_ref = _unref(e);
        if (last_ref) {
            _usage -= e->total_size;
        } else if (e->in_cache && e->refs == 1) {
            if (_usage > _capacity) {
                // the just-released item is thrown away directly when the cache is full;
                // this appears to disobey the LRU policy
                bool removed = _table.remove(e);
                DCHECK(removed);
                e->in_cache = false;
                _unref(e);
                _usage -= e->total_size;
                last_ref = true;
            } else {
                ...
            }
        }
    }
    ...
    // free handle out of mutex
    if (last_ref) {
        e->free();
    }
}

Solution

Instead, call _evict_from_lru to trigger a normal LRU eviction, or simply tolerate the small overflow with a dynamic threshold. Either way, the just-used hot item should stay in the LRU list as the most recently used entry.
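
For illustration, here is a minimal standalone sketch of the intended behavior (not Doris code: ToyLRUCache is a single-threaded toy without handle/refcount machinery, and all names are hypothetical). Releasing a pinned entry while the cache is over capacity moves that entry to the MRU end and evicts from the cold end, so the just-used item survives:

// Standalone toy illustrating the proposed behavior (not Doris code): when a
// pinned entry is released while the cache is over capacity, the entry is
// moved to the MRU end and eviction proceeds from the LRU end, so the entry
// that was just in use survives while cold entries are dropped.
#include <cassert>
#include <list>
#include <string>
#include <unordered_map>

class ToyLRUCache {
public:
    explicit ToyLRUCache(size_t capacity) : _capacity(capacity) {}

    // Insert a new entry; it starts pinned (the caller holds a reference).
    void insert(const std::string& key, size_t size) {
        _lru.push_front({key, size, /*pinned=*/true});
        _index[key] = _lru.begin();
        _usage += size;
    }

    // Release the caller's pin. Instead of discarding the entry when the
    // cache is over capacity, keep it as most-recently-used and evict from
    // the cold end of the list.
    void release(const std::string& key) {
        auto it = _index.at(key);
        it->pinned = false;
        _lru.splice(_lru.begin(), _lru, it);  // move to MRU position
        while (_usage > _capacity && !_lru.empty()) {
            auto victim = std::prev(_lru.end());
            if (victim->pinned) break;        // never evict pinned entries
            if (victim == it) break;          // keep the just-released hot entry
            _usage -= victim->size;
            _index.erase(victim->key);
            _lru.erase(victim);
        }
    }

    bool contains(const std::string& key) const { return _index.count(key) > 0; }

private:
    struct Entry {
        std::string key;
        size_t size;
        bool pinned;
    };
    size_t _capacity;
    size_t _usage = 0;
    std::list<Entry> _lru;  // front = MRU, back = LRU
    std::unordered_map<std::string, std::list<Entry>::iterator> _index;
};

int main() {
    ToyLRUCache cache(2);
    cache.insert("cold", 1);
    cache.release("cold");
    cache.insert("hot", 1);
    cache.insert("extra", 1);   // cache is now over capacity
    cache.release("hot");       // with the proposed logic, "cold" is evicted, not "hot"
    cache.release("extra");
    assert(cache.contains("hot"));
    assert(!cache.contains("cold"));
    return 0;
}

In the real LRUCache::release, the equivalent change would presumably be to append e back to the LRU list as usual and then run the existing eviction path (e.g. _evict_from_lru) while holding _mutex, freeing any evicted handles outside the lock just as release already does for last_ref.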

Are you willing to submit PR?

  • Yes I am willing to submit a PR!

Code of Conduct
