
Improve ConcurrentLruCache performance #24469

Closed · bananayong wants to merge 1 commit

Conversation

@bananayong (Contributor):

ConcurrentLruCache can be improved.

  • manage the collection size manually (the `size()` operation on concurrent collections can be heavy)
  • check for a cache hit first (the size check is wasted work on a cache miss)
  • reduce the read-lock scope
  • use `map.get` to test the cache instead of `queue.remove`

Also, I think that using ConcurrentLruCache widely across Spring, instead of the various synchronized + LinkedHashMap combinations, would improve performance.
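
For illustration, here is a rough, simplified sketch of how the four bullets above could fit together in a ConcurrentLruCache-style class. This is not the exact PR diff; the field names (`size`, `queue`, `lock`) are assumed from the review discussion below:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedDeque;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;
import java.util.function.Function;

// Rough sketch of the four optimizations above; not the actual PR code.
class LruCacheSketch<K, V> {

    private final int maxSize;
    private final ConcurrentHashMap<K, V> cache = new ConcurrentHashMap<>();
    private final ConcurrentLinkedDeque<K> queue = new ConcurrentLinkedDeque<>();
    private final ReadWriteLock lock = new ReentrantReadWriteLock();
    private final Function<K, V> generator;
    private volatile int size;  // manual counter: ConcurrentLinkedDeque.size() is O(n)

    LruCacheSketch(int maxSize, Function<K, V> generator) {
        this.maxSize = maxSize;
        this.generator = generator;
    }

    public V get(K key) {
        // Check for a hit first via map.get; on a miss any size check is wasted.
        V cached = this.cache.get(key);
        if (cached != null) {
            if (this.size < this.maxSize) {
                // Under capacity: no eviction can happen, so skip the queue
                // reordering and the read lock entirely.
                return cached;
            }
            this.lock.readLock().lock();  // read lock held only for the reordering
            try {
                if (this.queue.removeLastOccurrence(key)) {
                    this.queue.offer(key);  // move key to most-recently-used position
                }
                return cached;
            } finally {
                this.lock.readLock().unlock();
            }
        }
        this.lock.writeLock().lock();
        try {
            cached = this.cache.get(key);  // re-check: a concurrent get may have won
            if (cached != null) {
                return cached;
            }
            V value = this.generator.apply(key);  // generate before touching bookkeeping
            int cacheSize = this.size;
            if (cacheSize == this.maxSize) {
                K leastUsed = this.queue.poll();  // evict least-recently-used key
                if (leastUsed != null) {
                    this.cache.remove(leastUsed);
                    cacheSize--;
                }
            }
            this.queue.offer(key);
            this.cache.put(key, value);
            this.size = cacheSize + 1;
            return value;
        } finally {
            this.lock.writeLock().unlock();
        }
    }
}
```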

@spring-projects-issues spring-projects-issues added the status: waiting-for-triage An issue we've not yet triaged or decided on label Feb 3, 2020
@rstoyanchev rstoyanchev added the in: web Issues in web modules (web, webmvc, webflux, websocket) label Mar 13, 2020
@bclozel bclozel self-assigned this Mar 13, 2020
```diff
- if (this.queue.size() == this.maxSize) {
+ int cacheSize = this.size;
+ if (cacheSize == this.maxSize) {
```
@Percy0601 commented on Mar 16, 2020:

I recommend using `>=` instead of `==`. That way, even if something goes wrong under the write lock and the size counter overshoots, the cache still cannot grow without limit.
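
For illustration, the defensive variant would be a one-character change (a sketch based on the diff above):

```java
// ">=" is self-healing: if the counter ever overshoots maxSize because of a
// bug or race under the write lock, eviction still fires, whereas with "=="
// the condition would never become true again and the cache would grow unbounded.
int cacheSize = this.size;
if (cacheSize >= this.maxSize) {
    // evict before inserting the new entry
}
```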

@bananayong (Contributor, Author):

I don't understand how a problem with the write lock could occur here.

@bananayong (Contributor, Author):

I reordered the `V value = this.generator.apply(key);` line to prevent a size inconsistency in case the generator throws an exception.
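
In other words, a sketch of the insertion path under the write lock (field names assumed from the diff above, not the exact PR code):

```java
// Call the generator first: if generator.apply(key) throws here, the queue,
// the map, and the size counter have not been touched yet, so they stay in sync.
V value = this.generator.apply(key);

// Only now mutate the bookkeeping.
this.queue.offer(key);
this.cache.put(key, value);
this.size++;
```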


OK, understood.

- manage collection size manually (size operation of concurrent collections can be heavy)
- check cache hit first (size check will be useless if cache miss)
- reduce read-lock scope
- use `map.get` to test cache instead of `queue.remove`

Signed-off-by: Kwangyong Kim <banana.yong@gmail.com>
@bananayong (Contributor, Author):

I tried a further improvement. If anyone is interested, please ping me and I will create a commit.

```java
import static java.util.Comparator.comparingLong;

import java.util.Collection;
import java.util.Collections;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

class ImprovedConcurrentLruCache<K, V> {

    private final int maxSize;

    private final ConcurrentHashMap<K, LruEntry<K, V>> cache = new ConcurrentHashMap<>();

    private final Function<K, V> generator;

    public ImprovedConcurrentLruCache(int maxSize, Function<K, V> generator) {
        this.maxSize = maxSize;
        this.generator = generator;
    }

    public V get(K key) {
        // Lock-free fast path: return immediately on a cache hit.
        V cached = getValue(key);
        if (cached != null) {
            return cached;
        }

        synchronized (this.cache) {
            // Re-check under the lock: another thread may have generated it.
            cached = getValue(key);
            if (cached != null) {
                return cached;
            }

            // At capacity: evict the entry with the oldest access time.
            if (this.cache.size() == this.maxSize) {
                Collection<LruEntry<K, V>> lruEntries = this.cache.values();
                LruEntry<K, V> eldestEntry = Collections.min(lruEntries, comparingLong(e -> e.lastAccess));
                this.cache.remove(eldestEntry.key);
            }

            // Generate the value before inserting, so a throwing generator
            // leaves the cache untouched.
            V value = this.generator.apply(key);
            this.cache.put(key, new LruEntry<>(key, value));

            return value;
        }
    }

    private V getValue(K key) {
        LruEntry<K, V> cached = this.cache.get(key);
        if (cached == null) {
            return null;
        }

        // Skip the timestamp update while the cache is less than half full:
        // eviction is far away, so the extra volatile write is not worth it.
        return cached.getValue(this.cache.size() < this.maxSize / 2);
    }

    private static class LruEntry<K, V> {
        private final K key;
        private final V value;
        private volatile long lastAccess;

        LruEntry(K key, V value) {
            this.key = key;
            this.value = value;
            this.lastAccess = System.nanoTime();
        }

        V getValue(boolean skipAccess) {
            if (skipAccess) {
                return this.value;
            }

            this.lastAccess = System.nanoTime();
            return this.value;
        }
    }
}
```

Any feedback is welcome. Thank you.
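
For context, here is a hypothetical usage of the class above, mirroring how Spring itself applies ConcurrentLruCache (for example to media-type parsing in MimeTypeUtils); the capacity of 64 and the class name `CacheUsageExample` are illustrative assumptions:

```java
import org.springframework.util.MimeType;
import org.springframework.util.MimeTypeUtils;

public class CacheUsageExample {

    public static void main(String[] args) {
        // Cache parsed MIME types: the generator runs only on a cache miss.
        ImprovedConcurrentLruCache<String, MimeType> mimeTypes =
                new ImprovedConcurrentLruCache<>(64, MimeTypeUtils::parseMimeType);

        MimeType json = mimeTypes.get("application/json");   // miss: invokes the parser
        MimeType again = mimeTypes.get("application/json");  // hit: lock-free fast path
        System.out.println(json + " / same cached instance: " + (json == again));
    }
}
```

One design note on the sketch: the `skipAccess` flag means that while the cache is under half capacity, reads avoid the volatile `lastAccess` write entirely, trading LRU precision for fewer contended writes.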

@bclozel bclozel closed this in 713a112 Mar 22, 2020
@bclozel bclozel added type: enhancement A general enhancement and removed status: waiting-for-triage An issue we've not yet triaged or decided on labels Mar 22, 2020
@bclozel bclozel added this to the 5.2.5 milestone Mar 22, 2020