Improve ConcurrentLruCache performance #24469
Conversation
```diff
-if (this.queue.size() == this.maxSize) {
+int cacheSize = this.size;
+if (cacheSize == this.maxSize) {
```
I recommend using `>=` instead of `==`. Even if there is a problem with the write lock, it will not cause the cache to grow without limit.
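To illustrate the point being made here, the following is a hypothetical sketch (not code from this PR) showing why a `>=` check is more defensive than `==`: if a race ever lets the collection overshoot `maxSize`, a `>=` loop still shrinks it back below the limit, whereas an `==` check would never fire again.

```java
import java.util.ArrayDeque;
import java.util.Deque;

class EvictionSketch {

    // Evicts with '>=' so that even an overshoot past maxSize
    // (e.g. caused by a race on the write path) is corrected.
    static <K> void evictIfNeeded(Deque<K> queue, int maxSize) {
        while (queue.size() >= maxSize) {
            queue.pollLast();
        }
    }

    public static void main(String[] args) {
        Deque<Integer> queue = new ArrayDeque<>();
        // Simulate a queue that has overshot maxSize = 10.
        for (int i = 0; i < 12; i++) {
            queue.addFirst(i);
        }
        evictIfNeeded(queue, 10);
        System.out.println(queue.size()); // prints 9: back under the limit
    }
}
```

With `==` in place of `>=`, the loop body would never run here and the queue would stay at 12 entries.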
I don't understand how a problem with the write lock could occur.
I reordered the `V value = this.generator.apply(key);` line to prevent size inconsistency in case the generator throws an exception.
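As a hypothetical sketch of the reordering described above (the names below are illustrative, not from the PR): if the generator is invoked *before* the size counter is touched, a throwing generator leaves the bookkeeping unchanged.

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Function;

class GeneratorOrderSketch {

    private final AtomicInteger size = new AtomicInteger();

    <K, V> V putSafely(K key, Function<K, V> generator) {
        // Generate first: if this throws, 'size' was never incremented,
        // so the cache bookkeeping stays consistent.
        V value = generator.apply(key);
        size.incrementAndGet();
        return value;
    }

    int size() {
        return size.get();
    }

    public static void main(String[] args) {
        GeneratorOrderSketch sketch = new GeneratorOrderSketch();
        try {
            sketch.putSafely("boom", k -> {
                throw new IllegalStateException("generator failed");
            });
        }
        catch (IllegalStateException ex) {
            // Expected: the generator failed before the size was touched.
        }
        System.out.println(sketch.size()); // prints 0: no phantom entry counted
    }
}
```

Incrementing the counter before calling the generator would leave `size` at 1 here with nothing actually cached.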
OK, understood.
- manage collection size manually (the size operation of concurrent collections can be heavy)
- check cache hit first (the size check is useless on a cache miss)
- reduce read-lock scope
- use `map.get` to test the cache instead of `queue.remove`

Signed-off-by: Kwangyong Kim <banana.yong@gmail.com>
I did try more improvements:

```java
import static java.util.Comparator.comparingLong;

import java.util.Collection;
import java.util.Collections;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

class ImprovedConcurrentLruCache<K, V> {

    private final int maxSize;

    private final ConcurrentHashMap<K, LruEntry<K, V>> cache = new ConcurrentHashMap<>();

    private final Function<K, V> generator;

    public ImprovedConcurrentLruCache(int maxSize, Function<K, V> generator) {
        this.maxSize = maxSize;
        this.generator = generator;
    }

    public V get(K key) {
        // Fast path: no lock on a cache hit.
        V cached = getValue(key);
        if (cached != null) {
            return cached;
        }
        synchronized (this.cache) {
            // Re-check: another thread may have generated the value already.
            cached = getValue(key);
            if (cached != null) {
                return cached;
            }
            if (this.cache.size() == this.maxSize) {
                // Evict the entry with the oldest access timestamp.
                Collection<LruEntry<K, V>> lruEntries = this.cache.values();
                LruEntry<K, V> eldestEntry = Collections.min(lruEntries, comparingLong(e -> e.lastAccess));
                this.cache.remove(eldestEntry.key);
            }
            V value = this.generator.apply(key);
            this.cache.put(key, new LruEntry<>(key, value));
            return value;
        }
    }

    private V getValue(K key) {
        LruEntry<K, V> cached = this.cache.get(key);
        if (cached == null) {
            return null;
        }
        // Skip the timestamp update while the cache is less than half full,
        // since no eviction can be near.
        return cached.getValue(this.cache.size() < this.maxSize / 2);
    }

    private static class LruEntry<K, V> {

        private final K key;

        private final V value;

        private volatile long lastAccess;

        LruEntry(K key, V value) {
            this.key = key;
            this.value = value;
            this.lastAccess = System.nanoTime();
        }

        V getValue(boolean skipAccess) {
            if (!skipAccess) {
                this.lastAccess = System.nanoTime();
            }
            return this.value;
        }
    }
}
```

Any feedback is welcome. Thank you.
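The `get` method above relies on a check/lock/re-check pattern. A hypothetical, self-contained sketch of just that pattern (the class and counter below are illustrative, not part of the PR) shows why the second check inside the lock matters: it guarantees the generator runs at most once per key even under contention.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Function;

class DoubleCheckSketch<K, V> {

    private final Map<K, V> cache = new ConcurrentHashMap<>();

    final AtomicInteger computations = new AtomicInteger();

    V get(K key, Function<K, V> generator) {
        V cached = cache.get(key); // fast path: no lock on a hit
        if (cached != null) {
            return cached;
        }
        synchronized (this.cache) {
            cached = cache.get(key); // re-check: another thread may have won the race
            if (cached != null) {
                return cached;
            }
            computations.incrementAndGet();
            V value = generator.apply(key); // computed at most once per key
            cache.put(key, value);
            return value;
        }
    }

    public static void main(String[] args) {
        DoubleCheckSketch<String, String> sketch = new DoubleCheckSketch<>();
        sketch.get("k", String::toUpperCase);
        sketch.get("k", String::toUpperCase); // hit: generator not called again
        System.out.println(sketch.computations.get()); // prints 1
    }
}
```

Without the re-check inside the `synchronized` block, two threads that both miss on the fast path would each invoke the generator for the same key.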
ConcurrentLruCache can be improved:

- use `map.get` to test the cache instead of `queue.remove`

And I think that using ConcurrentLruCache widely in Spring, instead of synchronized + LinkedHashMap, will improve performance.
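For reference, a minimal sketch of the `synchronized` + `LinkedHashMap` pattern mentioned above (illustrative, not code from Spring): an access-ordered `LinkedHashMap` with `removeEldestEntry` gives LRU semantics, but every operation goes through a single lock via `Collections.synchronizedMap`.

```java
import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.Map;

class SynchronizedLruSketch {

    static <K, V> Map<K, V> newLruMap(int maxSize) {
        // accessOrder = true: iteration order is least- to most-recently accessed.
        return Collections.synchronizedMap(new LinkedHashMap<K, V>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                // Evict the least-recently-accessed entry once the cap is exceeded.
                return size() > maxSize;
            }
        });
    }

    public static void main(String[] args) {
        Map<Integer, String> lru = newLruMap(2);
        lru.put(1, "a");
        lru.put(2, "b");
        lru.get(1);      // touch 1 so it becomes most recently used
        lru.put(3, "c"); // evicts 2, the least recently used entry
        System.out.println(lru.keySet()); // prints [1, 3]
    }
}
```

The single lock is exactly what makes this pattern a contention point on hot read paths, which is the motivation for a lock-free-read design like the one proposed in this PR.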