Use BitFaster.Caching LRU cache for QueryCache; Refactor common method implementations of ICache #159
@botinko
made the following study of LRU cache performance:

I checked the performance of LRU caches. What was tested:
There were also 3 types of load:

1. A rather large cache (20,000 entries), a large number of keys (1,000,000), and key frequencies distributed according to Zipf's law. This type of load is likely to be close to reality once the node id is no longer part of the key and the cache size is increased.
2. A small cache (256 entries), a large number of keys (1,000,000), and key frequencies distributed according to Zipf's law. This is the load we would see if the node id is removed from the key but the cache size is left at the default.
3. A small cache (256 entries), a large number of keys (1,000,000), and a uniform key distribution. This is the load we have now (because the node id is part of the key and the number of nodes is large).
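To show why the Zipf-distributed loads above behave so differently from the uniform one, here is a minimal, language-neutral sketch (Python rather than C#, purely for illustration) of sampling keys from a truncated Zipf distribution. The function name and parameters are hypothetical, not from the benchmark code:

```python
import bisect
import random
from collections import Counter

def make_zipf_sampler(n_keys, s=1.0):
    """Inverse-CDF sampler for a truncated Zipf distribution:
    P(key k) is proportional to 1 / k**s, for k = 1..n_keys."""
    weights = [1.0 / k ** s for k in range(1, n_keys + 1)]
    total = sum(weights)
    cdf, acc = [], 0.0
    for w in weights:
        acc += w / total
        cdf.append(acc)
    # Draw a uniform r in [0, 1) and find the first key whose CDF >= r.
    return lambda: bisect.bisect_left(cdf, random.random()) + 1

random.seed(42)
draw = make_zipf_sampler(1000)
counts = Counter(draw() for _ in range(10_000))
top10_share = sum(c for _, c in counts.most_common(10)) / 10_000
print(f"top-10 keys receive {top10_share:.0%} of requests")
```

Under such a skewed distribution a handful of hot keys dominates the traffic, so even a small LRU cache gets a high hit rate; under a uniform distribution over 1,000,000 keys a 256-entry cache misses almost always, which is why the third load type stresses the eviction path.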
I also compared the performance on .NET Core 3.1 and .NET 5.
So there are 18 tests in total. Each test is executed with 1, 2, 3 ... 16 threads to get a better understanding of how it behaves concurrently.
Tested on
Results:
Conclusions:
However, with 1-3 threads ConcurrentDictionary is better.
Thus I recommend BitFaster.Caching with the default ConcurrentDictionary as a replacement for the DataObjects LRU used as the QueryCache. This replacement will be useful even without removing the node id from the query key: it will solve the lock-convoy problem in the Query Cache. See the difference:
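The core behavior being recommended is a bounded, thread-safe cache with a get-or-add API, as BitFaster.Caching's `ConcurrentLru` provides in C#. As a language-neutral sketch (Python here, since the real library is .NET-only), a minimal version looks like this; note that the real `ConcurrentLru` is lock-free on the hot path, whereas this sketch uses one global lock:

```python
from collections import OrderedDict
from threading import Lock

class LruCache:
    """Minimal thread-safe bounded LRU with a get-or-add API.
    Illustrative only; not the BitFaster.Caching implementation."""

    def __init__(self, capacity):
        self._capacity = capacity
        self._items = OrderedDict()
        self._lock = Lock()

    def get_or_add(self, key, factory):
        with self._lock:
            if key in self._items:
                self._items.move_to_end(key)  # mark as most recently used
                return self._items[key]
            value = factory(key)
            self._items[key] = value
            if len(self._items) > self._capacity:
                self._items.popitem(last=False)  # evict least recently used
            return value

cache = LruCache(capacity=2)
cache.get_or_add("a", str.upper)
cache.get_or_add("b", str.upper)
cache.get_or_add("a", str.upper)   # "a" becomes most recently used
cache.get_or_add("c", str.upper)   # evicts "b", the least recently used
print(sorted(cache._items))        # ['a', 'c']
```

The key design point over a plain dictionary is the capacity bound: the cache cannot grow without limit under a large key space, and eviction happens incrementally on insert rather than via a periodic cleanup that threads must queue behind.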