remove the event cache initial size and refactor putInternal #4698
Conversation
Force-pushed from e19feac to 64c40ae
Just a thought: this whole thing is just to initialize the size for some long-lived maps. It seems to me the performance impact should be negligible, probably not even measurable, since the maps will quickly grow to their eventual size and stay around there. Why don't we just get rid of the whole thing and let Go manage the map size?
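For context on why the hint is dispensable: a Go map's capacity hint only pre-sizes the backing storage and never limits growth, so dropping it affects allocation behavior, not correctness. A minimal sketch (`fillMap` is a hypothetical helper, not code from this PR):

```go
package main

import "fmt"

// fillMap is a hypothetical helper showing that the capacity hint in
// make(map[K]V, hint) only pre-sizes the backing buckets; it never
// limits the map, which grows on demand either way.
func fillMap(hint, n int) map[int]struct{} {
	m := make(map[int]struct{}, hint)
	for i := 0; i < n; i++ {
		m[i] = struct{}{}
	}
	return m
}

func main() {
	withHint := fillMap(1024, 10000)
	noHint := fillMap(0, 10000)
	// Both maps end up identical; for a long-lived cache the one-time
	// growth cost of the unhinted map is amortized away.
	fmt.Println(len(withHint) == len(noHint)) // true
}
```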
Temporarily closed this PR; we will revisit the history cache in the future.
Force-pushed from 011e3a7 to d87805f
```diff
@@ -392,3 +420,116 @@ func TestCache_ItemHasCacheSizeDefined(t *testing.T) {
 	endWG.Wait()
 }
+
+func TestCache_ItemHasCacheSizeDefined_PutWithNewKeys(t *testing.T) {
```
I added more tests below for PutWithNewKey/PutWithSameKey and PutIfNotExistWithNewKey/PutIfNotExistWithSameKey.
Force-pushed from afa90c2 to 4406ef3
```diff
@@ -108,9 +108,8 @@ func NewEventsBlobCache(
 ) *XDCCacheImpl {
 	return &XDCCacheImpl{
 		cache: cache.New(util.Max(xdcMinCacheSize, maxBytes), &cache.Options{
-			InitialCapacity: xdcMinCacheSize,
```
xdc uses the same default value for the cache initial size and max size.
```diff
@@ -152,7 +152,7 @@ func New(maxSize int, opts *Options) Cache {
 
 	return &lru{
 		byAccess: list.New(),
-		byKey:    make(map[interface{}]*list.Element, opts.InitialCapacity),
+		byKey:    make(map[interface{}]*list.Element),
 		ttl:      opts.TTL,
 		maxSize:  maxSize,
```
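A minimal sketch of what the `lru` structure looks like after this change, assuming only the field names visible in the diff (`byAccess`, `byKey`, `maxSize`); the constructor and eviction logic here are illustrative, not the repository's actual implementation:

```go
package main

import (
	"container/list"
	"fmt"
)

// Simplified lru: byKey is created without a capacity hint; only
// maxSize (here an item count, for brevity) bounds the cache.
type lru struct {
	byAccess *list.List
	byKey    map[interface{}]*list.Element
	maxSize  int
}

type entry struct {
	key, value interface{}
}

func newLRU(maxSize int) *lru {
	return &lru{
		byAccess: list.New(),
		byKey:    make(map[interface{}]*list.Element), // no InitialCapacity
		maxSize:  maxSize,
	}
}

func (c *lru) Put(key, value interface{}) {
	if elem, ok := c.byKey[key]; ok {
		// Existing key: update in place and mark as most recently used.
		elem.Value.(*entry).value = value
		c.byAccess.MoveToFront(elem)
		return
	}
	if c.byAccess.Len() >= c.maxSize {
		// Evict the least recently used entry.
		oldest := c.byAccess.Back()
		c.byAccess.Remove(oldest)
		delete(c.byKey, oldest.Value.(*entry).key)
	}
	c.byKey[key] = c.byAccess.PushFront(&entry{key, value})
}

func (c *lru) Get(key interface{}) (interface{}, bool) {
	elem, ok := c.byKey[key]
	if !ok {
		return nil, false
	}
	c.byAccess.MoveToFront(elem)
	return elem.Value.(*entry).value, true
}

func main() {
	c := newLRU(2)
	c.Put("a", 1)
	c.Put("b", 2)
	c.Put("c", 3) // evicts "a"
	_, ok := c.Get("a")
	fmt.Println(ok) // false
}
```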
Not for this PR, but if I were writing code using this, I'd want to control both the max total size and the max element size separately, so that one element can't use the entire cache space. Just an idea for the future.
Agreed, I think that's a future improvement for the cache.
Looks good, just two tiny possible simplifications.
Force-pushed from eb9f9ee to f368b5c
What changed?
Remove the cache initial size from the LRU cache and related configs. Also refactor putInternal to better support different scenarios, and add more unit tests for cache size.
Why?
In #4621 I changed the history cache to use item bytes instead of item count. However, it accidentally changed the default number of keys allocated when the cache is initialized.
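To illustrate sizing by item bytes rather than item count: each cached item reports its own size, and the cache evicts when the byte sum would exceed the limit. The `sizedItem` interface and `CacheSize` method below are assumptions for this sketch, not the exact API:

```go
package main

import "fmt"

// sizedItem is a hypothetical interface for items that report their
// own size in bytes.
type sizedItem interface {
	CacheSize() int
}

type event struct{ payload []byte }

func (e *event) CacheSize() int { return len(e.payload) }

// totalSize sums item bytes; a byte-bounded cache evicts when this
// sum would exceed its max, regardless of how many items it holds.
func totalSize(items []sizedItem) int {
	total := 0
	for _, it := range items {
		total += it.CacheSize()
	}
	return total
}

func main() {
	items := []sizedItem{
		&event{payload: make([]byte, 100)},
		&event{payload: make([]byte, 250)},
	}
	fmt.Println(totalSize(items)) // 350
}
```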
How did you test it?
Unit tests.
Potential risks
Is hotfix candidate?