
HardMaxCacheSize exceeded #139

Closed · tetafro opened this issue Jun 17, 2019 · 11 comments

@tetafro

tetafro commented Jun 17, 2019

Sorry if this is a complete duplicate of #18, but that issue is 3 years old, and I'm seeing the same problem.

I even took the code from that issue. Here is my output:

$ go run .
2019/06/17 18:41:17 profile: memory profiling enabled (rate 4096), mem.pprof
Number of entries: 200000Alloc:      0 MB
Alloc:     46 MB
Alloc:     55 MB
Alloc:     78 MB
Alloc:     61 MB
Alloc:     87 MB
Alloc:     70 MB
Alloc:     50 MB
Alloc:     74 MB
Alloc:     57 MB
Alloc:     83 MB
2019/06/17 18:41:20 profile: memory profiling disabled, mem.pprof
$ go tool pprof mem.pprof
Type: inuse_space
Time: Jun 17, 2019 at 6:41pm (MSK)
Entering interactive mode (type "help" for commands, "o" for options)
(pprof) top
Showing nodes accounting for 34.85MB, 99.33% of 35.09MB total
Dropped 13 nodes (cum <= 0.18MB)
      flat  flat%   sum%        cum   cum%
   31.56MB 89.96% 89.96%    31.56MB 89.96%  github.com/allegro/bigcache/queue.NewBytesQueue
    3.29MB  9.37% 99.33%    34.86MB 99.36%  github.com/allegro/bigcache.initNewShard
         0     0% 99.33%    34.86MB 99.36%  github.com/allegro/bigcache.NewBigCache
         0     0% 99.33%    34.86MB 99.36%  github.com/allegro/bigcache.newBigCache
         0     0% 99.33%    35.08MB   100%  main.main
         0     0% 99.33%    35.08MB   100%  runtime.main

Quite unexpected for HardMaxCacheSize: 1.

@siennathesane
Collaborator

I think it's a config bug, both in your code and in ours. Things are removed from the cache (after eviction) at the CleanWindow frequency, which isn't set in your code, and we don't provide a default. Looking at this example, if you don't set a value for a time.Duration field, the default is 0. We document that a CleanWindow <= 0 means nothing happens. Because nothing is actually getting removed from the cache, only evicted, memory usage will grow based on the keys inserted.
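For illustration, a minimal sketch of the zero-value behaviour described above; the struct here is a stand-in, not bigcache's Config, and the `> 0` check mirrors how the clean-up ticker is gated:

	package main

	import (
		"fmt"
		"time"
	)

	// Stand-in for a config struct with an optional clean-up interval.
	type exampleConfig struct {
		CleanWindow time.Duration
	}

	func main() {
		var cfg exampleConfig            // CleanWindow left unset, so it is the zero value
		fmt.Println(cfg.CleanWindow)     // 0s
		fmt.Println(cfg.CleanWindow > 0) // false: no periodic clean-up would be scheduled
	}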

@janisz or @cristaloleg, can you confirm that is indeed the case?

@tetafro
Author

tetafro commented Jun 17, 2019

Yeah, sorry, I missed CleanWindow in the config. I fixed it:

config := Config{
	Shards:             256,
	CleanWindow:        1 * time.Second,
	LifeWindow:         10 * time.Second,
	MaxEntriesInWindow: entries,
	MaxEntrySize:       2 * valueSize,
	Verbose:            true,
	HardMaxCacheSize:   1,
}

and set repeats = 100, which makes the code run longer than CleanWindow. But I still get:

$ time go run .
2019/06/17 20:04:54 profile: memory profiling enabled (rate 4096), mem.pprof
Number of entries: %!(EXTRA int=200000)Alloc:      0 MB
Alloc:     46 MB
Alloc:     54 MB
Alloc:     79 MB
Alloc:     61 MB
...
Alloc:     87 MB
Alloc:     69 MB
Alloc:     52 MB
2019/06/17 20:05:16 profile: memory profiling disabled, mem.pprof

real	0m22.117s
user	0m22.646s
sys	0m0.279s
(pprof) top
Showing nodes accounting for 35.22MB, 99.52% of 35.39MB total
Dropped 16 nodes (cum <= 0.18MB)
      flat  flat%   sum%        cum   cum%
   31.88MB 90.07% 90.07%    31.88MB 90.07%  github.com/allegro/bigcache/queue.NewBytesQueue
    3.34MB  9.45% 99.52%    35.23MB 99.55%  github.com/allegro/bigcache.initNewShard
         0     0% 99.52%    35.23MB 99.55%  github.com/allegro/bigcache.NewBigCache
         0     0% 99.52%    35.23MB 99.55%  github.com/allegro/bigcache.newBigCache
         0     0% 99.52%    35.39MB   100%  main.main
         0     0% 99.52%    35.39MB   100%  runtime.main

@janisz
Collaborator

janisz commented Jun 18, 2019

What version of Go are you using?

@tetafro
Author

tetafro commented Jun 18, 2019

go version go1.12.5 darwin/amd64
bigcache v1.2.0

@janisz
Collaborator

janisz commented Jan 28, 2020

@tetafro it looks like the problem you mention is related to the initial shard size, which can be bigger than the hard max cache size. I think we should return an error in the BigCache init function, but this may break existing installations where users rely on smaller values to prevent the cache from growing.
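To put rough numbers on that with the config posted above (a sketch: minimumEntriesInShard is assumed to be 10, and MaxEntrySize is left symbolic since valueSize isn't shown in this thread):

	package main

	import "fmt"

	func main() {
		const (
			shards                = 256
			maxEntriesInWindow    = 200000
			hardMaxCacheSizeMB    = 1
			minimumEntriesInShard = 10 // assumed bigcache default
		)

		// Mirrors initialShardSize(): max(MaxEntriesInWindow/Shards, minimumEntriesInShard)
		initialShardSize := maxEntriesInWindow / shards
		if initialShardSize < minimumEntriesInShard {
			initialShardSize = minimumEntriesInShard
		}

		// Mirrors maximumShardSize(): HardMaxCacheSize in bytes divided across shards
		maxShardSizeBytes := hardMaxCacheSizeMB * 1024 * 1024 / shards

		fmt.Printf("initial queue capacity per shard: %d * MaxEntrySize bytes\n", initialShardSize) // 781 * MaxEntrySize
		fmt.Printf("per-shard hard limit:             %d bytes\n", maxShardSizeBytes)               // 4096
		// With 256 shards, any MaxEntrySize above ~5 bytes means the initial
		// allocation alone already exceeds the per-shard hard limit, which is
		// consistent with queue.NewBytesQueue accounting for tens of MB in the
		// profiles above.
	}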

@scottgerring

I've just discovered this myself and was surprised, given the description in the docs:

		// cache will not allocate more memory than this limit, value in MB
		// if value is reached then the oldest entries can be overridden for the new ones
		// 0 value means no size limit
		HardMaxCacheSize: 8192,

Would limiting the initial shard size to the maximum shard size here fix the problem?

bigcache/shard.go

Lines 306 to 312 in fcb069f

func initNewShard(config Config, callback onRemoveCallback, clock clock) *cacheShard {
	return &cacheShard{
		hashmap:      make(map[uint64]uint32, config.initialShardSize()),
		hashmapStats: make(map[uint64]uint32, config.initialShardSize()),
		entries:      *queue.NewBytesQueue(config.initialShardSize()*config.MaxEntrySize, config.maximumShardSize(), config.Verbose),
		entryBuffer:  make([]byte, config.MaxEntrySize+headersSizeInBytes),
		onRemove:     callback,

@janisz
Collaborator

janisz commented Jan 30, 2020

I think we should add validation. If the initial size is bigger than the max size, we should return an error, or just log it to keep backward compatibility.
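Something along these lines, as a sketch; validateHardMaxCacheSize is a hypothetical helper (not part of bigcache), written as if it lived in the bigcache package so it can use the unexported helpers quoted above, and assuming maximumShardSize() returns bytes as its use in NewBytesQueue suggests:

	// Hypothetical check, not part of bigcache; needs "fmt" imported.
	func validateHardMaxCacheSize(c Config) error {
		if c.HardMaxCacheSize <= 0 {
			return nil // no hard limit configured, nothing to validate
		}
		initialShardBytes := c.initialShardSize() * c.MaxEntrySize
		if initialShardBytes > c.maximumShardSize() {
			return fmt.Errorf("initial shard allocation (%d B) exceeds per-shard hard limit (%d B)",
				initialShardBytes, c.maximumShardSize())
		}
		return nil
	}

Whether this returns an error or only logs is exactly the backward-compatibility question.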

@scottgerring

Might be worth updating the fairly strong assertion in the README as well, to clarify when the hard limit won't be honored?

@janisz
Collaborator

janisz commented Jan 30, 2020

Sure, @scottgerring would you like to contribute a patch?

@leeransetton
Contributor

IMO this function

func (c Config) initialShardSize() int {
	return max(c.MaxEntriesInWindow/c.Shards, minimumEntriesInShard)
}

Should be something like:

func (c Config) initialShardSize() int {
	return max(min(c.MaxEntriesInWindow/c.Shards, c.maximumShardSize()), minimumEntriesInShard)
}
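One thing to double-check: initialShardSize already relies on a package-level max helper, but on Go 1.12 there is no built-in min for ints, so if the package doesn't already define one, a matching helper would be needed, e.g.:

	func min(a, b int) int {
		if a < b {
			return a
		}
		return b
	}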

wdyt @janisz @scottgerring ?

@janisz
Collaborator

janisz commented Jun 17, 2020

LGTM
