Memory leak when initializing multiple instances #97

Closed
zhangchiqing opened this issue Oct 31, 2019 · 4 comments

@zhangchiqing

I have the following test case that creates many cache instances. Somehow the memory cannot be garbage collected, so the test ends up using a lot of memory.

package main

import (
	"testing"

	"github.com/dgraph-io/ristretto"
	"github.com/stretchr/testify/require"
)

func TestMemLeak(t *testing.T) {
	type state struct {
		cache *ristretto.Cache
	}

	t.Run("test_cache", func(t *testing.T) {
		for i := 0; i < 10; i++ {
			engines := make([]*state, 0)
			for e := 0; e < 100; e++ {
				cache, err := ristretto.NewCache(&ristretto.Config{
					NumCounters: 1e7, // 10x num items on full cache
					MaxCost:     1e6, // 1 MB total size
					BufferItems: 64,  // recommended value
				})
				require.Nil(t, err)
				cache.Close()

				engines = append(engines, &state{
					// setting the cache to nil will stop the memory leak
					// cache: nil,
					cache: cache,
				})
			}
		}
	})
}

I measured the memory usage with /usr/bin/time like this:
/usr/bin/time -l go test mem_leak_test.go -v -count=1 -run=TestMemLeak

Running the above test reports a maximum resident set size of 8056582144 bytes, i.e. about 8 GB of memory was used.

However, if I set cache: nil, there is no memory leak: the maximum resident set size is 157466624 bytes, only about 150 MB.

@karlmcguire karlmcguire added the kind/bug Something is broken. label Oct 31, 2019
@karlmcguire
Contributor

Looks like cache.Close() is not sufficiently releasing memory.

@karlmcguire karlmcguire self-assigned this Oct 31, 2019
@awfm9

awfm9 commented Nov 1, 2019

Hi @karlmcguire, I work with @zhangchiqing, and some implicit behaviour made it hard to identify that the problem was on our side. However, going through the ristretto code, it's possible the documentation on the NumCounters parameter is inaccurate. Taking the bloom filter into account, I arrived at a memory usage of about 4x what the comment on that parameter claims (2 bytes per entry)?

@karlmcguire
Contributor

@awishformore That sounds about right. We have since added multiple counter "rows" (currently 4, which is standard) with seeds for a 2-3% hit ratio increase. I will update the documentation.
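
As a rough sanity check of those figures (my own back-of-the-envelope numbers, not from the ristretto docs): ~2 bytes per counter across 4 rows is roughly 8 bytes per configured counter, and the inner loop above keeps about 100 caches alive at once. Something like:

package main

import "fmt"

func main() {
	// Assumed figures, taken from this thread: ~2 bytes per counter
	// (as documented) times 4 counter rows, for the NumCounters: 1e7
	// config used in the test, with ~100 caches alive at once.
	const (
		numCounters     = 10_000_000
		bytesPerCounter = 2
		counterRows     = 4
		liveCaches      = 100
	)

	perCache := int64(numCounters) * bytesPerCounter * counterRows
	total := perCache * liveCaches

	fmt.Printf("per cache: ~%d MB\n", perCache/(1<<20)) // ~76 MB
	fmt.Printf("total:     ~%d GB\n", total/(1<<30))    // ~7 GB
}

That lands close to the ~8 GB maximum resident set size measured above, which suggests the usage is expected for NumCounters: 1e7 rather than a leak.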

@zhangchiqing
Author

Thanks @awishformore and @karlmcguire.

It seems there was no leak after all: ~8 GB was the actual amount of memory needed for that config. The 150 MB case was simply Go's GC reclaiming each cache right away, because with cache: nil the cache instance never escapes its local scope.
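
To make that concrete, here is a minimal sketch (not part of the original test; heapMB is just a helper I made up for illustration) showing the difference reachability makes, assuming the same config as above:

package main

import (
	"fmt"
	"runtime"

	"github.com/dgraph-io/ristretto"
)

// heapMB forces a GC and reports the live heap in MB.
func heapMB() uint64 {
	runtime.GC()
	var m runtime.MemStats
	runtime.ReadMemStats(&m)
	return m.HeapAlloc / (1 << 20)
}

func main() {
	retained := make([]*ristretto.Cache, 0, 100)

	for e := 0; e < 100; e++ {
		cache, err := ristretto.NewCache(&ristretto.Config{
			NumCounters: 1e7, // same config as the test above
			MaxCost:     1e6,
			BufferItems: 64,
		})
		if err != nil {
			panic(err)
		}
		cache.Close()

		// Keeping the pointer makes the cache (and its counters) reachable,
		// so the live heap keeps growing. Skip this append and the heap
		// stays small, because the GC can reclaim each cache as soon as it
		// falls out of scope.
		retained = append(retained, cache)
	}

	fmt.Printf("live heap with %d retained caches: ~%d MB\n", len(retained), heapMB())
}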

Closing now.
