
runtime: Go routine and writer memory not being released #32124

Open
CyberNinja89 opened this issue May 18, 2019 · 5 comments

Comments

Projects
None yet
3 participants
@CyberNinja89
Copy link

commented May 18, 2019

What version of Go are you using ( go version )?

$ go version
go version go1.12.5 darwin/amd64

What did you do?

https://github.com/CyberNinja89/leaky-parser
This repository contains a small project demonstrating a scenario where memory is not released after worker goroutines finish their reading and writing tasks. Explicit buffer flushes, file Sync() and Close() calls, and channel closes are all performed, but memory is still held.

What did you expect to see?

After leaving the scope of the function, memory usage should return to the level it was at before the function call.

What did you see instead?

Within a Kubernetes environment, this application consumes the majority of the node's memory (8 GB) according to Grafana. The application continues to run at high memory levels (>2 GB) until the function finishes. At that point memory usage slowly declines, but over time it stabilizes after releasing only about 20-30%.

In a few tests with Kubernetes, we commented out the os.Write (tried bufio.Write too) in the writer channels. Memory usage held steady below 200 MB, but at scale, at most 50% of memory was released afterwards.

Locally on a Mac, the application's memory according to Activity Monitor never rose above 20 MB, but the memory for cached files kept growing until it reached the MacBook Pro's capacity. After processing the files, the cached-file memory stayed at that level until the application was terminated.

Each test ran with a thousand workers (goroutines) for reading and encrypting/decrypting.

System details

Kubernetes

kubectl version
Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.5", GitCommit:"51dd616cdd25d6ee22c83a858773b607328a18ec", GitTreeState:"clean", BuildDate:"2019-01-16T18:24:45Z", GoVersion:"go1.10.7", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.5", GitCommit:"51dd616cdd25d6ee22c83a858773b607328a18ec", GitTreeState:"clean", BuildDate:"2019-01-16T18:14:49Z", GoVersion:"go1.10.7", Compiler:"gc", Platform:"linux/amd64"}

Local Machine

go version go1.12.5 darwin/amd64
GOARCH="amd64"
GOBIN=""
GOCACHE="/Users/*****/Library/Caches/go-build"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="darwin"
GOOS="darwin"
GOPATH="/Users/*****/Projects/gocode"
GOPROXY=""
GORACE=""
GOROOT="/usr/local/opt/go/libexec"
GOTMPDIR=""
GOTOOLDIR="/usr/local/opt/go/libexec/pkg/tool/darwin_amd64"
GCCGO="gccgo"
CC="clang"
CXX="clang++"
CGO_ENABLED="1"
GOMOD="/dev/null"
GOROOT/bin/go version: go version go1.12.5 darwin/amd64
GOROOT/bin/go tool compile -V: compile version go1.12.5
uname -v: Darwin Kernel Version 18.5.0: Mon Mar 11 20:40:32 PDT 2019; root:xnu-4903.251.3~3/RELEASE_X86_64
ProductName:	Mac OS X
ProductVersion:	10.14.4
BuildVersion:	18E226
lldb --version: lldb-1000.11.38.2
  Swift-4.2
@bcmills

Member

commented May 20, 2019

In a few tests with kubernetes, we have commented out the os.Write (tried bufio.Write too) in the writer channels.

There is no os.Write function. Do you mean the call to w.Write?

Memory usage held steadily below 200 MB, but at scale, memory being released afterwards was at most 50%.

That sounds more-or-less as expected: search for GOGC in https://golang.org/pkg/runtime/, and see also debug.FreeOSMemory.

Locally on Mac, the application memory according to the activity monitor never reached above 20MB, but the memory for cached files kept growing until it reached capacity of the macbook pro.

That sounds like a reasonable behavior for a buffered filesystem.

@bcmills

Member

commented May 20, 2019

None of the described behaviors seem at odds with the documented behavior of the Go runtime. Could you give some more detail about what other outcome you expected and why?

@CyberNinja89

Author

commented May 21, 2019

There is no os.Write function. Do you mean the call to w.Write?

You are correct; the current commit I had pushed was my attempt with bufio.Write(). I was stating that we had previously used os.Write().

That sounds more-or-less as expected: search for GOGC in https://golang.org/pkg/runtime/, and see also debug.FreeOSMemory.

I'll run more memory tests with GOGC, but I neglected to state above that we had already tested with debug.FreeOSMemory.

Here are some Grafana snapshots from the tests I ran this morning with some of the things you suggested.

Test 1: Current code base from leaky-parser
[Grafana screenshot: memory usage]

Test 2: Adding debug.FreeOSMemory on L#98 and L#115
[Grafana screenshot: memory usage]

Test 3: Instead of using bufio.Writer, I switched to os.Write() on L#265 and os.Sync() instead of w.Flush() on L#271
[Grafana screenshot: memory usage]

Test 4: Using os.Write() from Test 3 with debug.FreeOSMemory similar to Test 2
[Grafana screenshot: memory usage]

What I was expecting to see was memory being reclaimed after the channels and files are closed, especially after decryption, not for it to still be hanging around after 10 minutes.

@bcmills

Member

commented May 22, 2019

@bcmills bcmills removed the WaitingForInfo label May 22, 2019

@bcmills bcmills added this to the Go1.14 milestone May 22, 2019

@mknyszek

Contributor

commented May 22, 2019

Before Go 1.13, on Darwin we call MADV_FREE on unused memory to free it. This means the memory is free for the OS to take, but it will not reduce the reported RSS unless the OS actually takes it back. The behavior you're seeing is not unexpected, and if you apply memory pressure to the system you should see the reported RSS drop.

That being said, in Go 1.13 we use a Darwin-specific madvise flag to return memory to the OS, which immediately updates the reported RSS. See #29844 for more details.

Also in Go 1.13, we scavenge memory much more aggressively (#30333), so you will no longer need to wait minutes for memory to start being returned to the OS.

Please try running this with Go built at tip. I think this problem should already be solved for Go 1.13.
