
runtime: leaking nested goroutines #16022

Closed
nuivall opened this issue Jun 9, 2016 · 6 comments

@nuivall

commented Jun 9, 2016

Hello,

  1. What version of Go are you using (go version)?
    go version devel +09eedc3 Wed Jun 8 05:24:53 2016 +0000 darwin/amd64
    Also tested on 1.5 and 1.6, with the same results.
  2. What did you do?
    https://play.golang.org/p/ATENOgjVp- (see the sketch after this comment)
  3. What did you expect to see?
    The program waits forever while holding only a small amount of memory.
  4. What did you see instead?
    The program holds ~1.3GB of memory. Forcing a GC at Point B changes nothing, while a GC at Point A brings usage down to only 22MB.

go tool pprof http://localhost:6060/debug/pprof/heap reports less memory usage but still indicates a leak: it shows nearly all memory being held by descendants of runtime.newproc1.

I'm happy to answer any questions, or to hear why the observed behaviour is expected :)
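
For readers who can't load the playground link, here is a minimal sketch of the kind of program described in this thread. The link's exact contents aren't reproduced here, so the nesting structure and the precise locations of Points A and B are assumptions based on the discussion; the 600,000 count comes from a reply below.

```go
// Hypothetical reconstruction of the reproducer; the real playground
// code may differ. It spawns many nested goroutines, with two candidate
// spots for a forced GC (Point A and Point B), then sleeps forever so
// memory usage can be inspected.
package main

import (
	"fmt"
	"runtime"
	"sync"
	"time"
)

func main() {
	var wg sync.WaitGroup
	for i := 0; i < 600000; i++ {
		wg.Add(1)
		go func() { // outer goroutine
			go func() { // nested goroutine
				defer wg.Done()
				fmt.Sprint("work") // stand-in for the real body
			}()
		}()
		// runtime.GC() // Point A (assumed): forcing GC here keeps usage low
	}
	wg.Wait()
	runtime.GC()          // Point B (assumed): forcing GC here changes nothing
	time.Sleep(time.Hour) // "waits forever" while memory is observed
}
```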

@ianlancetaylor ianlancetaylor changed the title gc: leaking nested goroutines runtime: leaking nested goroutines Jun 9, 2016

@ianlancetaylor

Contributor

commented Jun 9, 2016

When you say that the program holds ~1.3GB of memory, how are you measuring that?

Have you tried calling https://golang.org/pkg/runtime/debug/#FreeOSMemory ?
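
The measurement question matters because the OS-visible RSS and the live Go heap can differ a lot. A small sketch (not part of the original report) of how runtime.ReadMemStats separates memory the runtime still holds from memory it has returned to the OS:

```go
// Sketch: distinguishing live Go objects from memory merely retained by
// the runtime. RSS as seen by the OS is roughly Sys minus HeapReleased.
package main

import (
	"fmt"
	"runtime"
)

func main() {
	var m runtime.MemStats
	runtime.ReadMemStats(&m)
	fmt.Printf("HeapAlloc    = %d MB (live Go objects)\n", m.HeapAlloc>>20)
	fmt.Printf("HeapSys      = %d MB (heap obtained from the OS)\n", m.HeapSys>>20)
	fmt.Printf("HeapReleased = %d MB (returned to the OS)\n", m.HeapReleased>>20)
	fmt.Printf("Sys          = %d MB (total obtained from the OS)\n", m.Sys>>20)
}
```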

@randall77

Contributor

commented Jun 9, 2016

I can reproduce this on Linux with GOMAXPROCS=1 on go1.6.

What is happening is that you are spawning 600,000 goroutines. Depending on how the scheduler behaves, if all are spawned before any finish, you'll use up to ~1.3GB (~2K per goroutine). If goroutines finish faster than you spawn them, then memory usage will be quite low. That's why GOMAXPROCS matters, and why the GC at point A affects things.

When you get to the sleep, Go is not using the ~1.3GB anymore. But it hasn't given that memory back to the OS yet. Wait 5 minutes or call debug.FreeOSMemory.
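
A sketch of that suggestion: debug.FreeOSMemory forces a garbage collection and then returns as much memory to the OS as possible, instead of waiting for the background scavenger. spawnWorkload is a hypothetical stand-in for the reproducer above.

```go
// Sketch: returning freed memory to the OS immediately rather than
// waiting the several minutes the background scavenger normally takes.
package main

import (
	"runtime/debug"
	"time"
)

func main() {
	spawnWorkload()       // hypothetical stand-in for the reproducer
	debug.FreeOSMemory()  // force a GC, then return freed spans to the OS
	time.Sleep(time.Hour) // RSS should now reflect actual usage
}

func spawnWorkload() {} // placeholder; see the reproducer sketch above
```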

@adg

Contributor

commented Jun 10, 2016

I believe this is the expected behavior and that this issue should be closed.

@nuivall

Author

commented Jun 10, 2016

Thank you for your responses! I tried FreeOSMemory (by the way, a small nit in the code I posted: I should have wrapped fmt.Println in a function), I waited 20 minutes, and I had previously even forced the system to swap. Nothing changed. I understand that Go likes to stash memory for later, but how can this pprof output be explained? Why does it report 168MB in the runtime.malg 384B bucket, when /debug/pprof/goroutine shows only 6 goroutines?

[screenshot: go tool pprof heap output showing 168MB attributed to the runtime.malg 384B bucket]
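
The /debug/pprof endpoints referenced in this thread come from the standard net/http/pprof package; a minimal sketch of that setup, assuming it matches the reporter's:

```go
// Sketch: exposing the /debug/pprof/ endpoints used in this thread.
// Importing net/http/pprof for its side effects registers handlers on
// http.DefaultServeMux, including /debug/pprof/heap and
// /debug/pprof/goroutine.
package main

import (
	"log"
	"net/http"
	_ "net/http/pprof" // registers the /debug/pprof/ handlers
)

func main() {
	log.Fatal(http.ListenAndServe("localhost:6060", nil))
}
```

The heap profile can then be inspected with go tool pprof http://localhost:6060/debug/pprof/heap, as in the original report.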

@randall77

Contributor

commented Jun 11, 2016

The allocations from malg (the goroutine allocator) are for goroutine descriptors. Goroutine descriptors are never freed; we keep a free list of them around. That is issue #9869.
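
A sketch illustrating the effect described above: after a burst of goroutines exits, the live goroutine count drops back to normal, yet a heap profile would still attribute memory to runtime.malg, because the descriptors are kept on a free list rather than freed (issue #9869).

```go
// Sketch: goroutine descriptors allocated by runtime.malg survive the
// goroutines themselves. After the burst below, NumGoroutine drops back
// to ~1, but heap memory retained for descriptors does not shrink.
package main

import (
	"fmt"
	"runtime"
	"sync"
)

func main() {
	var wg sync.WaitGroup
	for i := 0; i < 100000; i++ {
		wg.Add(1)
		go func() { wg.Done() }()
	}
	wg.Wait()
	runtime.GC()
	fmt.Println("live goroutines:", runtime.NumGoroutine()) // back to ~1

	var m runtime.MemStats
	runtime.ReadMemStats(&m)
	fmt.Printf("HeapInuse = %d MB (includes retained descriptors)\n", m.HeapInuse>>20)
}
```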

@quentinmit

Contributor

commented Jun 17, 2016

I'm going to close this as a dupe of #9869; that does sound like this problem.

@quentinmit quentinmit closed this Jun 17, 2016

@golang golang locked and limited conversation to collaborators Jun 17, 2017
