
runtime: memory not being returned to OS #22439

Closed
randall77 opened this issue Oct 25, 2017 · 24 comments

@randall77 (Contributor) commented Oct 25, 2017

package main

import (
	"bytes"
	"fmt"
	"io/ioutil"
	"math/rand"
	"runtime"
	"time"
)

func main() {
	// Spawn a goroutine every millisecond; each one builds a large
	// buffer and writes it to a file.
	for i := 0; i < 1000; i++ {
		time.Sleep(time.Millisecond)
		go fakeGetAndSaveData()
	}
	// Force a collection, then idle so the scavenger has time to
	// return freed memory to the OS.
	runtime.GC()
	time.Sleep(10 * time.Minute)
}

func fakeGetAndSaveData() {
	// Build roughly 700KB of text in memory.
	var buf bytes.Buffer
	for i := 0; i < 40000; i++ {
		buf.WriteString(fmt.Sprintf("the number is %d\n", i))
	}
	ioutil.WriteFile(fmt.Sprintf("%d.txt", rand.Int()), buf.Bytes(), 0644)
}

Copied from issue #9869.

The reporter thinks the memory used by this program is not returned to the OS during the final 10-minute sleep.

On Linux I'm using the top -b | grep memoryLeak command, and on Windows I use Task Manager. I just ran this again on Ubuntu 16.04 and it looks like it isn't exhibiting the problem after all. When I ran it yesterday it crashed my VM, but that wasn't due to running out of memory, as I'd assumed. I do, however, see the problem clearly on Windows. Sorry, I'll put together a different example for Linux.

Here's what I see on Windows. This is after it's gone idle. Nothing new is being spawned. It's just sitting there. It stays like this for 10 minutes until the program closes.

[screenshot: Windows Task Manager showing the process's memory usage while idle]

@randall77 (Contributor, Author) commented Oct 25, 2017

@6degreeshealth
Please try and run with the environment variable GODEBUG set to gctrace=1. Let it run to completion.
Grab all the lines starting with scvg and post them here.

My current suspicion is that we're actually returning pages to the OS, but it isn't taking them. We've seen this before, where the OS doesn't actually remove the pages from the process unless there is demand for them elsewhere in the system.

A scavenger trace (the scvg lines) will let us know for sure one way or the other.
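For reference, here is one way to capture that trace on Linux (assuming the repro is built as memoryLeak, the binary name the reporter used; the runtime writes the trace to stderr). On Windows, set GODEBUG in the environment before launching the executable:

```shell
# Build the repro and run it with GC/scavenger tracing enabled.
# The runtime writes the trace to stderr.
go build -o memoryLeak main.go
GODEBUG=gctrace=1 ./memoryLeak 2> trace.log

# Pull out just the scavenger lines once it exits.
grep '^scvg' trace.log
```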

@alexbrainman (Member) commented Oct 26, 2017

Please try and run with the environment variable GODEBUG set to gctrace=1. Let it run to completion.
Grab all the lines starting with scvg and post them here.

@randall77 I replaced your 10-minute sleep with

	time.Sleep(5 * time.Minute)
	fmt.Printf("5 minutes passed\n")
	time.Sleep(5 * time.Minute)

and I am posting the whole output:

gc 1 @0.006s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 4->4->1 MB, 5 MB goal, 4 P
gc 2 @0.010s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 4->4->1 MB, 5 MB goal, 4 P
gc 3 @0.012s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 4->4->2 MB, 5 MB goal, 4 P
gc 4 @0.014s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 5->5->4 MB, 6 MB goal, 4 P
gc 5 @0.019s 0%: 0+1.0+0 ms clock, 0+0/0/0+0 ms cpu, 8->8->4 MB, 9 MB goal, 4 P
gc 6 @0.024s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 8->8->5 MB, 9 MB goal, 4 P
gc 7 @0.027s 0%: 0.39+0.12+0 ms clock, 1.1+0.12/0.12/0+0 ms cpu, 9->9->6 MB, 10 MB goal, 4 P
gc 8 @0.035s 0%: 0.41+0.004+0 ms clock, 1.2+0/0/0+0 ms cpu, 13->14->4 MB, 14 MB goal, 4 P
gc 9 @0.038s 0%: 0+1.0+0 ms clock, 0+1.0/1.0/0+0 ms cpu, 9->9->5 MB, 10 MB goal, 4 P
gc 10 @0.042s 0%: 0+0+1.0 ms clock, 0+0/0/0+2.0 ms cpu, 9->9->3 MB, 10 MB goal, 4 P
gc 11 @0.046s 0%: 0.91+0.017+0 ms clock, 2.7+0/0.017/0+0 ms cpu, 6->6->4 MB, 7 MB goal, 4 P
gc 12 @0.049s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 8->8->5 MB, 9 MB goal, 4 P
gc 13 @0.054s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 11->11->7 MB, 12 MB goal, 4 P
gc 14 @0.063s 0%: 0+0+3.5 ms clock, 0+0/0/0+14 ms cpu, 15->15->3 MB, 16 MB goal, 4 P
gc 15 @0.068s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 7->7->4 MB, 8 MB goal, 4 P
gc 16 @0.074s 0%: 0+0.50+0 ms clock, 0+1.5/0.50/0+0 ms cpu, 9->9->2 MB, 10 MB goal, 4 P
gc 17 @0.075s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 4->4->3 MB, 5 MB goal, 4 P
gc 18 @0.078s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 5->5->3 MB, 6 MB goal, 4 P
gc 19 @0.081s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 6->6->4 MB, 7 MB goal, 4 P
gc 20 @0.084s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 7->7->4 MB, 8 MB goal, 4 P
gc 21 @0.087s 0%: 0+0.99+0 ms clock, 0+2.9/0.99/0+0 ms cpu, 7->8->4 MB, 9 MB goal, 4 P
gc 22 @0.090s 0%: 1.0+0+0 ms clock, 4.0+0/0/0+0 ms cpu, 8->8->5 MB, 9 MB goal, 4 P
gc 23 @0.095s 0%: 1.0+0+0 ms clock, 3.0+0/0/0+0 ms cpu, 9->9->6 MB, 10 MB goal, 4 P
gc 24 @0.100s 0%: 0+0.73+0 ms clock, 0+0.007/0.73/0+0 ms cpu, 11->11->7 MB, 12 MB goal, 4 P
gc 25 @0.107s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 14->14->4 MB, 15 MB goal, 4 P
gc 26 @0.113s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 8->8->3 MB, 9 MB goal, 4 P
gc 27 @0.116s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 6->6->3 MB, 7 MB goal, 4 P
gc 28 @0.119s 0%: 0.69+0.067+0 ms clock, 2.0+0/0/0+0 ms cpu, 7->7->3 MB, 8 MB goal, 4 P
gc 29 @0.123s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 8->8->4 MB, 9 MB goal, 4 P
gc 30 @0.127s 0%: 0+0.99+0 ms clock, 0+0/0/0+0 ms cpu, 8->8->3 MB, 9 MB goal, 4 P
gc 31 @0.131s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 6->6->1 MB, 7 MB goal, 4 P
gc 32 @0.134s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 4->4->3 MB, 5 MB goal, 4 P
gc 33 @0.135s 0%: 0+0+1.0 ms clock, 0+0/0/0+3.0 ms cpu, 5->5->4 MB, 6 MB goal, 4 P
gc 34 @0.141s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 8->10->6 MB, 9 MB goal, 4 P
gc 35 @0.146s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 11->11->0 MB, 12 MB goal, 4 P
gc 36 @0.150s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 4->4->1 MB, 5 MB goal, 4 P
gc 37 @0.152s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 4->4->2 MB, 5 MB goal, 4 P
gc 38 @0.154s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 4->4->2 MB, 5 MB goal, 4 P
gc 39 @0.157s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 5->5->3 MB, 6 MB goal, 4 P
gc 40 @0.158s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 7->7->4 MB, 8 MB goal, 4 P
gc 41 @0.163s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 9->9->4 MB, 10 MB goal, 4 P
gc 42 @0.169s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 9->9->3 MB, 10 MB goal, 4 P
gc 43 @0.172s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 7->7->5 MB, 8 MB goal, 4 P
gc 44 @0.176s 0%: 0.50+0.12+0 ms clock, 2.0+0/0/0+0 ms cpu, 10->10->6 MB, 11 MB goal, 4 P
gc 45 @0.182s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 12->12->7 MB, 13 MB goal, 4 P
gc 46 @0.190s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 15->15->4 MB, 16 MB goal, 4 P
gc 47 @0.194s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 9->9->5 MB, 10 MB goal, 4 P
gc 48 @0.198s 0%: 1.0+0+0 ms clock, 3.0+0/0/0+0 ms cpu, 9->9->3 MB, 10 MB goal, 4 P
gc 49 @0.203s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 7->7->5 MB, 8 MB goal, 4 P
gc 50 @0.206s 0%: 0+1.0+0 ms clock, 0+0/0/0+0 ms cpu, 9->9->6 MB, 10 MB goal, 4 P
gc 51 @0.212s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 12->12->6 MB, 13 MB goal, 4 P
gc 52 @0.217s 0%: 0+0+1.0 ms clock, 0+0/0/0+4.0 ms cpu, 12->12->6 MB, 13 MB goal, 4 P
gc 53 @0.223s 0%: 1.0+0+0 ms clock, 3.0+0/0/0+0 ms cpu, 11->11->6 MB, 12 MB goal, 4 P
gc 54 @0.229s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 12->12->6 MB, 13 MB goal, 4 P
gc 55 @0.233s 0%: 1.0+0+0 ms clock, 3.0+0/0/0+0 ms cpu, 11->11->4 MB, 12 MB goal, 4 P
gc 56 @0.239s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 9->9->3 MB, 10 MB goal, 4 P
gc 57 @0.242s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 7->7->4 MB, 8 MB goal, 4 P
gc 58 @0.245s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 7->7->4 MB, 8 MB goal, 4 P
gc 59 @0.249s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 8->8->4 MB, 9 MB goal, 4 P
gc 60 @0.253s 0%: 1.0+0+0 ms clock, 2.0+0/0/0+0 ms cpu, 10->10->4 MB, 11 MB goal, 4 P
gc 61 @0.257s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 7->7->3 MB, 8 MB goal, 4 P
gc 62 @0.260s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 6->6->4 MB, 7 MB goal, 4 P
gc 63 @0.264s 0%: 0.50+0+0 ms clock, 2.0+0/0/0+0 ms cpu, 8->9->3 MB, 9 MB goal, 4 P
gc 64 @0.268s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 7->7->2 MB, 8 MB goal, 4 P
gc 65 @0.270s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 5->5->4 MB, 6 MB goal, 4 P
gc 66 @0.273s 0%: 0+0.50+0 ms clock, 0+0/0/0+0 ms cpu, 8->8->5 MB, 9 MB goal, 4 P
gc 67 @0.279s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 11->11->5 MB, 12 MB goal, 4 P
gc 68 @0.285s 0%: 0+1.0+0 ms clock, 0+0/0/0+0 ms cpu, 10->12->5 MB, 13 MB goal, 4 P
gc 69 @0.289s 0%: 0+1.0+0 ms clock, 0+2.0/1.0/0+0 ms cpu, 11->11->7 MB, 12 MB goal, 4 P
gc 70 @0.296s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 12->12->3 MB, 14 MB goal, 4 P
gc 71 @0.299s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 7->7->4 MB, 8 MB goal, 4 P
gc 72 @0.304s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 8->8->2 MB, 9 MB goal, 4 P
gc 73 @0.305s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 4->4->3 MB, 5 MB goal, 4 P
gc 74 @0.310s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 7->7->2 MB, 8 MB goal, 4 P
gc 75 @0.311s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 5->5->4 MB, 6 MB goal, 4 P
gc 76 @0.316s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 9->9->5 MB, 10 MB goal, 4 P
gc 77 @0.321s 0%: 0+0.55+0 ms clock, 0+0/0.004/0+0 ms cpu, 10->10->4 MB, 11 MB goal, 4 P
gc 78 @0.325s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 10->10->7 MB, 11 MB goal, 4 P
gc 79 @0.331s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 14->14->4 MB, 15 MB goal, 4 P
gc 80 @0.336s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 9->9->3 MB, 10 MB goal, 4 P
gc 81 @0.339s 0%: 0.86+0.12+0 ms clock, 3.4+0/0.12/0+0 ms cpu, 7->7->4 MB, 8 MB goal, 4 P
gc 82 @0.343s 0%: 0.34+0.018+0 ms clock, 1.3+0/0/0+0 ms cpu, 9->9->4 MB, 10 MB goal, 4 P
gc 83 @0.348s 0%: 1.0+0+0 ms clock, 4.0+0/0/0+0 ms cpu, 8->8->3 MB, 9 MB goal, 4 P
gc 84 @0.352s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 7->7->5 MB, 8 MB goal, 4 P
gc 85 @0.356s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 9->9->2 MB, 10 MB goal, 4 P
gc 86 @0.358s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 4->4->2 MB, 5 MB goal, 4 P
gc 87 @0.361s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 5->5->3 MB, 6 MB goal, 4 P
gc 88 @0.364s 0%: 0+0.50+0 ms clock, 0+0.50/0.50/0+0 ms cpu, 6->6->4 MB, 7 MB goal, 4 P
gc 89 @0.367s 0%: 0.50+0+0 ms clock, 1.0+0/0/0+0 ms cpu, 8->8->4 MB, 9 MB goal, 4 P
gc 90 @0.372s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 8->8->3 MB, 9 MB goal, 4 P
gc 91 @0.375s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 7->7->5 MB, 8 MB goal, 4 P
gc 92 @0.380s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 11->11->7 MB, 12 MB goal, 4 P
gc 93 @0.386s 0%: 0.65+0.11+0 ms clock, 2.6+0/0/0+0 ms cpu, 14->14->7 MB, 15 MB goal, 4 P
gc 94 @0.393s 0%: 0+0+1.0 ms clock, 0+0/0/0+3.0 ms cpu, 14->14->4 MB, 15 MB goal, 4 P
gc 95 @0.397s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 8->8->3 MB, 9 MB goal, 4 P
gc 96 @0.401s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 7->7->2 MB, 8 MB goal, 4 P
gc 97 @0.403s 0%: 0+0+1.0 ms clock, 0+0/0/0+3.0 ms cpu, 4->4->3 MB, 5 MB goal, 4 P
gc 98 @0.406s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 5->5->4 MB, 6 MB goal, 4 P
gc 99 @0.409s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 9->9->5 MB, 10 MB goal, 4 P
gc 100 @0.415s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 11->11->5 MB, 12 MB goal, 4 P
gc 101 @0.421s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 10->10->4 MB, 11 MB goal, 4 P
gc 102 @0.426s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 9->9->6 MB, 10 MB goal, 4 P
gc 103 @0.432s 0%: 1.0+0+0 ms clock, 3.0+0/0/0+0 ms cpu, 13->13->6 MB, 14 MB goal, 4 P
gc 104 @0.438s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 13->13->7 MB, 14 MB goal, 4 P
gc 105 @0.443s 0%: 0+1.0+0 ms clock, 0+0/0/0+0 ms cpu, 13->13->4 MB, 14 MB goal, 4 P
gc 106 @0.448s 0%: 0+0+1.0 ms clock, 0+0/0/0+4.0 ms cpu, 9->9->3 MB, 10 MB goal, 4 P
gc 107 @0.453s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 8->8->3 MB, 9 MB goal, 4 P
gc 108 @0.456s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 5->5->3 MB, 6 MB goal, 4 P
gc 109 @0.459s 0%: 0+0+1.0 ms clock, 0+0/0/0+3.0 ms cpu, 6->6->2 MB, 7 MB goal, 4 P
gc 110 @0.461s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 5->5->3 MB, 6 MB goal, 4 P
gc 111 @0.465s 0%: 0+0.50+0 ms clock, 0+0/0/0+0 ms cpu, 6->6->2 MB, 7 MB goal, 4 P
gc 112 @0.468s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 6->6->4 MB, 7 MB goal, 4 P
gc 113 @0.472s 0%: 0+0.15+0 ms clock, 0+0/0.15/0+0 ms cpu, 8->8->5 MB, 9 MB goal, 4 P
gc 114 @0.477s 0%: 0.51+0.062+0 ms clock, 2.0+0/0/0+0 ms cpu, 11->11->4 MB, 12 MB goal, 4 P
gc 115 @0.481s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 9->9->6 MB, 10 MB goal, 4 P
gc 116 @0.487s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 12->13->6 MB, 13 MB goal, 4 P
gc 117 @0.493s 0%: 0+1.0+0 ms clock, 0+2.0/1.0/0+0 ms cpu, 13->13->4 MB, 14 MB goal, 4 P
gc 118 @0.498s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 8->8->2 MB, 9 MB goal, 4 P
gc 119 @0.501s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 4->4->2 MB, 5 MB goal, 4 P
gc 120 @0.502s 0%: 0+1.0+0 ms clock, 0+2.0/1.0/0+0 ms cpu, 5->5->4 MB, 6 MB goal, 4 P
gc 121 @0.505s 0%: 0+1.0+0 ms clock, 0+0/0/0+0 ms cpu, 7->7->4 MB, 8 MB goal, 4 P
gc 122 @0.509s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 9->9->7 MB, 10 MB goal, 4 P
gc 123 @0.515s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 13->13->6 MB, 14 MB goal, 4 P
gc 124 @0.520s 0%: 0.061+0.14+0 ms clock, 0.24+0/0/0+0 ms cpu, 13->13->9 MB, 14 MB goal, 4 P
gc 125 @0.528s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 18->18->9 MB, 19 MB goal, 4 P
gc 126 @0.538s 0%: 0+1.0+0 ms clock, 0+2.0/1.0/0+0 ms cpu, 19->19->2 MB, 20 MB goal, 4 P
gc 127 @0.540s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 4->4->2 MB, 5 MB goal, 4 P
gc 128 @0.543s 0%: 0+1.0+0 ms clock, 0+3.0/1.0/0+0 ms cpu, 4->4->2 MB, 5 MB goal, 4 P
gc 129 @0.545s 0%: 0+1.0+0 ms clock, 0+2.0/1.0/0+0 ms cpu, 5->5->4 MB, 6 MB goal, 4 P
gc 130 @0.549s 0%: 0.90+0.13+0 ms clock, 3.6+0/0/0+0 ms cpu, 8->8->4 MB, 9 MB goal, 4 P
gc 131 @0.552s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 7->7->4 MB, 8 MB goal, 4 P
gc 132 @0.557s 0%: 0+1.0+0 ms clock, 0+0/0/0+0 ms cpu, 9->9->4 MB, 10 MB goal, 4 P
gc 133 @0.561s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 9->9->5 MB, 10 MB goal, 4 P
gc 134 @0.566s 0%: 0+0+0.50 ms clock, 0+0/0/0+2.0 ms cpu, 10->10->5 MB, 11 MB goal, 4 P
gc 135 @0.570s 0%: 0.50+0+0 ms clock, 1.5+0/0/0+0 ms cpu, 11->12->7 MB, 12 MB goal, 4 P
gc 136 @0.578s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 15->15->5 MB, 16 MB goal, 4 P
gc 137 @0.584s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 10->10->3 MB, 11 MB goal, 4 P
gc 138 @0.588s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 7->8->3 MB, 8 MB goal, 4 P
gc 139 @0.590s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 7->7->5 MB, 8 MB goal, 4 P
gc 140 @0.596s 0%: 0+0+1.0 ms clock, 0+0/0/0+3.0 ms cpu, 10->10->4 MB, 11 MB goal, 4 P
gc 141 @0.600s 0%: 0+1.0+0 ms clock, 0+0/1.0/0+0 ms cpu, 8->8->2 MB, 9 MB goal, 4 P
gc 142 @0.602s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 4->4->3 MB, 5 MB goal, 4 P
gc 143 @0.606s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 7->7->3 MB, 8 MB goal, 4 P
gc 144 @0.608s 0%: 0+1.0+0 ms clock, 0+1.0/0/0+0 ms cpu, 6->6->4 MB, 7 MB goal, 4 P
gc 145 @0.613s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 8->8->2 MB, 9 MB goal, 4 P
gc 146 @0.614s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 4->4->2 MB, 5 MB goal, 4 P
gc 147 @0.616s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 5->5->3 MB, 6 MB goal, 4 P
gc 148 @0.620s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 7->7->4 MB, 8 MB goal, 4 P
gc 149 @0.625s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 9->9->4 MB, 10 MB goal, 4 P
gc 150 @0.630s 0%: 0+1.0+0 ms clock, 0+0/1.0/0+0 ms cpu, 9->10->6 MB, 10 MB goal, 4 P
gc 151 @0.636s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 12->12->3 MB, 13 MB goal, 4 P
gc 152 @0.639s 0%: 0.75+0.036+0 ms clock, 3.0+0/0/0+0 ms cpu, 6->6->4 MB, 7 MB goal, 4 P
gc 153 @0.644s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 9->9->5 MB, 10 MB goal, 4 P
gc 154 @0.652s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 12->12->5 MB, 13 MB goal, 4 P
gc 155 @0.658s 0%: 0+4.5+0 ms clock, 0+1.9/0/0+0 ms cpu, 10->11->3 MB, 11 MB goal, 4 P
gc 156 @0.668s 0%: 0+0+1.0 ms clock, 0+0/0/0+3.0 ms cpu, 6->7->2 MB, 7 MB goal, 4 P
gc 157 @0.671s 0%: 0+0.50+0 ms clock, 0+1.0/0.50/0+0 ms cpu, 5->5->3 MB, 6 MB goal, 4 P
gc 158 @0.674s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 5->5->3 MB, 7 MB goal, 4 P
gc 159 @0.675s 0%: 0+0.50+0 ms clock, 0+0/0.50/0+0 ms cpu, 6->6->4 MB, 7 MB goal, 4 P
gc 160 @0.681s 0%: 0+3.0+0 ms clock, 0+1.0/0/0+0 ms cpu, 8->9->5 MB, 9 MB goal, 4 P
gc 161 @0.689s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 9->9->4 MB, 10 MB goal, 4 P
gc 162 @0.695s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 9->9->6 MB, 10 MB goal, 4 P
gc 163 @0.700s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 11->11->7 MB, 12 MB goal, 4 P
gc 164 @0.707s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 16->16->8 MB, 17 MB goal, 4 P
gc 165 @0.718s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 16->16->4 MB, 17 MB goal, 4 P
gc 166 @0.722s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 9->9->3 MB, 10 MB goal, 4 P
gc 167 @0.725s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 7->7->5 MB, 8 MB goal, 4 P
gc 168 @0.730s 0%: 0+0+1.0 ms clock, 0+0/0/0+4.0 ms cpu, 10->10->4 MB, 11 MB goal, 4 P
gc 169 @0.734s 0%: 1.0+0+0 ms clock, 4.0+0/0/0+0 ms cpu, 9->9->4 MB, 10 MB goal, 4 P
gc 170 @0.738s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 9->9->7 MB, 10 MB goal, 4 P
gc 171 @0.744s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 13->13->6 MB, 14 MB goal, 4 P
gc 172 @0.749s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 12->12->8 MB, 13 MB goal, 4 P
gc 173 @0.757s 0%: 1.0+0+0 ms clock, 3.0+0/0/0+0 ms cpu, 16->16->8 MB, 17 MB goal, 4 P
gc 174 @0.767s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 17->17->2 MB, 18 MB goal, 4 P
gc 175 @0.770s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 5->5->3 MB, 6 MB goal, 4 P
gc 176 @0.772s 0%: 0+0.32+0.19 ms clock, 0+0.64/0/0+0.76 ms cpu, 5->5->4 MB, 6 MB goal, 4 P
gc 177 @0.775s 0%: 0+0.50+0 ms clock, 0+0.56/0.031/0+0 ms cpu, 7->7->4 MB, 8 MB goal, 4 P
gc 178 @0.779s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 8->8->4 MB, 9 MB goal, 4 P
gc 179 @0.782s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 9->9->6 MB, 10 MB goal, 4 P
gc 180 @0.789s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 12->12->3 MB, 13 MB goal, 4 P
gc 181 @0.792s 0%: 0+1.0+0 ms clock, 0+0/0/0+0 ms cpu, 7->7->5 MB, 8 MB goal, 4 P
gc 182 @0.798s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 11->11->5 MB, 12 MB goal, 4 P
gc 183 @0.804s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 10->10->6 MB, 11 MB goal, 4 P
gc 184 @0.809s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 12->12->5 MB, 13 MB goal, 4 P
gc 185 @0.815s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 10->10->2 MB, 11 MB goal, 4 P
gc 186 @0.817s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 6->6->3 MB, 7 MB goal, 4 P
gc 187 @0.821s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 6->6->1 MB, 7 MB goal, 4 P
gc 188 @0.822s 0%: 1.0+0+0 ms clock, 3.0+0/0/0+0 ms cpu, 4->4->2 MB, 5 MB goal, 4 P
gc 189 @0.825s 0%: 0+1.0+0 ms clock, 0+2.0/1.0/0+0 ms cpu, 4->4->2 MB, 5 MB goal, 4 P
gc 190 @0.828s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 5->5->4 MB, 6 MB goal, 4 P
gc 191 @0.832s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 8->8->4 MB, 9 MB goal, 4 P
gc 192 @0.835s 0%: 0+0.33+0 ms clock, 0+0/0.33/0+0 ms cpu, 7->7->5 MB, 8 MB goal, 4 P
gc 193 @0.839s 0%: 0+0.99+0 ms clock, 0+0/0/0+0 ms cpu, 10->10->5 MB, 11 MB goal, 4 P
gc 194 @0.844s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 10->10->3 MB, 11 MB goal, 4 P
gc 195 @0.849s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 7->7->4 MB, 8 MB goal, 4 P
gc 196 @0.852s 0%: 1.0+0+0 ms clock, 2.0+0/0/0+0 ms cpu, 8->9->4 MB, 9 MB goal, 4 P
gc 197 @0.857s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 8->8->4 MB, 9 MB goal, 4 P
gc 198 @0.860s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 8->8->5 MB, 9 MB goal, 4 P
gc 199 @0.865s 0%: 0+0.075+0 ms clock, 0+0/0.075/0+0 ms cpu, 9->9->5 MB, 10 MB goal, 4 P
gc 200 @0.871s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 10->10->3 MB, 11 MB goal, 4 P
gc 201 @0.874s 0%: 0+0+0.50 ms clock, 0+0/0/0+1.5 ms cpu, 8->8->6 MB, 9 MB goal, 4 P
gc 202 @0.879s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 11->11->6 MB, 12 MB goal, 4 P
gc 203 @0.885s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 12->12->2 MB, 13 MB goal, 4 P
gc 204 @0.887s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 4->4->3 MB, 5 MB goal, 4 P
gc 205 @0.890s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 6->6->4 MB, 7 MB goal, 4 P
gc 206 @0.893s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 8->8->5 MB, 9 MB goal, 4 P
gc 207 @0.898s 0%: 1.0+0+0 ms clock, 2.0+0/0/0+0 ms cpu, 11->11->6 MB, 12 MB goal, 4 P
gc 208 @0.904s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 12->12->5 MB, 13 MB goal, 4 P
gc 209 @0.908s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 10->10->5 MB, 11 MB goal, 4 P
gc 210 @0.914s 0%: 0+1.0+0 ms clock, 0+0/1.0/0+0 ms cpu, 11->11->4 MB, 12 MB goal, 4 P
gc 211 @0.920s 0%: 0+0+1.0 ms clock, 0+0/0/0+3.0 ms cpu, 9->9->5 MB, 10 MB goal, 4 P
gc 212 @0.924s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 10->10->6 MB, 11 MB goal, 4 P
gc 213 @0.930s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 13->13->4 MB, 14 MB goal, 4 P
gc 214 @0.934s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 8->8->5 MB, 9 MB goal, 4 P
gc 215 @0.939s 0%: 0+1.0+0 ms clock, 0+2.0/1.0/0+0 ms cpu, 10->10->3 MB, 11 MB goal, 4 P
gc 216 @0.942s 0%: 0+1.0+0 ms clock, 0+3.0/1.0/0+0 ms cpu, 7->7->5 MB, 8 MB goal, 4 P
gc 217 @0.946s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 10->10->5 MB, 11 MB goal, 4 P
gc 218 @0.952s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 10->10->4 MB, 11 MB goal, 4 P
gc 219 @0.955s 0%: 1.0+0+0 ms clock, 4.0+0/0/0+0 ms cpu, 8->8->4 MB, 9 MB goal, 4 P
gc 220 @0.960s 0%: 1.0+0+0 ms clock, 3.0+0/0/0+0 ms cpu, 9->9->4 MB, 10 MB goal, 4 P
gc 221 @0.964s 0%: 0+0.18+0 ms clock, 0+0/0.005/0+0 ms cpu, 8->8->4 MB, 9 MB goal, 4 P
gc 222 @0.968s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 9->9->3 MB, 10 MB goal, 4 P
gc 223 @0.973s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 7->7->1 MB, 8 MB goal, 4 P
gc 224 @0.975s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 4->4->2 MB, 5 MB goal, 4 P
gc 225 @0.977s 0%: 0+0.49+0 ms clock, 0+0.99/0.49/0+0 ms cpu, 5->5->1 MB, 6 MB goal, 4 P
gc 226 @0.980s 0%: 0+0+1.0 ms clock, 0+0/0/0+3.0 ms cpu, 4->4->2 MB, 5 MB goal, 4 P
gc 227 @0.982s 0%: 0.73+0.10+0 ms clock, 2.9+0/0/0+0 ms cpu, 4->4->3 MB, 5 MB goal, 4 P
gc 228 @0.986s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 8->8->3 MB, 9 MB goal, 4 P
gc 229 @0.990s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 7->7->3 MB, 8 MB goal, 4 P
gc 230 @0.993s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 7->7->3 MB, 8 MB goal, 4 P
gc 231 @0.998s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 7->7->4 MB, 8 MB goal, 4 P
gc 232 @1.001s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 9->9->6 MB, 10 MB goal, 4 P
gc 233 @1.006s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 10->10->4 MB, 12 MB goal, 4 P
gc 234 @1.009s 0%: 0+1.0+0 ms clock, 0+1.0/1.0/0+0 ms cpu, 8->8->3 MB, 9 MB goal, 4 P
gc 235 @1.013s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 7->7->4 MB, 8 MB goal, 4 P
gc 236 @1.017s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 8->8->3 MB, 9 MB goal, 4 P
gc 237 @1.020s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 7->7->4 MB, 8 MB goal, 4 P
gc 238 @1.024s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 9->9->5 MB, 10 MB goal, 4 P
gc 239 @1.029s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 10->10->3 MB, 11 MB goal, 4 P
gc 240 @1.032s 0%: 0.44+0.018+0 ms clock, 1.7+0/0/0+0 ms cpu, 7->7->4 MB, 8 MB goal, 4 P
gc 241 @1.036s 0%: 0.88+0.13+0 ms clock, 3.5+0/0/0+0 ms cpu, 10->10->4 MB, 11 MB goal, 4 P
gc 242 @1.040s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 7->7->3 MB, 8 MB goal, 4 P
gc 243 @1.044s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 6->6->4 MB, 7 MB goal, 4 P
gc 244 @1.047s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 8->8->3 MB, 9 MB goal, 4 P
gc 245 @1.050s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 6->6->3 MB, 7 MB goal, 4 P
gc 246 @1.053s 0%: 0+1.0+0.99 ms clock, 0+2.0/1.0/0+3.9 ms cpu, 7->7->3 MB, 8 MB goal, 4 P
gc 247 @1.057s 0%: 0+0+1.0 ms clock, 0+0/0/0+4.0 ms cpu, 6->7->4 MB, 7 MB goal, 4 P
gc 248 @1.063s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 9->9->5 MB, 10 MB goal, 4 P
gc 249 @1.067s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 9->9->4 MB, 10 MB goal, 4 P
gc 250 @1.071s 0%: 0+0+1.5 ms clock, 0+0/0/0+4.5 ms cpu, 8->8->3 MB, 9 MB goal, 4 P
gc 251 @1.074s 0%: 0+0+0.50 ms clock, 0+0/0/0+1.5 ms cpu, 6->6->4 MB, 7 MB goal, 4 P
gc 252 @1.078s 0%: 0.50+0+0 ms clock, 1.5+0/0/0+0 ms cpu, 8->8->5 MB, 9 MB goal, 4 P
gc 253 @1.084s 0%: 0+0+1.0 ms clock, 0+0/0/0+3.0 ms cpu, 12->12->4 MB, 13 MB goal, 4 P
gc 254 @1.088s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 8->8->4 MB, 9 MB goal, 4 P
gc 255 @1.092s 0%: 0+0.46+0 ms clock, 0+0.010/0.46/0+0 ms cpu, 8->8->4 MB, 9 MB goal, 4 P
gc 256 @1.097s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 9->9->5 MB, 10 MB goal, 4 P
gc 257 @1.103s 0%: 0+1.0+0 ms clock, 0+0/0/0+0 ms cpu, 10->10->2 MB, 11 MB goal, 4 P
gc 258 @1.105s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 4->4->3 MB, 5 MB goal, 4 P
gc 259 @1.106s 0%: 0+0+1.0 ms clock, 0+0/0/0+3.0 ms cpu, 6->6->4 MB, 7 MB goal, 4 P
gc 260 @1.110s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 8->8->6 MB, 9 MB goal, 4 P
gc 261 @1.118s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 13->13->4 MB, 14 MB goal, 4 P
gc 262 @1.121s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 8->8->5 MB, 9 MB goal, 4 P
gc 263 @1.126s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 10->10->1 MB, 11 MB goal, 4 P
gc 264 @1.128s 0%: 1.0+0+0 ms clock, 3.0+0/0/0+0 ms cpu, 4->4->2 MB, 5 MB goal, 4 P
gc 265 @1.131s 0%: 0+1.0+0 ms clock, 0+0/1.0/0+0 ms cpu, 5->5->2 MB, 6 MB goal, 4 P
gc 266 @1.134s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 5->5->4 MB, 6 MB goal, 4 P
gc 267 @1.139s 0%: 0+1.0+0 ms clock, 0+0/0/0+0 ms cpu, 9->9->4 MB, 10 MB goal, 4 P
gc 268 @1.143s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 8->8->3 MB, 9 MB goal, 4 P
gc 269 @1.145s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 6->6->5 MB, 7 MB goal, 4 P
gc 270 @1.151s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 9->9->2 MB, 10 MB goal, 4 P
gc 271 @1.153s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 4->4->3 MB, 5 MB goal, 4 P
gc 272 @1.156s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 6->6->3 MB, 7 MB goal, 4 P
gc 273 @1.158s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 7->7->5 MB, 8 MB goal, 4 P
gc 274 @1.163s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 10->10->6 MB, 11 MB goal, 4 P
gc 275 @1.169s 0%: 0+0.50+1.5 ms clock, 0+1.5/0.50/0+4.5 ms cpu, 13->13->6 MB, 14 MB goal, 4 P
gc 276 @1.178s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 12->12->5 MB, 13 MB goal, 4 P
gc 277 @1.182s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 11->11->4 MB, 12 MB goal, 4 P
gc 278 @1.185s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 7->7->4 MB, 8 MB goal, 4 P
gc 279 @1.189s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 9->9->6 MB, 10 MB goal, 4 P
gc 280 @1.195s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 11->11->6 MB, 12 MB goal, 4 P
gc 281 @1.202s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 13->13->2 MB, 14 MB goal, 4 P
gc 282 @1.205s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 4->4->2 MB, 5 MB goal, 4 P
gc 283 @1.205s 0%: 1.0+0+0 ms clock, 3.0+0/0/0+0 ms cpu, 5->6->5 MB, 6 MB goal, 4 P
gc 284 @1.210s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 10->10->5 MB, 11 MB goal, 4 P
gc 285 @1.216s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 10->11->6 MB, 11 MB goal, 4 P
gc 286 @1.222s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 11->11->5 MB, 12 MB goal, 4 P
gc 287 @1.227s 0%: 0+0+1.0 ms clock, 0+0/0/0+3.0 ms cpu, 11->11->3 MB, 12 MB goal, 4 P
gc 288 @1.230s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 7->7->4 MB, 8 MB goal, 4 P
gc 289 @1.234s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 9->10->7 MB, 10 MB goal, 4 P
gc 290 @1.240s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 13->13->4 MB, 14 MB goal, 4 P
gc 291 @1.245s 0%: 0+0+1.0 ms clock, 0+0/0/0+3.0 ms cpu, 8->8->2 MB, 9 MB goal, 4 P
gc 292 @1.248s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 4->4->2 MB, 5 MB goal, 4 P
gc 293 @1.250s 0%: 0+0.60+0 ms clock, 0+1.2/0.60/0+0 ms cpu, 4->4->3 MB, 5 MB goal, 4 P
gc 294 @1.252s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 6->6->4 MB, 7 MB goal, 4 P
gc 295 @1.255s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 8->9->5 MB, 9 MB goal, 4 P
gc 296 @1.260s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 11->11->7 MB, 12 MB goal, 4 P
gc 297 @1.267s 0%: 0+0.50+0 ms clock, 0+1.0/0.50/0+0 ms cpu, 15->15->6 MB, 16 MB goal, 4 P
gc 298 @1.273s 0%: 0.50+0+0 ms clock, 1.5+0/0/0+0 ms cpu, 13->13->6 MB, 14 MB goal, 4 P
gc 299 @1.279s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 11->11->3 MB, 12 MB goal, 4 P
gc 300 @1.282s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 6->6->4 MB, 7 MB goal, 4 P
gc 301 @1.286s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 7->7->5 MB, 8 MB goal, 4 P
gc 302 @1.289s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 9->9->5 MB, 10 MB goal, 4 P
gc 303 @1.295s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 11->11->7 MB, 12 MB goal, 4 P
gc 304 @1.302s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 14->14->7 MB, 15 MB goal, 4 P
gc 305 @1.308s 0%: 0.82+0.12+0 ms clock, 3.3+0/0/0+0 ms cpu, 15->15->7 MB, 16 MB goal, 4 P
gc 306 @1.314s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 14->14->6 MB, 15 MB goal, 4 P
gc 307 @1.321s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 11->11->1 MB, 12 MB goal, 4 P
gc 308 @1.323s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 4->4->2 MB, 5 MB goal, 4 P
gc 309 @1.327s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 5->5->3 MB, 6 MB goal, 4 P
gc 310 @1.327s 0%: 1.0+0+0 ms clock, 3.0+0/0/0+0 ms cpu, 6->6->5 MB, 7 MB goal, 4 P
gc 311 @1.333s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 10->10->4 MB, 11 MB goal, 4 P
gc 312 @1.337s 0%: 0.45+0.070+0 ms clock, 1.8+0/0/0+0 ms cpu, 9->9->6 MB, 10 MB goal, 4 P
gc 313 @1.344s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 13->13->7 MB, 14 MB goal, 4 P
gc 314 @1.350s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 13->13->2 MB, 14 MB goal, 4 P
gc 315 @1.352s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 4->4->2 MB, 5 MB goal, 4 P
gc 316 @1.355s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 5->5->2 MB, 6 MB goal, 4 P
gc 317 @1.357s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 4->4->1 MB, 5 MB goal, 4 P
gc 318 @1.444s 0%: 0+0.99+0 ms clock, 0+0/0.99/0+0 ms cpu, 4->4->2 MB, 5 MB goal, 4 P
gc 319 @1.446s 0%: 1.0+0.98+0 ms clock, 4.0+0/0/0+0 ms cpu, 5->5->4 MB, 6 MB goal, 4 P
gc 320 @1.461s 0%: 0+1.0+0 ms clock, 0+3.0/1.0/0+0 ms cpu, 7->7->4 MB, 8 MB goal, 4 P
gc 321 @1.475s 0%: 0+1.0+0 ms clock, 0+3.0/1.0/0+0 ms cpu, 8->8->5 MB, 9 MB goal, 4 P
gc 322 @1.489s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 11->11->7 MB, 12 MB goal, 4 P
gc 323 @1.501s 0%: 0+1.0+0 ms clock, 0+3.0/1.0/0+0 ms cpu, 14->14->8 MB, 15 MB goal, 4 P
gc 324 @1.514s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 17->17->11 MB, 18 MB goal, 4 P
gc 325 @1.525s 0%: 0+1.0+0 ms clock, 0+0/1.0/0+0 ms cpu, 22->22->14 MB, 23 MB goal, 4 P
gc 326 @1.538s 0%: 0+1.0+0 ms clock, 0+3.0/1.0/0+0 ms cpu, 28->28->18 MB, 29 MB goal, 4 P
gc 327 @1.553s 0%: 0+1.0+0 ms clock, 0+0/1.0/0+0 ms cpu, 36->36->24 MB, 37 MB goal, 4 P
gc 328 @1.575s 0%: 0+0.50+0 ms clock, 0+1.0/0.50/0+0 ms cpu, 48->48->31 MB, 49 MB goal, 4 P
gc 329 @1.603s 0%: 0+1.0+0 ms clock, 0+0/0/0+0 ms cpu, 61->61->40 MB, 63 MB goal, 4 P
gc 330 @1.638s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 80->80->53 MB, 81 MB goal, 4 P
gc 331 @1.690s 0%: 1.0+0+0 ms clock, 1.0+0/0/0+0 ms cpu, 104->104->68 MB, 107 MB goal, 4 P
gc 332 @1.756s 0%: 0+0.99+0 ms clock, 0+1.9/0.99/0+0 ms cpu, 134->134->89 MB, 137 MB goal, 4 P
gc 333 @1.844s 0%: 0+1.0+0 ms clock, 0+1.0/1.0/0+0 ms cpu, 174->174->74 MB, 178 MB goal, 4 P
gc 334 @1.914s 0%: 1.0+0+0 ms clock, 1.0+0/0/0+0 ms cpu, 144->144->75 MB, 148 MB goal, 4 P
gc 335 @1.978s 0%: 0+0.50+1.0 ms clock, 0+0/0.50/0+1.0 ms cpu, 148->148->97 MB, 151 MB goal, 4 P
gc 336 @2.060s 0%: 0+1.0+0 ms clock, 0+1.0/1.0/0+0 ms cpu, 190->190->126 MB, 195 MB goal, 4 P
gc 337 @2.167s 0%: 0+0.50+0 ms clock, 0+1.5/0.50/0+0 ms cpu, 245->246->162 MB, 252 MB goal, 4 P
gc 338 @2.303s 0%: 0+1.0+0 ms clock, 0+1.0/1.0/0+0 ms cpu, 317->317->210 MB, 325 MB goal, 4 P
gc 339 @2.479s 0%: 0+1.0+0 ms clock, 0+0/1.0/0.50+0 ms cpu, 409->410->272 MB, 420 MB goal, 4 P
gc 340 @2.719s 0%: 0+6.0+0 ms clock, 0+0/2.0/0+0 ms cpu, 530->531->341 MB, 544 MB goal, 4 P
gc 341 @3.019s 0%: 0+2.5+0 ms clock, 0+0.50/2.5/0+0 ms cpu, 666->667->431 MB, 683 MB goal, 4 P
gc 342 @3.406s 0%: 0+3.0+0 ms clock, 0+0.50/2.5/0+0 ms cpu, 841->841->438 MB, 862 MB goal, 4 P
gc 343 @3.818s 0%: 0+2.5+0 ms clock, 0+1.0/2.5/0+0 ms cpu, 855->855->311 MB, 877 MB goal, 4 P
gc 344 @4.023s 0%: 0+1.5+0 ms clock, 0+0.50/1.4/0+0 ms cpu, 537->537->339 MB, 623 MB goal, 4 P (forced)
GC forced
gc 345 @124.956s 0%: 0+1.5+0 ms clock, 0+0/1.5/3.5+0 ms cpu, 347->347->0 MB, 678 MB goal, 4 P
scvg0: inuse: 1, idle: 855, sys: 857, released: 0, consumed: 857 (MB)
GC forced
gc 346 @244.978s 0%: 0+1.0+0 ms clock, 0+0/0/1.0+0 ms cpu, 0->0->0 MB, 4 MB goal, 4 P
5 minutes passed
scvg1: 8 MB released
scvg1: inuse: 1, idle: 855, sys: 857, released: 8, consumed: 849 (MB)
GC forced
gc 347 @424.055s 0%: 0+0.99+0 ms clock, 0+0/0/0.99+0 ms cpu, 0->0->0 MB, 4 MB goal, 4 P
GC forced
scvg2: 754 MB released
scvg2: inuse: 1, idle: 855, sys: 857, released: 762, consumed: 94 (MB)
gc 348 @544.077s 0%: 0+66+0 ms clock, 0+0/0/0+0 ms cpu, 0->0->0 MB, 4 MB goal, 4 P

Alex

@randall77

Contributor Author

commented Oct 26, 2017

The Go runtime claims it returned all but 94MB to the OS (the scvg2 lines).
Maybe my hunch earlier was correct, or maybe the "Memory" reported is virtual, not physical.
In any case, I suspect the error is with the observations, not with Go. I think we need an expert familiar with the Windows virtual memory system to know for sure.

@alexbrainman

Member

commented Oct 26, 2017

I downloaded the VMMap program to see what is going on with this process. And it does use a lot of memory. This particular program creates quite a few threads, and the OS allocates 2MB of stack space for each thread. @aclements merged CL 49331 recently, which changed oh64.SizeOfStackCommit from 0x00001000 to 0x00200000. I think it is a mistake (and I did not notice at the time). We should change it back to make the initial thread stack size small again.

It would be nice to have a test to prevent mistakes like that in the future. I could probably write a test that measures the committed memory of an external program. But what should I compare that value with?

Alex

@randall77

Contributor Author

commented Oct 26, 2017

Ah, I see, this might be Ms, not Gs or the heap, that are using up all the memory.
In CL 49331, Austin claimed large M stacks just use virtual memory, not physical memory. Are you disputing that?

@alexbrainman

Member

commented Oct 26, 2017

In CL 49331, Austin claimed large M stacks just use virtual memory, not physical memory. Are you disputing that?

CL 49331 intended to change just the use of virtual memory, but, I think, there is a mistake in CL 49331 - change of oh64.SizeOfStackCommit from 0x00001000 to 0x00200000 actually affects physical memory.

Windows memory management is quite complicated, but, for the purpose of this discussion, each page of process address space can be in one of three states: free, reserved, or committed (see https://msdn.microsoft.com/en-us/library/windows/desktop/aa366794(v=vs.85).aspx ). Only "committed" memory requires actual physical memory to back it.

When a process starts, the EXE file describes to the OS how stacks should be managed. In particular, the PE file format specifies the maximum size of the stack to be used (stored in oh64.SizeOfStackReserve). When the process creates a new thread, it "reserves" oh64.SizeOfStackReserve bytes for that stack. The PE file also specifies how much of those oh64.SizeOfStackReserve bytes should be "committed" at the start of each thread (stored in oh64.SizeOfStackCommit). So normal stacks start as a large "reserved" region with a small part of it "committed" at the beginning. As the stack grows, the "committed" part grows too - when the program grows the stack beyond the "committed" area and tries to read or write that memory, a system exception is raised, and the memory manager handles the exception by committing more memory and restarting the failed code.

So for a particular case of non-cgo windows/amd64 we had:

oh64.SizeOfStackReserve = 0x00020000
oh64.SizeOfStackCommit = 0x00001000

before CL 49331, and:

oh64.SizeOfStackReserve = 0x00200000
oh64.SizeOfStackCommit = 0x00200000 - 0x2000 // account for 2 guard pages

after CL 49331. So the max stack size was increased from 128K to 2M, but the "committed" part of the stack was also increased from 4K to 2M. I think we should not have made the second change, because now every new thread costs us 2M of physical memory, compared to 4K before CL 49331. And once "committed", stack memory cannot be freed - I suspect only an exiting thread will free its stack memory.

I hope I explained it well enough.

Alex

@randall77

Contributor Author

commented Oct 27, 2017

Yes, that all makes sense, thanks.
Can someone test such a change on a Windows machine, and see if it helps this case?

And once "committed", stack memory cannot be freed - I suspect only an exiting thread will free its stack memory.

Are we ever exiting Ms that have been idle for a long time? If not, then doing that might fix the issue. Maybe an additional task for the scavenger to do.

@aclements

Member

commented Oct 27, 2017

I see. It looks like I just did what we were already doing in the cgo case. I don't know why we were committing basically the whole stack under cgo in the first place. I'm certainly fine with reducing the commit if that works.

Are we ever exiting Ms that have been idle for a long time? If not, then doing that might fix the issue. Maybe an additional task for the scavenger to do.

I recently added the framework necessary to do this, but right now we only exit locked Ms (when their Gs exit without unlocking). We don't use this for exiting idle Ms currently, but we could.

@alexbrainman

Member

commented Oct 27, 2017

Can someone test such a change on a Windows machine, and see if it helps this case?

I did test that yesterday. Setting oh64.SizeOfStackCommit back to 0x00001000 helps with memory usage significantly. We can get @6degreeshealth to check that it helps too, to be sure. I am worried that we broke this without anyone noticing. I wonder if I could write a test to stop this happening in the future.

Are we ever exiting Ms that have been idle for a long time?

I don't think so. But Austin will know.

If not, then doing that might fix the issue. Maybe an additional task for the scavenger to do.

Yes, the fact that thread stacks consume memory (even when threads are idle) might be a good reason to exit them.

I see. It looks like I just did what we were already doing in the cgo case. I don't know why we were committing basically the whole stack under cgo in the first place.

The cgo case is very different from the non-cgo case as far as I am concerned. I hope not many people use cgo, especially on Windows, because we have syscall.Syscall to easily call into Windows C code without cgo. And cgo executables are created by an external linker, so I am not sure if we can control the SizeOfStackReserve and SizeOfStackCommit fields in the PE exe. Also we use the _beginthread function to start threads with cgo. I do not know if _beginthread allows us to control the "commit" stack size. I probably dropped the ball with cgo. Do we want to try and improve this for the cgo case too?

I'm certainly fine with reducing the commit if that works.

Cool. I will send the change. The change should be simple, I just wonder about the test.

Alex

@aclements

Member

commented Oct 27, 2017

And cgo executables are created by external linker, so I am not sure if we can control SizeOfStackReserve and SizeOfStackCommit fields in the PE exe.

That's true. Though I would expect the external linker to set reasonable "normal" stack sizes for Windows, which presumably have a small commit?

Also we use the _beginthread function to start threads with cgo. I do not know if _beginthread allows us to control the "commit" stack size.

We pass 0 for the stack size, which MSDN says means "the same value as the stack that's specified for the main thread", which I expect means to use both the reservation and commit from the PE header.

The change should be simple, I just wonder about the test.

Start lots of threads and if the system crashes the test fails? :)

@robarchibald


commented Oct 27, 2017

I would be happy to test, but I'm on a business trip away from my Windows computer. I won't be able to test until Saturday. I assume that to test I'll need to build Go from source using the latest source. Never done that before, but I'll give it a whirl.

@alexbrainman

Member

commented Oct 27, 2017

That's true. Though I would expect the external linker to set reasonable "normal" stack sizes for Windows, which presumably have a small commit?

I will see what we get with external linker.

Start lots of threads and if the system crashes the test fails? :)

I was actually hoping you would be helpful. ;-)

I would be happy to test

That would be nice, if you test when my change is ready. Thank you.

but I'm on a business trip away from my Windows computer. I won't be able to test until Saturday.

The change is not ready, so do not worry.

I assume that to test I'll need to build Go from source using the latest source. Never done that before, but I'll give it a whirl.

Yes, you will need to build Go from source. Here https://golang.org/doc/install/source are the instructions. We will help you if you get into trouble.

Alex

@aclements

Member

commented Oct 27, 2017

For the test, I was thinking the Windows equivalent of this:

package main

import (
	"fmt"
	"io/ioutil"
	"runtime"
	"strconv"
	"strings"
	"sync"
	"syscall"
)

func main() {
	var wg sync.WaitGroup
	for i := 0; i < 1000; i++ {
		wg.Add(1)
		go func() {
			runtime.LockOSThread()
			wg.Done()
			select {}
		}()
	}
	wg.Wait()

	vsize, rss := mem()
	fmt.Printf("%d bytes RSS (%d bytes virtual)\n", rss, vsize)
}

func mem() (vsize, rss uintptr) {
	stat, err := ioutil.ReadFile("/proc/self/stat")
	if err != nil {
		panic(err)
	}
	parts := strings.Fields(string(stat))
	parseInt := func(x string) uintptr {
		y, err := strconv.ParseInt(x, 10, 64)
		if err != nil {
			panic(err)
		}
		return uintptr(y)
	}
	vsize = parseInt(parts[22])
	rss = parseInt(parts[23]) * uintptr(syscall.Getpagesize())
	return
}

E.g., on Linux/amd64, I get a mere "3973120 bytes RSS (52129792 bytes virtual)".

I'm not sure what the Windows equivalent of /proc/self/stat's RSS is. Maybe PROCESS_MEMORY_COUNTERS_EX.PrivateUsage?

Unfortunately, this probably has to be a runtime/testdata/testprog thing rather than a regular test to avoid interference from other tests' memory usage.

@alexbrainman

Member

commented Oct 27, 2017

For the test, I was thinking the Windows equivalent of this:

Thank you, Austin. I will have a go.

Alex

@gopherbot


commented Oct 30, 2017

Change https://golang.org/cl/74490 mentions this issue: cmd/link: restore windows stack commit size back to 4KB

@alexbrainman

Member

commented Oct 30, 2017

I'm not sure what the Windows equivalent of /proc/self/stat's RSS is. Maybe PROCESS_MEMORY_COUNTERS_EX.PrivateUsage?

Sounds correct to me. That is what I have used in my CL 74490 anyway.

@robarchibald if you want to see if your problem is fixed, you need to build version of Go that uses this source code:

git fetch https://go.googlesource.com/go refs/changes/90/74490/1 && git checkout FETCH_HEAD

Please let us know how it goes. Thank you.

Alex

@ghost


commented Oct 31, 2017

Thanks @alexbrainman, @aclements, @randall77 for your work on this. Unfortunately, the problem doesn't seem to be resolved. The fix is only marginally better. What's interesting today is that if I run this program more than once it will behave differently each time. I'm seeing it vary between 70 MB and 1.5 GB of memory. I can only get to 1.5 GB of memory if I run two of the memoryLeak.exe at the same time. Why would the memory usage skyrocket when system utilization is higher? It's as if the system gets heavily utilized and suddenly it can't keep up and starts over-allocating or forgetting it has the memory allocated. Is it possible (gasp) that the garbage collector has a concurrency problem?

I replaced the time.Sleep(10 * time.Minute) with

	for i := 1; i <= 11; i++ {
		time.Sleep(1 * time.Minute)
		fmt.Printf("%d minutes has passed\n", i)
	}

Interestingly, today Go is eventually cleaning up after itself, but not until somewhere between 8 and 10 minutes. I noticed the memory go down on one run so I switched the wait to be 11 minutes instead of 10 so I wouldn't mistake any cleanup as the process just killing itself. That's also why I added more output logging as shown above. Is there a way I can trick the GC to clean up sooner? I put runtime.GC() in the code, but it doesn't seem to do anything. It appears that the scavenger doesn't notice that it has an extra 1686 and 1625 MB of memory until 8-10 minutes into program execution. Once it reports this extra memory, the OS dutifully cleans up. Why does Go keep such a messy room for so long?

Here is the full gctrace for memoryLeak.exe and memoryLeak_fix.exe (compiled using the recent fix). I ran these concurrently from the command line.

memoryLeak.exe

gc 1 @18445023415.344s 0%: 0+2.9+0 ms clock, 0+0/0/0+0 ms cpu, 4->5->1 MB, 5 MB goal, 8 P
gc 2 @18445023415.349s 0%: 0+2.0+0.49 ms clock, 0+0/0.50/0+0.99 ms cpu, 4->4->2 MB, 5 MB goal, 8 P
gc 3 @18445023415.352s 0%: 0+0.49+0.49 ms clock, 0+1.4/0.99/0+0.49 ms cpu, 4->4->2 MB, 5 MB goal, 8 P
gc 4 @18445023415.355s 0%: 0+0.50+0.49 ms clock, 0+0/1.0/0+0.99 ms cpu, 6->6->4 MB, 7 MB goal, 8 P
gc 5 @18445023415.358s 0%: 0+3.0+0 ms clock, 0+0/0/0+0 ms cpu, 7->7->4 MB, 9 MB goal, 8 P
gc 6 @18445023415.365s 0%: 0.49+0.50+0.49 ms clock, 1.9+0/0/0+1.9 ms cpu, 8->8->5 MB, 9 MB goal, 8 P
gc 7 @18445023415.371s 0%: 3.9+0.50+0 ms clock, 3.9+1.0/1.0/0+0 ms cpu, 10->10->6 MB, 11 MB goal, 8 P
gc 8 @18445023415.383s 0%: 0+6.5+3.9 ms clock, 0+0.50/0/0+3.9 ms cpu, 12->13->8 MB, 13 MB goal, 8 P
gc 9 @18445023415.402s 0%: 4.0+2.4+0 ms clock, 12+0/0.50/0+0 ms cpu, 16->17->8 MB, 18 MB goal, 8 P
gc 10 @18445023415.415s 0%: 0+5.9+5.9 ms clock, 0+0/0/0+11 ms cpu, 16->17->9 MB, 17 MB goal, 8 P
gc 11 @18445023415.436s 0%: 1.0+2.4+0 ms clock, 3.0+0/0/0+0 ms cpu, 17->18->11 MB, 19 MB goal, 8 P
gc 12 @18445023415.454s 0%: 1.0+1.4+0.49 ms clock, 1.0+0/0/0+0.49 ms cpu, 21->21->13 MB, 23 MB goal, 8 P
gc 13 @18445023415.467s 0%: 0.50+0+2.9 ms clock, 0.50+0/1.0/0+2.9 ms cpu, 26->26->15 MB, 27 MB goal, 8 P
gc 14 @18445023415.482s 0%: 0+0.49+0 ms clock, 0+0.99/0.99/0+0 ms cpu, 31->31->21 MB, 32 MB goal, 8 P
gc 15 @18445023415.500s 0%: 0.50+7.4+0 ms clock, 0.50+0/5.5/0+0 ms cpu, 42->44->28 MB, 43 MB goal, 8 P
gc 16 @18445023415.533s 0%: 2.4+6.9+0.50 ms clock, 2.4+0/0/0+0.50 ms cpu, 55->56->36 MB, 57 MB goal, 8 P
gc 17 @18445023415.578s 0%: 0+2.4+0 ms clock, 0+0/0/0+0 ms cpu, 70->71->47 MB, 72 MB goal, 8 P
gc 18 @18445023415.627s 0%: 0+2.0+0.49 ms clock, 0+1.0/0/0+0.49 ms cpu, 92->92->61 MB, 94 MB goal, 8 P
gc 19 @18445023415.694s 0%: 0+5.4+0.50 ms clock, 0+0/1.0/0+0.50 ms cpu, 119->119->78 MB, 122 MB goal, 8 P
gc 20 @18445023415.781s 0%: 0+5.9+0.50 ms clock, 0+0.50/1.5/0+0.50 ms cpu, 153->154->99 MB, 156 MB goal, 8 P
gc 21 @18445023415.886s 0%: 0.50+8.9+0 ms clock, 0.50+0/1.5/0+0 ms cpu, 194->195->128 MB, 199 MB goal, 8 P
gc 22 @18445023416.027s 0%: 0.50+3.9+2.5 ms clock, 1.0+0/1.9/0+5.0 ms cpu, 250->250->164 MB, 256 MB goal, 8 P
gc 23 @18445023416.195s 0%: 0.50+6.4+2.0 ms clock, 0.50+0/4.9/0+2.0 ms cpu, 320->322->214 MB, 328 MB goal, 8 P
gc 24 @18445023416.437s 0%: 3.4+9.0+0 ms clock, 10+0.50/1.4/0+0 ms cpu, 417->418->271 MB, 428 MB goal, 8 P
gc 25 @18445023416.702s 0%: 0+21+1.0 ms clock, 0+0/2.5/0+2.0 ms cpu, 529->534->355 MB, 542 MB goal, 8 P
gc 26 @18445023417.067s 0%: 2.5+10+1.0 ms clock, 5.0+0.99/1.9/0+2.0 ms cpu, 693->697->458 MB, 711 MB goal, 8 P
gc 27 @18445023417.538s 0%: 0.50+17+2.0 ms clock, 1.0+0.49/2.9/0+4.0 ms cpu, 894->901->586 MB, 917 MB goal, 8 P
gc 28 @18445023418.120s 0%: 0.50+21+3.9 ms clock, 0.50+0/3.9/0+3.9 ms cpu, 1142->1149->753 MB, 1172 MB goal, 8 P
gc 29 @18445023418.886s 0%: 0+3.9+0 ms clock, 0+0/6.0/0+0 ms cpu, 1469->1469->969 MB, 1506 MB goal, 8 P
gc 30 @18445023419.748s 0%: 0+10+0.50 ms clock, 0+0.49/7.9/0+1.0 ms cpu, 1781->1783->1222 MB, 1939 MB goal, 8 P (forced)
1 minutes has passed
2 minutes has passed
GC forced
gc 31 @18445023539.784s 0%: 0+0.99+0 ms clock, 0+0/1.4/1.9+0 ms cpu, 1234->1234->1 MB, 2445 MB goal, 8 P
scvg0: inuse: 2, idle: 1785, sys: 1787, released: 0, consumed: 1787 (MB)
3 minutes has passed
4 minutes has passed
5 minutes has passed
GC forced
gc 32 @18445023719.785s 0%: 0+1.0+0 ms clock, 0+0/1.5/2.4+0 ms cpu, 1->1->1 MB, 4 MB goal, 8 P
6 minutes has passed
scvg1: 3 MB released
scvg1: inuse: 2, idle: 1785, sys: 1787, released: 3, consumed: 1784 (MB)
7 minutes has passed
8 minutes has passed
GC forced
gc 33 @18445023899.785s 0%: 0+1.0+0 ms clock, 0+0/1.0/0.50+0 ms cpu, 1->1->1 MB, 4 MB goal, 8 P
scvg2: 1686 MB released
scvg2: inuse: 2, idle: 1785, sys: 1787, released: 1689, consumed: 98 (MB)
9 minutes has passed
GC forced
10 minutes has passed
gc 34 @18445024019.905s 0%: 0+0.50+0.50 ms clock, 0+0/1.0/0+4.0 ms cpu, 1->1->1 MB, 4 MB goal, 8 P
11 minutes has passed

memoryLeak_fix.exe

gc 1 @18445023415.341s 0%: 0.50+0+0 ms clock, 2.5+0/0/0+0 ms cpu, 4->4->1 MB, 5 MB goal, 8 P
gc 2 @18445023415.345s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 4->4->1 MB, 5 MB goal, 8 P
gc 3 @18445023415.347s 0%: 0+2.0+0 ms clock, 0+0/0/0+0 ms cpu, 4->4->2 MB, 5 MB goal, 8 P
gc 4 @18445023415.351s 0%: 4.5+1.0+0.99 ms clock, 9.0+0/0/0+1.9 ms cpu, 4->5->3 MB, 5 MB goal, 8 P
gc 5 @18445023415.360s 0%: 0.49+0.50+0 ms clock, 0.49+0/0/0+0 ms cpu, 7->7->4 MB, 8 MB goal, 8 P
gc 6 @18445023415.366s 0%: 0+0.50+0.49 ms clock, 0+0/0/0+0.49 ms cpu, 8->9->6 MB, 9 MB goal, 8 P
gc 7 @18445023415.371s 0%: 0+0.50+0.49 ms clock, 0+0/1.0/0+0.49 ms cpu, 11->11->6 MB, 12 MB goal, 8 P
gc 8 @18445023415.375s 0%: 0+0.50+0 ms clock, 0+0/0/0+0 ms cpu, 12->12->8 MB, 13 MB goal, 8 P
gc 9 @18445023415.383s 0%: 0.50+1.5+0 ms clock, 1.0+0/0.50/0+0 ms cpu, 15->17->11 MB, 16 MB goal, 8 P
gc 10 @18445023415.394s 0%: 0+2.4+0 ms clock, 0+0/0/0+0 ms cpu, 23->24->15 MB, 24 MB goal, 8 P
gc 11 @18445023415.409s 0%: 5.5+1.0+5.9 ms clock, 11+0/0.49/0+11 ms cpu, 28->29->18 MB, 30 MB goal, 8 P
gc 12 @18445023415.431s 0%: 2.9+5.5+0 ms clock, 5.9+0.49/0/0+0 ms cpu, 34->35->21 MB, 37 MB goal, 8 P
gc 13 @18445023415.459s 0%: 5.0+2.9+1.0 ms clock, 5.0+0.50/0.50/0+1.0 ms cpu, 42->42->26 MB, 43 MB goal, 8 P
gc 14 @18445023415.500s 0%: 0+0+0 ms clock, 0+0/0/0+0 ms cpu, 50->50->30 MB, 52 MB goal, 8 P
gc 15 @18445023415.532s 0%: 0.99+1.0+1.9 ms clock, 0.99+0.50/0/0+1.9 ms cpu, 60->60->39 MB, 61 MB goal, 8 P
gc 16 @18445023415.575s 0%: 0.49+2.0+0 ms clock, 0.49+0/0/0+0 ms cpu, 77->77->51 MB, 79 MB goal, 8 P
gc 17 @18445023415.634s 0%: 0+6.5+0.50 ms clock, 0+0.50/2.0/0+1.5 ms cpu, 99->99->65 MB, 102 MB goal, 8 P
gc 18 @18445023415.701s 0%: 5.0+1.4+0 ms clock, 5.0+0/0/0+0 ms cpu, 128->130->85 MB, 131 MB goal, 8 P
gc 19 @18445023415.797s 0%: 3.0+2.9+0.50 ms clock, 3.0+0.50/0.50/0+0.50 ms cpu, 166->166->104 MB, 170 MB goal, 8 P
gc 20 @18445023415.911s 0%: 4.4+2.0+0 ms clock, 4.4+0/1.0/0+0 ms cpu, 204->204->131 MB, 209 MB goal, 8 P
gc 21 @18445023416.046s 0%: 0.50+7.5+0.99 ms clock, 0.50+0/5.0/0+0.99 ms cpu, 257->258->171 MB, 263 MB goal, 8 P
gc 22 @18445023416.248s 0%: 0+5.4+0 ms clock, 0+0/4.4/0+0 ms cpu, 334->334->220 MB, 342 MB goal, 8 P
gc 23 @18445023416.479s 0%: 0.50+21+0 ms clock, 0.50+0/1.4/0+0 ms cpu, 430->436->291 MB, 441 MB goal, 8 P
gc 24 @18445023416.783s 0%: 0+1.4+5.5 ms clock, 0+0/2.9/0+5.5 ms cpu, 568->568->371 MB, 582 MB goal, 8 P
gc 25 @18445023417.139s 0%: 1.0+11+0.50 ms clock, 2.0+0.50/3.9/0+1.0 ms cpu, 724->726->477 MB, 743 MB goal, 8 P
gc 26 @18445023417.630s 0%: 0+12+11 ms clock, 0+2.4/2.9/0+23 ms cpu, 931->938->609 MB, 955 MB goal, 8 P
gc 27 @18445023418.257s 0%: 0+10+0.50 ms clock, 0+0/5.5/0+0.50 ms cpu, 1188->1190->775 MB, 1218 MB goal, 8 P
gc 28 @18445023419.034s 0%: 0.50+7.9+3.0 ms clock, 1.5+0.50/7.5/0+9.0 ms cpu, 1512->1513->1002 MB, 1550 MB goal, 8 P
gc 29 @18445023419.759s 0%: 0.50+10+2.9 ms clock, 0.50+1.0/8.5/0.50+2.9 ms cpu, 1677->1678->1210 MB, 2005 MB goal, 8 P (forced)
1 minutes has passed
2 minutes has passed
GC forced
gc 30 @18445023539.790s 0%: 0+0.99+0 ms clock, 0+0/0.99/2.4+0 ms cpu, 1221->1221->1 MB, 2421 MB goal, 8 P
scvg0: inuse: 2, idle: 1678, sys: 1680, released: 0, consumed: 1680 (MB)
3 minutes has passed
4 minutes has passed
5 minutes has passed
GC forced
gc 31 @18445023719.791s 0%: 0+0.99+0 ms clock, 0+0/1.5/0+0 ms cpu, 1->1->1 MB, 4 MB goal, 8 P
6 minutes has passed
scvg1: 8 MB released
scvg1: inuse: 2, idle: 1678, sys: 1680, released: 8, consumed: 1672 (MB)
7 minutes has passed
8 minutes has passed
GC forced
gc 32 @18445023899.792s 0%: 0+1.0+0 ms clock, 0+0/1.5/0+0 ms cpu, 1->1->1 MB, 4 MB goal, 8 P
scvg2: 1625 MB released
scvg2: inuse: 2, idle: 1678, sys: 1680, released: 1633, consumed: 47 (MB)
9 minutes has passed
GC forced
10 minutes has passed
gc 33 @18445024019.921s 0%: 0+1.0+0 ms clock, 0+0/0/1.0+0 ms cpu, 1->1->1 MB, 4 MB goal, 8 P
11 minutes has passed

@alexbrainman

Member

commented Oct 31, 2017

The fix is only marginally better.

I suggest you use VMMap to see where all that memory goes. Please report back.

I'm seeing it vary between 70 MB and 1.5 GB of memory. I can only get to 1.5 GB of memory if I run two of the memoryLeak.exe at the same time.

I do not know why. Use VMMap to answer all these questions. When I ran your program here, I saw: 1) memory allocated by the Go GC; 2) memory allocated by Windows thread stacks. 1 should go away (I saw it go away) once the GC reports "scvg2: 1625 MB released". 2 should be much smaller (see the "committed" column) once you apply CL 74490 - I see about 16-20K per thread.

Interestingly, today Go is eventually cleaning up after itself, but not until somewhere between 8 and 10 minutes.

Yes, that is what I saw too. So once you apply CL 74490 and wait for 10 minutes, most of your memory should be free.

Is there a way I can trick the GC to clean up sooner? I put runtime.GC() in the code, but it doesn't seem to do anything.

I do not know, but others might help. Reading the source code, I can see that runtime.GC() does not release memory to the OS; it keeps it handy until the scavenger decides it should be released to the OS.

It appears that the scavenger doesn't notice that it has an extra 1686 and 1625 MB of memory until 8-10 minutes into program execution. Once it reports this extra memory, the OS dutifully cleans up. Why does Go keep such a messy room for so long?

I guess Go's GC assumes that the program might reuse the memory, so it keeps it around for a few minutes. Your program uses a lot of memory at the start, and then sits and does nothing till the end - normal programs do not do that.

Here is the full gctrace for the memoryLeak.exe and memoryLeak_fix.exe (compiled using the recent fix). I ran these concurrent with one another from the command Line.

I would like you to use VMMap to see how much "private" memory is left after 10 minutes, and compare that with the same figure before CL 74490.

Alex

@ianlancetaylor

Contributor

commented Oct 31, 2017

You can clean up sooner by calling https://golang.org/pkg/runtime/debug/#FreeOSMemory .

@aclements

Member

commented Oct 31, 2017

Interestingly, today Go is eventually cleaning up after itself, but not until somewhere between 8 and 10 minutes.

That means we've at least fixed the thread stacks problem because those wouldn't be cleaned up by the scavenger.

Is there a way I can trick the GC to clean up sooner?

debug.FreeOSMemory

Why does Go keep such a messy room for so long?

There are a few reasons: it costs to release memory and later re-acquire it from the OS, applications don't typically have spikes like this (though certainly plenty do), and if you're running in a server/container environment you often have memory dedicated to your job so it's pointless to return the memory.

That said, I think this is mostly wrong. :) Rewriting the scavenger and releasing memory much more aggressively has been on my list of things to do for a while, but it doesn't actually come up all that often so it hasn't been a high priority.

@ghost


commented Oct 31, 2017

@alexbrainman this is a "normal" program even if the test isn't. The application I'm building is like a job scheduler. My program runs a web server which kicks off background workers to perform work on request. Prior to becoming a Gopher, I would've written this in C# and I wouldn't be having this problem. C# schedules background workers as separate threads which terminate when they complete processing. In C# when the processing is done, all the memory would be cleaned up when the threads are killed. In Go, we don't have that luxury so that's all the more reason why Garbage Collection and the scavenger must be good at cleaning up after themselves. I've downloaded VMMap and will run it and report back later today. BTW, I've also made a small modification to my test program so that it's more "realistic". It now kicks off another background job every minute and now Go never releases the memory.

package main

import (
	"bytes"
	"fmt"
	"io/ioutil"
	"math/rand"
	"time"
)

func main() {
	for i := 1; i < 11; i++ {
		for j := 0; j < 1000; j++ {
			time.Sleep(time.Millisecond * 1)
			go fakeGetAndSaveData()
		}
		time.Sleep(1 * time.Minute)
		fmt.Printf("%d minutes has passed\n", i)
	}
}

func fakeGetAndSaveData() {
	var buf bytes.Buffer
	for i := 0; i < 40000; i++ {
		buf.WriteString(fmt.Sprintf("the number is %d\n", i))
	}

	ioutil.WriteFile(fmt.Sprintf("%d.txt", rand.Int()), buf.Bytes(), 0644)
}

Is this the desired behavior? In the world of CPU and memory, 1 minute is a long time. 5 minutes seems like forever and 8-10 minutes feels like an eternity. 8 minutes on a quad core 4 GHz processor is 7.7 trillion operations later. Having to have a program be completely idle for 8+ minutes before cleaning up seems way too long to me.

@aclements

Member

commented Oct 31, 2017

Is this the desired behavior?

No, it's not. But see my earlier comment.

However, if your application's heap size is going to be spiking regularly, do you have something else running that would be able to take advantage of that idle memory just between those spikes? This resource management problem is bigger than a single Go process releasing memory.

@alexbrainman

Member

commented Oct 31, 2017

@alexbrainman this is a "normal" program even if the test isn't.

The code you have shown us should use very little memory after you apply CL 74490 and wait for 8-10 minutes. If you see differently, please use VMMap, and tell us where the memory is.

My program runs a web server which kicks off background workers to perform work on request. Prior to becoming a Gopher, I would've written this in C# and I wouldn't be having this problem.

It is difficult to speculate about what your problem is. Please show your code and we will investigate (perhaps in a new issue).

Thank you

Alex

@ghost


commented Nov 2, 2017

Thanks @aclements and @alexbrainman . Your comments have caused me to think differently about this problem. I've never had such a massively parallel program as the one I'm working on and I realize after messing with this little toy project that I was thinking of memory incorrectly. I may not have any "leaks" in my code, but if I have 1000 things all asking for memory at the same time, Go will dutifully request 1000 different small pieces of memory at the same time which will lead to a lot of memory usage. Such a program that requests a lot of data in parallel just takes more resources than I anticipated. The fact that Go holds on to the memory for so long was confusing, but I understand why now thanks to @aclements comments. I do look forward to the changes @aclements plans to make to the scavenger though and I'd love to test those when ready.

@gopherbot gopherbot closed this in 923299a Nov 3, 2017

@golang golang locked and limited conversation to collaborators Nov 3, 2018
