GOGC=40 default causes performance issues #2665
Comments
I did those tests with 1.6 before releasing. Obviously, the CPU load increased as well. I guess that if you are CPU-bound, this will negatively affect queries, which will certainly be true for some users. However, in 1.x, everything that is managed by the page cache in 2.x is on the heap, so the improvement in memory utilization is huge. (I'm planning to post something about it.) We should definitely document clearly the option to tweak GOGC.
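For context, a minimal sketch (not Prometheus code, just the standard library) of what tweaking GOGC amounts to: the Go runtime reads the GOGC environment variable at startup, and the same knob is exposed programmatically as runtime/debug.SetGCPercent.

```go
// Minimal sketch: GOGC=100 is the Go default, meaning a collection is
// targeted once the heap has grown 100% over the live data left by the
// previous cycle. Lowering it (e.g. GOGC=40) trades extra GC CPU for a
// lower heap ceiling.
package main

import (
	"fmt"
	"os"
	"runtime/debug"
	"strconv"
)

func main() {
	// The runtime already honors GOGC at startup; mirroring it here only
	// makes the effective value explicit so it can be logged or adjusted.
	target := 100
	if v, err := strconv.Atoi(os.Getenv("GOGC")); err == nil {
		target = v
	}
	previous := debug.SetGCPercent(target)
	fmt.Printf("GC target percentage: %d (was %d)\n", target, previous)
}
```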
In case it was ambiguous before, both of the servers graphed here are running 1.6.1.
Gotcha! That puts things into perspective. I'll leave my thoughts here in a few moments. (Sorry for the delays; I'm caught up in a conference and jet lag...)
Here are my thoughts: The graphs posted above are for test servers that haven't reached steady state yet, i.e. the heap size is still far away from the configured target heap size, and also fairly small, at around 1.5GiB by the end of the test run. Also, the heap size graph uses HeapInuse bytes, while the graphs in #2528 use HeapAlloc and RSS. This has a number of implications:
In summary: Should you ever be CPU-bound instead of memory-bound, you should increase GOGC. But that is true even for GOGC=100. As said, the rationale really boils down to re-creating the usual state with predominantly short-lived heap allocations. But that rationale only holds if 60% of the heap allocations are long-lived, i.e. only in steady state.
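To make the metric distinction and the pacing argument concrete, here is a small standalone sketch (independent of the test setup above) that prints the runtime fields in question. HeapAlloc counts bytes of allocated heap objects, HeapInuse counts heap spans in use (so it also reflects fragmentation), and NextGC is the heap size at which the next collection is targeted, roughly live heap × (1 + GOGC/100), which is where both the memory savings and the extra GC CPU of GOGC=40 come from.

```go
// Small sketch, standard library only: print the heap metrics discussed in
// this thread and the GC target implied by the current GOGC setting.
package main

import (
	"fmt"
	"runtime"
)

func main() {
	var m runtime.MemStats
	runtime.ReadMemStats(&m)
	fmt.Printf("HeapAlloc: %d bytes (allocated heap objects)\n", m.HeapAlloc)
	fmt.Printf("HeapInuse: %d bytes (heap spans in use, includes fragmentation)\n", m.HeapInuse)
	fmt.Printf("NextGC:    %d bytes (heap size at which the next GC is targeted)\n", m.NextGC)
}
```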
Another point: The first graph above plots
beorn7 referenced this issue on May 4, 2017: Operating: Document effects of higher GOGC values #730 (closed)
RichiH referenced this issue on May 5, 2017: content/docs/operating/storage.md: Clarify GOGC #731 (merged)
brian-brazil added the priority/Pmaybe and component/local storage labels on Jul 14, 2017
With the 2.0.0 release imminent, I don't think it's worth investing more research into this topic.
beorn7 closed this on Nov 6, 2017
lock bot commented on Mar 23, 2019: This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.


fabxc commented on Apr 28, 2017 (edited)
So the GOGC=40 default made it into v2.0.0-alpha.0 by accident. I removed it again but forgot about it. What followed was a 6h journey of figuring out why my seemingly irrelevant changes in dev-2.0 made it perform a lot better than v2.0.0-alpha.0 in prombench.
Eventually I remembered the GOGC setting.
I thought it would be valuable to verify whether this is a 2.0-specific thing or not, so I just used my typical prombench setup that also tests the read path.
Two v1.6.1 servers with a 10GB target heap size were run, one with GOGC=40 and one with GOGC=100.
Even after turning queries off, the memory savings stayed about the same but so did the increased CPU load, even if the ratio shrank a little bit.
The graphs lead me to believe that the drawbacks of increased CPU load and query latency outweigh the slight memory savings by a large margin. We might want to consider setting it back to the default and instead documenting that users can adjust it themselves if they are willing to accept the tradeoffs.
The results reported in #2528 suggest something very different. I'm not quite sure what causes the large difference. The test deployment is by no means crazy at about 80k samples/sec and the ratios did not change after pod scaling happened.
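For anyone who wants to reproduce a comparison along these lines, here is a hedged sketch of launching two otherwise identical servers that differ only in GOGC. The binary path and flag are placeholders, not the actual prombench configuration.

```go
// Hypothetical A/B launcher: two identical processes, one with GOGC=40 and
// one with GOGC=100. "./prometheus" and "-config.file" are stand-ins for
// whatever binary and flags the real benchmark uses; in practice the two
// instances would also need separate ports and storage directories.
package main

import (
	"log"
	"os"
	"os/exec"
)

func launch(gogc string) *exec.Cmd {
	cmd := exec.Command("./prometheus", "-config.file=prometheus.yml")
	// Entries appended later take precedence, so this overrides any GOGC
	// already present in the parent environment.
	cmd.Env = append(os.Environ(), "GOGC="+gogc)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Start(); err != nil {
		log.Fatalf("starting server with GOGC=%s: %v", gogc, err)
	}
	return cmd
}

func main() {
	a := launch("40")
	b := launch("100")
	_ = a.Wait()
	_ = b.Wait()
}
```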