@ysmolsky We experienced similarly sized pauses during the mark phase. We tested heaps of size
We saw a lot of 5-20ms pauses (where a pause is measured as a period in which no network events are recorded in the trace). Larger heap sizes saw a greater performance impact, but that appears to be simply because the mark phase ran for much longer.
In particular, we saw a huge number of Idle-GC slices in the trace. If I were to guess, I would say the scheduler was struggling to schedule all of the Idle-GC goroutines as well as schedule meaningful work.
If you request access to the google doc, I will grant it as soon as I can.
We saw no mark assists in the trace. I specifically looked for them, because that was my first thought.
The way we forced the GC to run during the trace makes assists even less likely: a GC cycle started early is less likely to conclude that the mutators are outpacing it and request assists (that is my understanding).
The unusual feature of this test is that the system was running on a machine with 48 cores with GOMAXPROCS unset. It looks like the runtime was trying to use all the cores during GC, but most GC slices are idle.
The blog post has been published.
If anything written there is incorrect I am happy to make edits.
Should be easier to access than the google doc.