runtime: eliminate stack rescanning #17503

aclements opened this Issue Oct 18, 2016 · 40 comments

Member

aclements commented Oct 18, 2016 (edited)

One of the largest remaining contributors to GC STW time is stack rescanning. I have an approach for eliminating this entirely. This is a tracking bug for implementing this approach.

I will upload a design document and proof soon, and I have a working implementation that I plan to have cleaned up and mailed out in a day or two.

I'm marking this Go 1.9. My current plan is to get the change in for Go 1.8, but have a GODEBUG flag to fall back to the current algorithm for debugging purposes (and in case something goes wrong). Assuming things go smoothly, we'll actually rip out the stack rescanning code when Go 1.9 opens.

Edit: Design doc

Edit: Things to follow up on in Go 1.9/1.10:

  • Remove stack rescanning [done]
  • Remove (or replace) stack barriers and delete TestStackBarrierProfiling [done]
  • Remove debug.gcrescanstacks
  • Fix early mark termination race
  • Remove work draining from mark termination and work.helperDrainBlock
  • Revisit 100us wait in stopTheWorldWithSema
  • Revisit making the second shade conditional (and the condition for channel ops)

/cc @RLH

aclements added this to the Go1.9Early milestone Oct 18, 2016

aclements self-assigned this Oct 18, 2016

CL https://golang.org/cl/31362 mentions this issue.

CL https://golang.org/cl/31450 mentions this issue.

CL https://golang.org/cl/31451 mentions this issue.

CL https://golang.org/cl/31369 mentions this issue.

CL https://golang.org/cl/31453 mentions this issue.

CL https://golang.org/cl/31452 mentions this issue.

CL https://golang.org/cl/31457 mentions this issue.

CL https://golang.org/cl/31454 mentions this issue.

CL https://golang.org/cl/31367 mentions this issue.

CL https://golang.org/cl/31368 mentions this issue.

CL https://golang.org/cl/31456 mentions this issue.

CL https://golang.org/cl/31455 mentions this issue.

CL https://golang.org/cl/31366 mentions this issue.

CL https://golang.org/cl/31550 mentions this issue.

CL https://golang.org/cl/31572 mentions this issue.

CL https://golang.org/cl/31570 mentions this issue.

CL https://golang.org/cl/31571 mentions this issue.

CL https://golang.org/cl/31655 mentions this issue.

We used the "double barrier" to address stack scanning issues in the IBM J9 implementation of Metronome. It's in section 4.3 of "Design and implementation of a comprehensive real-time java virtual machine" by Auerbach et al, section 4.3. In that case we were incrementalizing over many thread's stacks, as opposed to the individual stack, but it solves the same problem.

It worked well, although the extra barrier overhead was annoying. Let me know if you have any questions.

david (dfb@google.com)

Member

aclements commented Oct 21, 2016

@davidfbacon, thanks for the reference! Indeed, that looks like the same barrier design. It's good to know that it worked well for Metronome.

I'll update the proposal document to add a citation.

@gopherbot gopherbot pushed a commit to golang/proposal that referenced this issue Oct 21, 2016

@aclements aclements design: design doc for eliminating stack re-scanning
Updates golang/go#17503.

Change-Id: Ib635a49f9fde36493ba98a6d87b9c1dd114e0c7d
Reviewed-on: https://go-review.googlesource.com/31362
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
a1bce41
Contributor

rasky commented Oct 22, 2016

@aclements any numbers on the performance impact of the new barrier? Also, are they harder to eliminate at compile time?

Member

aclements commented Oct 23, 2016

@rasky, it's about a 1.7% performance hit on the x/benchmarks garbage benchmark (which, as the name suggests, is designed to hammer the garbage collector). I haven't checked, but I suspect we've gained more than that from other optimizations since Go 1.7.

They are harder to eliminate at compile time. I completely disabled the optimizations that don't carry over directly to the hybrid barrier and binaries got about 1% larger. We could eliminate some of these, but it requires flow analysis and the current insertion code doesn't do any flow analysis. OTOH, the places where we can eliminate write barriers with the current write barrier aren't all that common anyway (how often do you write the address of a global to something?), so we're not losing much.
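
To illustrate, here is a toy sketch (an assumed example, not code from any of these CLs) of the kind of store being described, writing the address of a global into a heap object:

```go
package main

// A minimal sketch of the case discussed above: storing the address of a
// global into a heap object. The old insertion-only barrier could skip the
// barrier here because globals are always reachable; the hybrid barrier must
// also shade whatever sink.p pointed to before the store, and without flow
// analysis the compiler cannot prove that is safe to skip, so the barrier
// stays. (Toy illustration only.)
var global int

type frame struct{ p *int }

var sink = &frame{}

func main() {
	sink.p = &global // heap pointer store: goes through the write barrier
}
```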

@gopherbot gopherbot pushed a commit that referenced this issue Oct 24, 2016

@aclements aclements runtime: make morestack less subtle
morestack writes the context pointer to gobuf.ctxt, but since
morestack is written in assembly (and has to be very careful with
state), it does *not* invoke the requisite write barrier for this
write. Instead, we patch this up later, in newstack, where we invoke
an explicit write barrier for ctxt.

This already requires some subtle reasoning, and it's going to get a
lot hairier with the hybrid barrier.

Fix this by simplifying the whole mechanism. Instead of writing
gobuf.ctxt in morestack, just pass the value of the context register
to newstack and let it write it to gobuf.ctxt. This is a normal Go
pointer write, so it gets the normal Go write barrier. No subtle
reasoning required.

Updates #17503.

Change-Id: Ia6bf8459bfefc6828f53682ade32c02412e4db63
Reviewed-on: https://go-review.googlesource.com/31550
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
bf9c71c

CL https://golang.org/cl/31764 mentions this issue.

CL https://golang.org/cl/31766 mentions this issue.

CL https://golang.org/cl/31765 mentions this issue.

CL https://golang.org/cl/31763 mentions this issue.

CL https://golang.org/cl/31820 mentions this issue.

CL https://golang.org/cl/31890 mentions this issue.

Contributor

RLH commented Oct 25, 2016

Thanks for the clue. The design doc has been updated. It's comforting to know that this path has been traveled and no gotchas materialized.


CL https://golang.org/cl/32033 mentions this issue.

Contributor

cherrymui commented Oct 26, 2016

Thank you for the design doc. It looks really great!

For channel operations, the shade(ptr) is necessary if either the source stack or the destination stack is grey.

I'm wondering whether this is necessary.

  • If the source stack is black, ptr cannot be an unprotected white pointer, so there is no need to shade it.
  • If the destination stack is grey, ptr will be scanned eventually when the destination stack becomes black (or ptr becomes dead before then), so there is no need to shade it.

So maybe we only need to shade(ptr) when the source stack is grey and the destination stack is black. Of course it is ok to just conservatively shade it in Go 1.8.
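
A runnable toy sketch of that condition, using hypothetical names (the runtime's real shade works on heap objects and mark work queues, not this stub):

```go
package main

import "fmt"

// shade is a stand-in for greying an object on the GC work queue.
func shade(ptr uintptr) { fmt.Printf("shade %#x\n", ptr) }

// chanShade sketches the proposed condition: the extra shade on a channel
// send is only needed when the pointer might be an unprotected white pointer
// (grey source stack) that the receiver will never rescan (black destination
// stack). In every other combination the pointer is already protected or
// will be scanned when the destination stack is blackened.
func chanShade(srcStackGrey, dstStackBlack bool, ptr uintptr) {
	if srcStackGrey && dstStackBlack {
		shade(ptr)
	}
}

func main() {
	chanShade(true, true, 0x1000)  // the one case that needs shading
	chanShade(false, true, 0x2000) // black source: ptr cannot be white
	chanShade(true, false, 0x3000) // grey destination: scanned later anyway
}
```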

@gopherbot gopherbot pushed a commit that referenced this issue Oct 26, 2016

@aclements aclements runtime: simplify reflectcall write barriers
Currently reflectcall has a subtle dance with write barriers where the
assembly code copies the result values from the stack to the in-heap
argument frame without write barriers and then calls into the runtime
after the fact to invoke the necessary write barriers.

For the hybrid barrier (and for ROC), we need to switch to a
*pre*-write write barrier, which is very difficult to do with the
current setup. We could tie ourselves in knots of subtle reasoning
about why it's okay in this particular case to have a post-write write
barrier, but this commit instead takes a different approach. Rather
than making things more complex, this simplifies reflection calls so
that the argument copy is done in Go using normal bulk write barriers.

The one difficulty with this approach is that calling into Go requires
putting arguments on the stack, but the call* functions "donate" their
entire stack frame to the called function. We can get away with this
now because the copy avoids using the stack and has copied the results
out before we clobber the stack frame to call into the write barrier.
The solution in this CL is to call another function, passing arguments
in registers instead of on the stack, and let that other function
reserve more stack space and set up the arguments for the runtime.

This approach seemed to work out the best. I also tried making the
call* functions reserve 32 extra bytes of frame for the write barrier
arguments and adjust SP up by 32 bytes around the call. However, even
with the necessary changes to the assembler to correct the spdelta
table, the runtime was still having trouble with the frame layout (and
the changes to the assembler caused many other things that do strange
things with the SP to fail to assemble). The approach I took doesn't
require any funny business with the SP.

Updates #17503.

Change-Id: Ie2bb0084b24d6cff38b5afb218b9e0534ad2119e
Reviewed-on: https://go-review.googlesource.com/31655
Run-TryBot: Austin Clements <austin@google.com>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
79561a8

@gopherbot gopherbot pushed a commit that referenced this issue Oct 26, 2016

@aclements aclements runtime: scan mark worker stacks like normal
Currently, markroot delays scanning mark worker stacks until mark
termination by putting the mark worker G directly on the rescan list
when it encounters one during the mark phase. Without this, since mark
workers are non-preemptible, two mark workers that attempt to scan
each other's stacks can deadlock.

However, this is annoyingly asymmetric and causes some real problems.
First, markroot does not own the G at that point, so it's not
technically safe to add it to the rescan list. I haven't been able to
find a specific problem this could cause, but I suspect it's the root
cause of issue #17099. Second, this will interfere with the hybrid
barrier, since there is no stack rescanning during mark termination
with the hybrid barrier.

This commit switches to a different approach. We move the mark
worker's call to gcDrain to the system stack and set the mark worker's
status to _Gwaiting for the duration of the drain to indicate that
it's preemptible. This lets another mark worker scan its G stack while
the drain is running on the system stack. We don't return to the G
stack until we can switch back to _Grunning, which ensures we don't
race with a stack scan. This lets us eliminate the special case for
mark worker stack scans and scan them just like any other goroutine.
The only subtlety to this approach is that we have to disable stack
shrinking for mark workers; they could be referring to captured
variables from the G stack, so it's not safe to move their stacks.

Updates #17099 and #17503.

Change-Id: Ia5213949ec470af63e24dfce01df357c12adbbea
Reviewed-on: https://go-review.googlesource.com/31820
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
d6625ca

CL https://golang.org/cl/32093 mentions this issue.

CL https://golang.org/cl/32095 mentions this issue.

CL https://golang.org/cl/32186 mentions this issue.

@gopherbot gopherbot pushed a commit that referenced this issue Oct 28, 2016

@aclements aclements runtime: parallelize STW mcache flushing
Currently all mcaches are flushed in a single STW root job. This takes
about 5 µs per P, but since it's done sequentially it adds about
5*GOMAXPROCS µs to the STW.

Fix this by parallelizing the job. Since there are exactly GOMAXPROCS
mcaches to flush, this parallelizes quite nicely and brings the STW
latency cost down to a constant 5 µs (assuming GOMAXPROCS actually
reflects the number of CPUs).

Updates #17503.

Change-Id: Ibefeb1c2229975d5137c6e67fac3b6c92103742d
Reviewed-on: https://go-review.googlesource.com/32033
Reviewed-by: Rick Hudson <rlh@golang.org>
a475a38

@gopherbot gopherbot pushed a commit that referenced this issue Oct 28, 2016

@aclements aclements runtime: use typedmemclr for typed memory
The hybrid barrier requires distinguishing typed and untyped memory
even when zeroing because the *current* contents of the memory matters
even when overwriting.

This commit introduces runtime.typedmemclr and runtime.memclrHasPointers
as typed memory clearing functions parallel to runtime.typedmemmove.
Currently these simply call memclr, but with the hybrid barrier we'll
need to shade any pointers we're overwriting. These will provide us
with the necessary hooks to do so.

Updates #17503.

Change-Id: I74478619f8907825898092aaa204d6e4690f27e6
Reviewed-on: https://go-review.googlesource.com/31366
Reviewed-by: Keith Randall <khr@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
aa581f5
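
As a toy illustration of why the current contents matter when zeroing under a deletion-style barrier, here is a sketch (not runtime code) of clearing a single pointer slot:

```go
package main

// Toy model of typed clearing under a deletion-style barrier. The real
// runtime.typedmemclr works on raw memory using type bitmaps; this only
// shows the ordering: shade the old pointee, then zero the slot.
type obj struct{ marked bool }

func shade(o *obj) {
	if o != nil {
		o.marked = true // in the runtime: grey the object for this GC cycle
	}
}

// typedClear zeroes a pointer slot the "typed" way.
func typedClear(slot **obj) {
	shade(*slot) // the *current* contents matter even though we overwrite them
	*slot = nil
}

func main() {
	o := &obj{}
	p := o
	typedClear(&p)
	_ = o.marked // true: o stays protected even though p no longer points to it
}
```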

@gopherbot gopherbot pushed a commit that referenced this issue Oct 28, 2016

@aclements aclements runtime: make _MSpanDead be the zero state
Currently the zero value of an mspan is in the "in use" state. This
seems like a bad idea in general. But it's going to wreak havoc when
we make fixalloc zero allocations: even "freed" mspan objects are
still on the allspans list and still get looked at by the garbage
collector. Hence, if we leave the mspan states the way they are,
allocating a span that reuses old memory will temporarily pass that
span (which is visible to GC!) through the "in use" state, which can
cause "unswept span" panics.

Fix all of this by making the zero state "dead".

Updates #17503.

Change-Id: I77c7ac06e297af4b9e6258bc091c37abe102acc3
Reviewed-on: https://go-review.googlesource.com/31367
Reviewed-by: Keith Randall <khr@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
f4dcc9b

@gopherbot gopherbot pushed a commit that referenced this issue Oct 28, 2016

@aclements aclements runtime: make fixalloc zero allocations on reuse
Currently fixalloc does not zero memory it reuses. This is dangerous
with the hybrid barrier if the type may contain heap pointers, since
it may cause us to observe a dead heap pointer on reuse. It's also
error-prone since it's the only allocator that doesn't zero on
allocation (mallocgc of course zeroes, but so do persistentalloc and
sysAlloc). It's also largely pointless: for mcache, the caller
immediately memclrs the allocation; and the two specials types are
tiny so there's no real cost to zeroing them.

Change fixalloc to zero allocations by default.

The only type we don't zero by default is mspan. This actually
requires that the span's sweepgen survive across freeing and
reallocating a span. If we were to zero it, the following race would
be possible:

1. The current sweepgen is 2. Span s is on the unswept list.

2. Direct sweeping sweeps span s, finds it's all free, and releases s
   to the fixalloc.

3. Thread 1 allocates s from fixalloc. Suppose this zeros s, including
   s.sweepgen.

4. Thread 1 calls s.init, which sets s.state to _MSpanDead.

5. On thread 2, background sweeping comes across span s in allspans
   and cas's s.sweepgen from 0 (sg-2) to 1 (sg-1). Now it thinks it
   owns it for sweeping.

6. Thread 1 continues initializing s. Everything breaks.

I would like to fix this because it's obviously confusing, but it's a
subtle enough problem that I'm leaving it alone for now. The solution
may be to skip sweepgen 0, but then we have to think about wrap-around
much more carefully.

Updates #17503.

Change-Id: Ie08691feed3abbb06a31381b94beb0a2e36a0613
Reviewed-on: https://go-review.googlesource.com/31368
Reviewed-by: Keith Randall <khr@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
ae3bb4a

@gopherbot gopherbot pushed a commit that referenced this issue Oct 28, 2016

@aclements aclements runtime, cmd/compile: rename memclr -> memclrNoHeapPointers
Since barrier-less memclr is only safe in very narrow circumstances,
this commit renames memclr to avoid accidentally calling memclr on
typed memory. This can cause subtle, non-deterministic bugs, so it's
worth some effort to prevent. In the near term, this will also prevent
bugs creeping in from any concurrent CLs that add calls to memclr; if
this happens, whichever patch hits master second will fail to compile.

This also adds the other new memclr variants to the compiler's
builtin.go to minimize the churn on that binary blob. We'll use these
in future commits.

Updates #17503.

Change-Id: I00eead049f5bd35ca107ea525966831f3d1ed9ca
Reviewed-on: https://go-review.googlesource.com/31369
Reviewed-by: Keith Randall <khr@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
87e48c5

@gopherbot gopherbot pushed a commit that referenced this issue Oct 28, 2016

@aclements aclements cmd/compile: lower slice clears to memclrHasPointers
If a slice's backing store has pointers, we need to lower clears of
that slice to memclrHasPointers instead of memclrNoHeapPointers.

Updates #17503.

Change-Id: I20750e4bf57f7b8862f3d898bfb32d964b91d07b
Reviewed-on: https://go-review.googlesource.com/31450
Reviewed-by: Keith Randall <khr@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
58e2eda
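
At the source level this corresponds to whether the slice's element type contains pointers; a small sketch of the two cases:

```go
package main

// Two slice-clearing loops that the compiler lowers differently: the byte
// clear can become a plain memory clear (memclrNoHeapPointers), while the
// pointer clear must go through memclrHasPointers so old pointees can be
// shaded. (The memclr* functions are runtime-internal; user code never
// calls them directly.)
func main() {
	raw := make([]byte, 1024)
	ptrs := make([]*int, 1024)

	for i := range raw { // element type has no heap pointers
		raw[i] = 0
	}
	for i := range ptrs { // elements are heap pointers
		ptrs[i] = nil
	}
	_, _ = raw, ptrs
}
```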

@gopherbot gopherbot pushed a commit that referenced this issue Oct 28, 2016

@aclements aclements cmd/compile: use typedmemclr for zeroing if there are pointers
Currently, zeroing generates an ssa.OpZero, which never has write
barriers, even if the assignment is an OASWB. The hybrid barrier
requires write barriers on zeroing, so change OASWB to generate an
ssa.OpZeroWB when assigning the zero value, which turns into a
typedmemclr.

Updates #17503.

Change-Id: Ib37ac5e39f578447dbd6b36a6a54117d5624784d
Reviewed-on: https://go-review.googlesource.com/31451
Reviewed-by: Cherry Zhang <cherryyz@google.com>
8a7f0ad

@gopherbot gopherbot pushed a commit that referenced this issue Oct 28, 2016

@aclements aclements runtime: document rules about unmanaged memory
Updates #17503.

Change-Id: I109d8742358ae983fdff3f3dbb7136973e81f4c3
Reviewed-on: https://go-review.googlesource.com/31452
Reviewed-by: Keith Randall <khr@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
c5e7006

@gopherbot gopherbot pushed a commit that referenced this issue Oct 28, 2016

@aclements aclements runtime: avoid write barriers to uninitialized finalizer frame memory
runfinq allocates a stack frame on the heap for constructing the
finalizer function calls and reuses it for each call. However, because
the type of this frame is constantly shifting, it tells mallocgc there
are no pointers in it and it acts essentially like uninitialized
memory between uses. But runfinq uses pointer writes with write
barriers to "initialize" this memory, which is not going to be safe
with the hybrid barrier, since the hybrid barrier may see a stale
pointer left in the "uninitialized" frame.

Fix this by zero-initializing the argument values in the frame before
writing the argument pointers.

Updates #17503.

Change-Id: I951c0a2be427eb9082a32d65c4410e6fdef041be
Reviewed-on: https://go-review.googlesource.com/31453
Reviewed-by: Rick Hudson <rlh@golang.org>
db56a63

@gopherbot gopherbot pushed a commit that referenced this issue Oct 28, 2016

@aclements aclements runtime: ensure finalizers are zero-initialized before reuse
We reuse finalizers in finblocks, which are allocated off-heap. This
means they have to be zero-initialized before becoming visible to the
garbage collector. We actually already do this by clearing the
finalizer before returning it to the pool, but we're not careful to
enforce correct memory ordering. Fix this by manipulating the
finalizer count atomically so these writes synchronize properly with
the garbage collector.

Updates #17503.

Change-Id: I7797d31df3c656c9fe654bc6da287f66a9e2037d
Reviewed-on: https://go-review.googlesource.com/31454
Reviewed-by: Rick Hudson <rlh@golang.org>
d3836ab

@gopherbot gopherbot pushed a commit that referenced this issue Oct 28, 2016

@aclements aclements runtime: zero-initialize LR on new stacks
Currently we initialize LR on a new stack by writing nil to it. But
this is an initializing write since the newly allocated stack is not
zeroed, so this is unsafe with the hybrid barrier. Change this to a
uintptr write to avoid a bad write barrier.

Updates #17503.

Change-Id: I062ac352e35df7da4644c1f2a5aaab87049d1f60
Reviewed-on: https://go-review.googlesource.com/32093
Reviewed-by: Rick Hudson <rlh@golang.org>
88518e7

@gopherbot gopherbot pushed a commit that referenced this issue Oct 28, 2016

@aclements aclements runtime: shade stack-to-stack copy when starting a goroutine
The hybrid barrier requires barriers on stack-to-stack copies if
either stack is grey. There are only two instances of this in the
runtime: channel sends and starting a goroutine. Channel sends already
use typedmemmove and hence have the necessary barriers. This commit
adds barriers for the stack-to-stack copy when starting a goroutine.

Updates #17503.

Change-Id: Ibb55e08127ca4d021ac54be61cb96732efa5df5b
Reviewed-on: https://go-review.googlesource.com/31455
Reviewed-by: Rick Hudson <rlh@golang.org>
ee785f0

@gopherbot gopherbot pushed a commit that referenced this issue Oct 28, 2016

@aclements aclements runtime: mark tiny blocks at GC start
The hybrid barrier requires allocate-black, but there's one case where
we don't currently allocate black: the tiny allocator. If we allocate
a *new* tiny alloc block during GC, it will be allocated black, but if
we allocated the current block before GC, it won't be black, and the
further allocations from it won't mark it, which means we may free a
reachable tiny block during sweeping.

Fix this by passing over all mcaches at the beginning of mark, while
the world is still stopped, and greying their tiny blocks.

Updates #17503.

Change-Id: I04d4df7cc2f553f8f7b1e4cb0b52e2946588111a
Reviewed-on: https://go-review.googlesource.com/31456
Reviewed-by: Rick Hudson <rlh@golang.org>
85c22bc

@gopherbot gopherbot pushed a commit that referenced this issue Oct 28, 2016

@aclements aclements runtime: eliminate write barriers from dropg
Currently this contains no write barriers because it's writing nil
pointers, but with the hybrid barrier, even these will produce write
barriers. However, since these are *gs and *ms, they don't need write
barriers, so we can simply eliminate them.

Updates #17503.

Change-Id: Ib188a60492c5cfb352814bf9b2bcb2941fb7d6c0
Reviewed-on: https://go-review.googlesource.com/31570
Reviewed-by: Rick Hudson <rlh@golang.org>
8044b77

@gopherbot gopherbot pushed a commit that referenced this issue Oct 28, 2016

@aclements aclements runtime: eliminate write barriers from save
As for dropg, save is writing a nil pointer that will generate a write
barrier with the hybrid barrier. However, in this case, ctxt always
should already be nil, so replace the write with an assertion that
this is the case.

At this point, we're ready to disable the write barrier elision
optimizations that interfere with the hybrid barrier.

Updates #17503.

Change-Id: I83208e65aa33403d442401f355b2e013ab9a50e9
Reviewed-on: https://go-review.googlesource.com/31571
Reviewed-by: Rick Hudson <rlh@golang.org>
c3163d2

@gopherbot gopherbot pushed a commit that referenced this issue Oct 28, 2016

@aclements aclements cmd/compile: disable various write barrier optimizations
Several of our current write barrier elision optimizations are invalid
with the hybrid barrier. Eliding the hybrid barrier requires that
*both* the current and new pointer be already shaded and, since we
don't have the flow analysis to figure out anything about the slot's
current value, for now we have to just disable several of these
optimizations.

This has a slight impact on binary size. On linux/amd64, the go tool
binary increases by 0.7% and the compile binary increases by 1.5%.

It also has a slight impact on performance, as one would expect. We'll
win some of this back in subsequent commits.

name                      old time/op    new time/op    delta
BinaryTree17-12              2.38s ± 1%     2.40s ± 1%  +0.82%  (p=0.000 n=18+20)
Fannkuch11-12                2.84s ± 1%     2.70s ± 0%  -4.97%  (p=0.000 n=18+18)
FmtFprintfEmpty-12          44.2ns ± 1%    46.4ns ± 2%  +4.89%  (p=0.000 n=16+18)
FmtFprintfString-12          131ns ± 0%     134ns ± 1%  +2.05%  (p=0.000 n=12+19)
FmtFprintfInt-12             114ns ± 1%     117ns ± 1%  +3.26%  (p=0.000 n=19+20)
FmtFprintfIntInt-12          176ns ± 1%     181ns ± 1%  +3.25%  (p=0.000 n=20+20)
FmtFprintfPrefixedInt-12     185ns ± 1%     190ns ± 1%  +2.77%  (p=0.000 n=19+18)
FmtFprintfFloat-12           249ns ± 1%     254ns ± 1%  +1.71%  (p=0.000 n=18+20)
FmtManyArgs-12               747ns ± 1%     743ns ± 1%  -0.58%  (p=0.000 n=19+18)
GobDecode-12                6.57ms ± 1%    6.61ms ± 0%  +0.73%  (p=0.000 n=19+20)
GobEncode-12                5.58ms ± 1%    5.60ms ± 0%  +0.27%  (p=0.001 n=18+18)
Gzip-12                      223ms ± 1%     223ms ± 1%    ~     (p=0.351 n=19+20)
Gunzip-12                   37.9ms ± 0%    37.9ms ± 1%    ~     (p=0.095 n=16+20)
HTTPClientServer-12         77.8µs ± 1%    78.5µs ± 1%  +0.97%  (p=0.000 n=19+20)
JSONEncode-12               14.8ms ± 1%    14.8ms ± 1%    ~     (p=0.079 n=20+19)
JSONDecode-12               53.7ms ± 1%    54.2ms ± 1%  +0.92%  (p=0.000 n=20+19)
Mandelbrot200-12            3.81ms ± 1%    3.81ms ± 0%    ~     (p=0.916 n=19+18)
GoParse-12                  3.19ms ± 1%    3.19ms ± 1%    ~     (p=0.175 n=20+19)
RegexpMatchEasy0_32-12      71.9ns ± 1%    70.6ns ± 1%  -1.87%  (p=0.000 n=19+20)
RegexpMatchEasy0_1K-12       946ns ± 0%     944ns ± 0%  -0.22%  (p=0.000 n=19+16)
RegexpMatchEasy1_32-12      67.3ns ± 2%    66.8ns ± 1%  -0.72%  (p=0.008 n=20+20)
RegexpMatchEasy1_1K-12       374ns ± 1%     384ns ± 1%  +2.69%  (p=0.000 n=18+20)
RegexpMatchMedium_32-12      107ns ± 1%     107ns ± 1%    ~     (p=1.000 n=20+20)
RegexpMatchMedium_1K-12     34.3µs ± 1%    34.6µs ± 1%  +0.90%  (p=0.000 n=20+20)
RegexpMatchHard_32-12       1.78µs ± 1%    1.80µs ± 1%  +1.45%  (p=0.000 n=20+19)
RegexpMatchHard_1K-12       53.6µs ± 0%    54.5µs ± 1%  +1.52%  (p=0.000 n=19+18)
Revcomp-12                   417ms ± 5%     391ms ± 1%  -6.42%  (p=0.000 n=16+19)
Template-12                 61.1ms ± 1%    64.2ms ± 0%  +5.07%  (p=0.000 n=19+20)
TimeParse-12                 302ns ± 1%     305ns ± 1%  +0.90%  (p=0.000 n=18+18)
TimeFormat-12                319ns ± 1%     315ns ± 1%  -1.25%  (p=0.000 n=18+18)
[Geo mean]                  54.0µs         54.3µs       +0.58%

name         old time/op  new time/op  delta
XGarbage-12  2.24ms ± 2%  2.28ms ± 1%  +1.68%  (p=0.000 n=18+17)
XHTTP-12     11.4µs ± 1%  11.6µs ± 2%  +1.63%  (p=0.000 n=18+18)
XJSON-12     11.6ms ± 0%  12.5ms ± 0%  +7.84%  (p=0.000 n=18+17)

Updates #17503.

Change-Id: I1899f8e35662971e24bf692b517dfbe2b533c00c
Reviewed-on: https://go-review.googlesource.com/31572
Reviewed-by: Keith Randall <khr@golang.org>
c39918a

@gopherbot gopherbot pushed a commit that referenced this issue Oct 28, 2016

@aclements aclements runtime: perform write barrier before pointer write
Currently, we perform write barriers after performing pointer writes.
At the moment, it simply doesn't matter what order this happens in, as
long as they appear atomic to GC. But both the hybrid barrier and ROC
are going to require a pre-write write barrier.

For the hybrid barrier, this is important because the barrier needs to
observe both the current value of the slot and the value that will be
written to it. (Alternatively, the caller could do the write and pass
in the old value, but it seems easier and more useful to just swap the
order of the barrier and the write.)

For ROC, this is necessary because, if the pointer write is going to
make the pointer reachable to some goroutine that it currently is not
visible to, the garbage collector must take some special action before
that pointer becomes more broadly visible.

This commit swaps pointer writes around so the write barrier occurs
before the pointer write.

The main subtlety here is bulk memory writes. Currently, these copy to
the destination first and then use the pointer bitmap of the
destination to find the copied pointers and invoke the write barrier.
This is necessary because the source may not have a pointer bitmap. To
handle these, we pass both the source and the destination to the bulk
memory barrier, which uses the pointer bitmap of the destination, but
reads the pointer values from the source.

Updates #17503.

Change-Id: I78ecc0c5c94ee81c29019c305b3d232069294a55
Reviewed-on: https://go-review.googlesource.com/31763
Reviewed-by: Rick Hudson <rlh@golang.org>
8f81dfe
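
A schematic sketch of that bulk scheme, with toy types standing in for the runtime's real machinery:

```go
package main

// Toy stand-ins for heap objects and the GC's shade operation (a schematic
// of the idea, not the runtime's bulkBarrierPreWrite, which walks real
// pointer bitmaps).
type obj struct{ marked bool }

func shade(o *obj) {
	if o != nil {
		o.marked = true
	}
}

// bulkPreWriteBarrier follows the scheme in the commit above: the
// destination's layout says which slots hold pointers (here, all of them),
// the old values come from dst, the new values are read from src, and only
// after the barrier does the bulk copy happen.
func bulkPreWriteBarrier(dst, src []*obj) {
	for i := range dst {
		shade(dst[i]) // old pointee: the deletion half of the barrier
		shade(src[i]) // new pointee: the insertion half of the barrier
	}
	copy(dst, src)
}

func main() {
	dst := make([]*obj, 4)
	src := []*obj{{}, {}, {}, {}}
	dst[0] = &obj{} // an old value that must be shaded before being overwritten
	bulkPreWriteBarrier(dst, src)
}
```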

@gopherbot gopherbot pushed a commit that referenced this issue Oct 28, 2016

@aclements aclements runtime: add deletion barriers on gobuf.ctxt
gobuf.ctxt is set to nil from many places in assembly code and these
assignments require write barriers with the hybrid barrier.

Conveniently, in most of these places ctxt should already be nil, in
which case we don't need the barrier. This commit changes these places
to assert that ctxt is already nil.

gogo is more complicated, since ctxt may not already be nil. For gogo,
we manually perform the write barrier if ctxt is not nil.

Updates #17503.

Change-Id: I9d75e27c75a1b7f8b715ad112fc5d45ffa856d30
Reviewed-on: https://go-review.googlesource.com/31764
Reviewed-by: Cherry Zhang <cherryyz@google.com>
70c107c

@gopherbot gopherbot pushed a commit that referenced this issue Oct 28, 2016

@aclements aclements runtime: remove unnecessary step from bulkBarrierPreWrite
Currently bulkBarrierPreWrite calls writebarrierptr_prewrite, but this
means that we check writeBarrier.needed twice and perform cgo checks
twice.

Change bulkBarrierPreWrite to call writebarrierptr_prewrite1 to skip
over these duplicate checks.

This may speed up bulkBarrierPreWrite slightly, but mostly this will
save us from running out of nosplit stack space on ppc64x in the near
future.

Updates #17503.

Change-Id: I1cea1a2207e884ab1a279c6a5e378dcdc048b63e
Reviewed-on: https://go-review.googlesource.com/31890
Reviewed-by: Rick Hudson <rlh@golang.org>
d825682

@gopherbot gopherbot pushed a commit that referenced this issue Oct 28, 2016

@aclements aclements runtime: avoid getfull() barrier most of the time
With the hybrid barrier, unless we're doing a STW GC or hit a very
rare race (~once per all.bash) that can start mark termination before
all of the work is drained, we don't need to drain the work queue at
all. Even draining an empty work queue is rather expensive since we
have to enter the getfull() barrier, so it's worth avoiding this.

Conveniently, it's quite easy to detect whether or not we actually
need the getfull() barrier: since the world is stopped when we enter
mark termination, everything must have flushed its work to the work
queue, so we can just check the queue. If the queue is empty and we
haven't queued up any jobs that may create more work (which should
always be the case with the hybrid barrier), we can simply have all GC
workers perform non-blocking drains.

Also conveniently, this solution is quite safe. If we do somehow screw
something up and there's work on the work queue, some worker will
still process it, it just may not happen in parallel.

This is not the "right" solution, but it's simple, expedient,
low-risk, and maintains compatibility with debug.gcrescanstacks. When
we remove the gcrescanstacks fallback in Go 1.9, we should also fix
the race that starts mark termination early, and then we can eliminate
work draining from mark termination.

Updates #17503.

Change-Id: I7b3cd5de6a248ab29d78c2b42aed8b7443641361
Reviewed-on: https://go-review.googlesource.com/32186
Reviewed-by: Rick Hudson <rlh@golang.org>
ee3d201

@gopherbot gopherbot pushed a commit that referenced this issue Oct 28, 2016

@aclements aclements runtime: implement unconditional hybrid barrier
This implements the unconditional version of the hybrid deletion write
barrier, which always shades both the old and new pointer. It's
unconditional for now because barriers on channel operations require
checking both the source and destination stacks and we don't have a
way to funnel this information into the write barrier at the moment.

As part of this change, we modify the typed memclr operations
introduced earlier to invoke the write barrier.

This has basically no overall effect on benchmark performance. This is
good, since it indicates that neither the extra shade nor the new bulk
clear barriers have much effect. It also has little effect on latency.
This is expected, since we haven't yet modified mark termination to
take advantage of the hybrid barrier.

Updates #17503.

Change-Id: Iebedf84af2f0e857bd5d3a2d525f760b5cf7224b
Reviewed-on: https://go-review.googlesource.com/31765
Reviewed-by: Rick Hudson <rlh@golang.org>
5380b22
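
For reference, the barrier from the design doc rendered as a toy Go model (illustrative only; the real barrier is generated by the compiler and operates on the runtime's mark state):

```go
package main

// Toy object and shade, standing in for heap objects and GC greying.
type obj struct{ marked bool }

func shade(o *obj) {
	if o != nil && !o.marked {
		o.marked = true // in the runtime: add the object to the mark work queue
	}
}

// writePointer models the hybrid deletion barrier for *slot = ptr. The
// conditional form in the design doc shades ptr only if the current stack
// is grey; the unconditional form implemented in this CL always shades both.
func writePointer(slot **obj, ptr *obj) {
	shade(*slot) // shade the old value (Yuasa-style deletion half)
	shade(ptr)   // unconditional second shade (Dijkstra-style insertion half)
	*slot = ptr
}

func main() {
	oldObj, newObj := &obj{}, &obj{}
	slot := oldObj
	writePointer(&slot, newObj)
	_, _ = oldObj.marked, newObj.marked // both true after the barrier
}
```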

@gopherbot gopherbot pushed a commit that referenced this issue Oct 28, 2016

@aclements aclements runtime: disable stack rescanning by default
With the hybrid barrier in place, we can now disable stack rescanning
by default. This commit adds a "gcrescanstacks" GODEBUG variable that
is off by default but can be set to re-enable STW stack rescanning.
The plan is to leave this off but available in Go 1.8 for debugging
and as a fallback.

With this change, worst-case mark termination time at GOMAXPROCS=12
*not* including time spent stopping the world (which is still
unbounded) is reliably under 100 µs, with a 95%ile around 50 µs in
every benchmark I tried (the go1 benchmarks, the x/benchmarks garbage
benchmark, and the gcbench activegs and rpc benchmarks). Including
time spent stopping the world usually adds about 20 µs to total STW
time at GOMAXPROCS=12, but I've seen it add around 150 µs in these
benchmarks when a goroutine takes time to reach a safe point (see
issue #10958) or when stopping the world races with goroutine
switches. At GOMAXPROCS=1, where this isn't an issue, worst case STW
is typically 30 µs.

The go-gcbench activegs benchmark is designed to stress large numbers
of dirty stacks. This commit reduces 95%ile STW time for 500k dirty
stacks by nearly three orders of magnitude, from 150ms to 195µs.

This has little effect on the throughput of the go1 benchmarks or the
x/benchmarks benchmarks.

name         old time/op  new time/op  delta
XGarbage-12  2.31ms ± 0%  2.32ms ± 1%  +0.28%  (p=0.001 n=17+16)
XJSON-12     12.4ms ± 0%  12.4ms ± 0%  +0.41%  (p=0.000 n=18+18)
XHTTP-12     11.8µs ± 0%  11.8µs ± 1%    ~     (p=0.492 n=20+18)

It reduces the tail latency of the x/benchmarks HTTP benchmark:

name      old p50-time  new p50-time  delta
XHTTP-12    489µs ± 0%    491µs ± 1%  +0.54%  (p=0.000 n=20+18)

name      old p95-time  new p95-time  delta
XHTTP-12    957µs ± 1%    960µs ± 1%  +0.28%  (p=0.002 n=20+17)

name      old p99-time  new p99-time  delta
XHTTP-12   1.76ms ± 1%   1.64ms ± 1%  -7.20%  (p=0.000 n=20+18)

Comparing to the beginning of the hybrid barrier implementation
("runtime: parallelize STW mcache flushing") shows that the hybrid
barrier trades a small performance impact for much better STW latency,
as expected. The magnitude of the performance impact is generally
small:

name                      old time/op    new time/op    delta
BinaryTree17-12              2.37s ± 1%     2.42s ± 1%  +2.04%  (p=0.000 n=19+18)
Fannkuch11-12                2.84s ± 0%     2.72s ± 0%  -4.00%  (p=0.000 n=19+19)
FmtFprintfEmpty-12          44.2ns ± 1%    45.2ns ± 1%  +2.20%  (p=0.000 n=17+19)
FmtFprintfString-12          130ns ± 1%     134ns ± 0%  +2.94%  (p=0.000 n=18+16)
FmtFprintfInt-12             114ns ± 1%     117ns ± 0%  +3.01%  (p=0.000 n=19+15)
FmtFprintfIntInt-12          176ns ± 1%     182ns ± 0%  +3.17%  (p=0.000 n=20+15)
FmtFprintfPrefixedInt-12     186ns ± 1%     187ns ± 1%  +1.04%  (p=0.000 n=20+19)
FmtFprintfFloat-12           251ns ± 1%     250ns ± 1%  -0.74%  (p=0.000 n=17+18)
FmtManyArgs-12               746ns ± 1%     761ns ± 0%  +2.08%  (p=0.000 n=19+20)
GobDecode-12                6.57ms ± 1%    6.65ms ± 1%  +1.11%  (p=0.000 n=19+20)
GobEncode-12                5.59ms ± 1%    5.65ms ± 0%  +1.08%  (p=0.000 n=17+17)
Gzip-12                      223ms ± 1%     223ms ± 1%  -0.31%  (p=0.006 n=20+20)
Gunzip-12                   38.0ms ± 0%    37.9ms ± 1%  -0.25%  (p=0.009 n=19+20)
HTTPClientServer-12         77.5µs ± 1%    78.9µs ± 2%  +1.89%  (p=0.000 n=20+20)
JSONEncode-12               14.7ms ± 1%    14.9ms ± 0%  +0.75%  (p=0.000 n=20+20)
JSONDecode-12               53.0ms ± 1%    55.9ms ± 1%  +5.54%  (p=0.000 n=19+19)
Mandelbrot200-12            3.81ms ± 0%    3.81ms ± 1%  +0.20%  (p=0.023 n=17+19)
GoParse-12                  3.17ms ± 1%    3.18ms ± 1%    ~     (p=0.057 n=20+19)
RegexpMatchEasy0_32-12      71.7ns ± 1%    70.4ns ± 1%  -1.77%  (p=0.000 n=19+20)
RegexpMatchEasy0_1K-12       946ns ± 0%     946ns ± 0%    ~     (p=0.405 n=18+18)
RegexpMatchEasy1_32-12      67.2ns ± 2%    67.3ns ± 2%    ~     (p=0.732 n=20+20)
RegexpMatchEasy1_1K-12       374ns ± 1%     378ns ± 1%  +1.14%  (p=0.000 n=18+19)
RegexpMatchMedium_32-12      107ns ± 1%     107ns ± 1%    ~     (p=0.259 n=18+20)
RegexpMatchMedium_1K-12     34.2µs ± 1%    34.5µs ± 1%  +1.03%  (p=0.000 n=18+18)
RegexpMatchHard_32-12       1.77µs ± 1%    1.79µs ± 1%  +0.73%  (p=0.000 n=19+18)
RegexpMatchHard_1K-12       53.6µs ± 1%    54.2µs ± 1%  +1.10%  (p=0.000 n=19+19)
Template-12                 61.5ms ± 1%    63.9ms ± 0%  +3.96%  (p=0.000 n=18+18)
TimeParse-12                 303ns ± 1%     300ns ± 1%  -1.08%  (p=0.000 n=19+20)
TimeFormat-12                318ns ± 1%     320ns ± 0%  +0.79%  (p=0.000 n=19+19)
Revcomp-12 (*)               509ms ± 3%     504ms ± 0%    ~     (p=0.967 n=7+12)
[Geo mean]                  54.3µs         54.8µs       +0.88%

(*) Revcomp is highly non-linear, so I only took samples with 2
iterations.

name         old time/op  new time/op  delta
XGarbage-12  2.25ms ± 0%  2.32ms ± 1%  +2.74%  (p=0.000 n=16+16)
XJSON-12     11.6ms ± 0%  12.4ms ± 0%  +6.81%  (p=0.000 n=18+18)
XHTTP-12     11.6µs ± 1%  11.8µs ± 1%  +1.62%  (p=0.000 n=17+18)

Updates #17503.

Updates #17099, since you can't have a rescan list bug if there's no
rescan list. I'm not marking it as fixed, since gcrescanstacks can
still be set to re-enable the rescan lists.

Change-Id: I6e926b4c2dbd4cd56721869d4f817bdbb330b851
Reviewed-on: https://go-review.googlesource.com/31766
Reviewed-by: Rick Hudson <rlh@golang.org>
bd640c8
Member

aclements commented Oct 28, 2016

The hybrid barrier is now live on master.

Detailed benchmark results are in the commit message for bd640c8. The high level summary is that this reduces worst-case STW time to about 100 µs and typical 95%ile STW time to 50 µs (assuming, of course, that the OS doesn't get in the way and that the system isn't otherwise overloaded). Performance impact is about 1% on average and goes up to about 5% for pointer-intensive workloads. However, we've more than paid for this with other throughput improvements in Go 1.8, so nearly all of the benchmarks still show a performance gain relative to Go 1.7.

Member

aclements commented Oct 28, 2016

@cherrymui, I think you're right that channel operations only need the second shade if the source stack is grey and the destination stack is black. For 1.8 we're always performing the second shade (channel operations or not), which is more conservative for correctness and makes it easier to have the GODEBUG setting to fall back to the current behavior (or a superset thereof). But I'll definitely consider your observation in depth for Go 1.9 when I look at making the second shade conditional.

CL https://golang.org/cl/36621 mentions this issue.

CL https://golang.org/cl/36619 mentions this issue.

CL https://golang.org/cl/36620 mentions this issue.

@gopherbot gopherbot pushed a commit that referenced this issue Feb 14, 2017

@aclements aclements runtime: remove rescan list
With the hybrid barrier, rescanning stacks is no longer necessary so
the rescan list is no longer necessary. Remove it.

This leaves the gcrescanstacks GODEBUG variable, since it's useful for
debugging, but changes it to simply walk all of the Gs to rescan
stacks rather than using the rescan list.

We could also remove g.gcscanvalid, which is effectively a distributed
rescan list. However, it's still useful for gcrescanstacks mode and it
adds little complexity, so we'll leave it in.

Fixes #17099.
Updates #17503.

Change-Id: I776d43f0729567335ef1bfd145b75c74de2cc7a9
Reviewed-on: https://go-review.googlesource.com/36619
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
c5ebcd2

@gopherbot gopherbot pushed a commit that referenced this issue Feb 14, 2017

@aclements aclements runtime: remove stack barriers
Now that we don't rescan stacks, stack barriers are unnecessary. This
removes all of the code and structures supporting them as well as
tests that were specifically for stack barriers.

Updates #17503.

Change-Id: Ia29221730e0f2bbe7beab4fa757f31a032d9690c
Reviewed-on: https://go-review.googlesource.com/36620
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
d089a6c

@gopherbot gopherbot pushed a commit that referenced this issue Feb 14, 2017

@aclements aclements runtime: remove g.stackAlloc
Since we're no longer stealing space for the stack barrier array from
the stack allocation, the stack allocation is simply
g.stack.hi-g.stack.lo.

Updates #17503.

Change-Id: Id9b450ae12c3df9ec59cfc4365481a0a16b7c601
Reviewed-on: https://go-review.googlesource.com/36621
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
0993b2f
Owner

bradfitz commented May 3, 2017

Austin, looks like this can be closed? I'm trying to close or remilestone all Go1.9Early bugs.

@bradfitz bradfitz modified the milestone: Go1.9, Go1.9Early May 3, 2017

@aclements aclements modified the milestone: Go1.10Early, Go1.9 Jun 7, 2017

@bradfitz bradfitz modified the milestone: Go1.10Early, Go1.10 Jun 14, 2017
