runtime: non-cooperative goroutine preemption #24543

Open
aclements opened this Issue Mar 26, 2018 · 48 comments

@aclements
Member

aclements commented Mar 26, 2018

I propose that we solve #10958 (preemption of tight loops) using non-cooperative preemption techniques. I have a detailed design proposal, which I will post shortly. This issue will track this specific implementation approach, as opposed to the general problem.

Edit: Design doc

Go currently uses compiler-inserted cooperative preemption points in function prologues. The majority of the time this is good enough to let Go developers ignore preemption and focus on writing clear parallel code, but it has sharp edges that we've seen degrade the developer experience time and time again. When it goes wrong, it goes spectacularly wrong, leading to mysterious system-wide latency issues (#17831, #19241) and sometimes complete freezes (#543, #12553, #13546, #14561, #15442, #17174, #20793, #21053). And because this is a language implementation issue that exists outside of Go's language semantics, these failures are surprising and very difficult to debug.
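A reduced illustration (mine, not from the issue) of the kind of loop behind those freezes: the body makes no function calls, so the compiler inserts no cooperative preemption checks into it, and before non-cooperative preemption the scheduler could not interrupt it mid-loop.

```go
package main

import "fmt"

// sum's loop body contains no calls, so under prologue-only cooperative
// preemption there is no preemption point anywhere in the loop: a concurrent
// stop-the-world has to wait for the entire loop to finish.
func sum(n int) int {
	s := 0
	for i := 0; i < n; i++ {
		s += i
	}
	return s
}

func main() {
	fmt.Println(sum(10)) // 45
}
```

Run on one CPU alongside a goroutine that triggers GC, a call like sum(1<<31) is exactly the shape that produced the latency spikes and freezes cited above.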

@dr2chase has put significant effort into prototyping cooperative preemption points in loops, which is one way to solve this problem. However, even sophisticated approaches to this led to unacceptable slow-downs in tight loops (where slow-downs are generally least acceptable).
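As context for the slow-down, a hand-written analogue of a loop preemption point (illustrative only; the real prototype works on SSA and tests the goroutine's stack-guard word, not a Go-level flag) adds a check on every back-edge:

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// sumWithChecks sketches the shape of a compiler-inserted cooperative check
// on each loop back-edge. The per-iteration load is the overhead that is
// hardest to tolerate precisely in tight loops like this one.
func sumWithChecks(n int, preempt *uint32) int {
	s := 0
	for i := 0; i < n; i++ {
		s += i
		if atomic.LoadUint32(preempt) != 0 {
			// a real implementation would call into the scheduler here
		}
	}
	return s
}

func main() {
	var preempt uint32
	fmt.Println(sumWithChecks(10, &preempt)) // 45
}
```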

I propose that the Go implementation switch to non-cooperative preemption using stack and register maps at (essentially) every instruction. This would allow goroutines to be preempted without explicit preemption checks. This approach will solve the problem of delayed preemption with zero run-time overhead and have side benefits for debugger function calls (#21678).

I've already prototyped significant components of this solution, including constructing register maps and recording stack and register maps at every instruction, and so far the results are quite promising.

/cc @dr2chase @RLH @randall77 @minux

@aclements aclements added this to the Go1.12 milestone Mar 26, 2018

@aclements aclements self-assigned this Mar 26, 2018

@gopherbot gopherbot added the Proposal label Mar 26, 2018

@gopherbot

gopherbot commented Mar 26, 2018

Change https://golang.org/cl/102600 mentions this issue: design: add 24543-non-cooperative-preemption

@gopherbot

gopherbot commented Mar 26, 2018

Change https://golang.org/cl/102603 mentions this issue: cmd/compile: detect simple inductive facts in prove

@gopherbot

gopherbot commented Mar 26, 2018

Change https://golang.org/cl/102604 mentions this issue: cmd/compile: don't produce a past-the-end pointer in range loops

@aclements

Member

aclements commented Mar 27, 2018

Forwarding some questions from @hyangah on the CL:

Is code in cgo (or outside Go) considered a non-safe point?

All of cgo is currently considered a safe-point (one of the reasons it's relatively expensive to enter and exit cgo) and this won't change.

Or will the runtime be careful not to send signals to threads that may be in cgo land?

I don't think the runtime can avoid sending signals to threads that may be in cgo without expensive synchronization on common paths, but I don't think it matters. When it enters the runtime signal handler it can recognize that it was in cgo and do the appropriate thing (which will probably be to just ignore it, or maybe queue up an action like stack scanning).

Should users or cgo code avoid using the signal?

It should be okay if cgo code uses the signal, as long as it's correctly chained. I'm hoping to use POSIX real-time signals on systems where they're available, so the runtime will attempt to find one that's unused (which is usually all of them anyway), though that isn't an option on Darwin.

And a question from @randall77 (which I answered on the CL, but should have answered here):

Will we stop using the current preemption technique (the dummy large stack bound) altogether, or will the non-coop preemption just be a backstop?

There's really no cost to the current technique and we'll continue to rely on it in the runtime for the foreseeable future, so my current plan is to leave it in. However, we could be much more aggressive about removing stack bounds checks (for example if we can prove that a whole call tree will fit in the nosplit zone).

@TocarIP

Contributor

TocarIP commented Mar 27, 2018

So it is still possible to make a goroutine non-preemptible with something like:
sha256.Sum256(make([]byte, 1000000000))
where the inner loop is written in assembly?

@aclements

Member

aclements commented Mar 27, 2018

Yes, that would still make a goroutine non-preemptible. However, with some extra annotations in the assembly to indicate which registers contain pointers, it will become preemptible without any extra work or run-time overhead to reach an explicit safe-point. In the case of sha256.Sum256 these annotations would probably be trivial, since it will never construct a pointer that isn't shadowed by the arguments (so it can claim there are no pointers in registers).

I'll add a paragraph to the design doc about this.

@komuw

Contributor

komuw commented Mar 28, 2018

Will the design doc be posted here?

@aclements

Member

aclements commented Mar 28, 2018

gopherbot pushed a commit to golang/proposal that referenced this issue Mar 28, 2018

design: add 24543-non-cooperative-preemption
For golang/go#24543.

Change-Id: Iba313a963aafcd93521bb9e006cb32d1f242301b
Reviewed-on: https://go-review.googlesource.com/102600
Reviewed-by: Rick Hudson <rlh@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
@aclements

Member

aclements commented Mar 28, 2018

The doc is now submitted: Proposal: Non-cooperative goroutine preemption

@mtstickney

mtstickney commented Mar 30, 2018

Disclaimer: I'm not a platform expert, or an expert on language implementations, or involved with go aside from having written a few toy programs in it. That said:

There's a (potentially) fatal flaw here: GetThreadContext doesn't actually work on Windows (see here for details). Several Lisp implementations have exhibited crashes on that platform because they tried to use GetThreadContext/SetThreadContext to implement preemptive signals on Windows.

As some old notes for SBCL point out, Windows has no working version of preemptive signals without loading a kernel driver, which is generally prohibitive for applications.

@JamesBielby

JamesBielby commented Mar 31, 2018

I think the example code to avoid creating a past-the-end pointer has a problem if the slice has a capacity of 0. You need to declare _p after the first if statement.
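A sketch of the rewritten loop shape being discussed (illustrative Go, not the compiler's actual lowering), incorporating the fix: the element pointer is only created after the length check, so an empty slice never takes &s[0], and the pointer is never advanced past the last element.

```go
package main

import (
	"fmt"
	"unsafe"
)

// rangeSum mimics the rewritten range loop: _p is declared only after the
// guard (the point raised above: taking &s[0] unconditionally is wrong when
// the slice is empty), and the loop breaks before ever computing a
// past-the-end pointer on the final iteration.
func rangeSum(s []int) int {
	total := 0
	i := 0
	if len(s) > 0 { // guard before taking &s[0]
		_p := unsafe.Pointer(&s[0])
		for {
			total += *(*int)(_p)
			i++
			if i >= len(s) {
				break // stop before advancing _p past the last element
			}
			_p = unsafe.Add(_p, unsafe.Sizeof(s[0]))
		}
	}
	return total
}

func main() {
	fmt.Println(rangeSum([]int{1, 2, 3})) // 6
	fmt.Println(rangeSum(nil))            // 0: never touches &s[0]
}
```

Avoiding the past-the-end pointer matters here because with safe-points at every instruction, the GC may observe registers at any point, and a pointer one element beyond the slice would be mistaken for a pointer into the next object.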

@creker

creker commented Mar 31, 2018

@mtstickney looks like it's true, but we can look at how other implementations deal with the same problem. CoreCLR faces it too: they need to preempt threads for GC, they describe the same bugs with wrong thread contexts, and they explain how they work around them without ditching SuspendThread altogether, by using redirection.

I'm not an expert in this kind of stuff so I'm sorry if this has nothing to do with solving the problem here.

@mtstickney

mtstickney commented Mar 31, 2018

@creker Nor me, so we're in the same boat there. I hadn't seen the CoreCLR reference before, but that's the same idea as the Lisp approach: SuspendThread, retrieve the current register set with GetThreadContext, change the IP to point to the signal code to be run, ResumeThread, and then, when the handler is finished, restore the original registers with SetThreadContext.

The trick is capturing the original register set: you can either do it with an OS primitive (GetThreadContext, which is buggy) or roll your own code for it. If you do the latter, you risk getting a bogus set of registers, because your register-collecting code runs in user mode and could be preempted by a kernel APC.

It looks like on some Windows versions, some of the time, you can detect and avoid the race conditions with GetThreadContext (see this post, particularly the comments concerning CONTEXT_EXCEPTION_REQUEST). The CoreCLR code seems to make some attempts to work around the race condition, although I don't know if it's suitable here.

@aclements

Member

aclements commented Mar 31, 2018

Thanks for the pointers about GetThreadContext! That's really interesting and good to know, but I think it's actually not a problem.

For GC preemption, we can always resume the same goroutine on the same thread after preemption, so there's no need to call SetThreadContext to hijack the thread. We just need to observe its state, not run something else on that thread. Furthermore, my understanding is that GetThreadContext doesn't reliably return all registers if the thread is in a syscall, but in this case there won't be any live pointers in registers anyway (any pointer arguments to the syscall are shadowed on the Go wrapper's stack). Hence, we only need to retrieve the PC and SP in this case. Even this may not matter, since we currently treat a syscall as a giant GC safe-point, so we already save the information we need on the way in to the syscall.

For scheduler preemption, things are a bit more complicated, but I think still okay. In this case we would need to call SetThreadContext to hijack the thread, but we would only do this to threads at Go safe-points, meaning we'd never preempt something in a syscall. Today, if a goroutine has been in a syscall for too long, we don't hijack the thread, we simply flag that it should block upon returning from the syscall and schedule the next goroutine on a different thread (creating a new one or going to a pool). We would keep using that mechanism for rescheduling goroutines that are in system calls.

@gopherbot

gopherbot commented Apr 20, 2018

Change https://golang.org/cl/108497 mentions this issue: cmd/compile: teach Haspointer about TSSA and TTUPLE

@gopherbot

gopherbot commented Apr 20, 2018

Change https://golang.org/cl/108496 mentions this issue: cmd/compile: don't lower OpConvert

@gopherbot

gopherbot commented Apr 20, 2018

Change https://golang.org/cl/108498 mentions this issue: cmd/compile: don't compact liveness maps in place

gopherbot pushed a commit that referenced this issue Apr 20, 2018

cmd/compile: don't lower OpConvert
Currently, each architecture lowers OpConvert to an arch-specific
OpXXXconvert. This is silly because OpConvert means the same thing on
all architectures and is logically a no-op that exists only to keep
track of conversions to and from unsafe.Pointer. Furthermore, lowering
it makes it harder to recognize in other analyses, particularly
liveness analysis.

This CL eliminates the lowering of OpConvert, leaving it as the
generic op until code generation time.

The main complexity here is that we still need to register-allocate
OpConvert operations. Currently, each arch's lowered OpConvert
specifies all GP registers in its register mask. Ideally, OpConvert
wouldn't affect value homing at all, and we could just copy the home
of OpConvert's source, but this can potentially home an OpConvert in a
LocalSlot, which neither regalloc nor stackalloc expect. Rather than
try to disentangle this assumption from regalloc and stackalloc, we
continue to register-allocate OpConvert, but teach regalloc that
OpConvert can be allocated to any allocatable GP register.

For #24543.

Change-Id: I795a6aee5fd94d4444a7bafac3838a400c9f7bb6
Reviewed-on: https://go-review.googlesource.com/108496
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: David Chase <drchase@google.com>

gopherbot pushed a commit that referenced this issue Apr 20, 2018

cmd/compile: teach Haspointer about TSSA and TTUPLE
These will appear when tracking live pointers in registers, so we need
to know whether they have pointers.

For #24543.

Change-Id: I2edccee39ca989473db4b3e7875ff166808ac141
Reviewed-on: https://go-review.googlesource.com/108497
Run-TryBot: Austin Clements <austin@google.com>
Reviewed-by: David Chase <drchase@google.com>

gopherbot pushed a commit that referenced this issue Apr 23, 2018

cmd/compile: don't compact liveness maps in place
Currently Liveness.compact rewrites the Liveness.livevars slice in
place. However, we're about to add register maps, which we'll want to
track in livevars, but compact independently from the stack maps.
Hence, this CL modifies Liveness.compact to consume Liveness.livevars
and produce a new slice of deduplicated stack maps. This is somewhat
clearer anyway because it avoids potential confusion over how
Liveness.livevars is indexed.

Passes toolstash -cmp.

For #24543.

Change-Id: I7093fbc71143f8a29e677aa30c96e501f953ca2b
Reviewed-on: https://go-review.googlesource.com/108498
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: David Chase <drchase@google.com>
@gopherbot

gopherbot commented Apr 25, 2018

Change https://golang.org/cl/109351 mentions this issue: cmd/compile: dense numbering for GP registers

gopherbot pushed a commit that referenced this issue May 22, 2018

cmd/compile: incrementally compact liveness maps
The per-Value slice of liveness maps is currently one of the largest
sources of allocation in the compiler. On cmd/compile/internal/ssa,
it's 5% of overall allocation, or 75MB in total. Enabling liveness
maps everywhere significantly increased this allocation footprint,
which in turn slowed down the compiler.

Improve this by compacting the liveness maps after every block is
processed. There are typically very few distinct liveness maps, so
compacting the maps after every block, rather than at the end of the
function, can significantly reduce these allocations.

Passes toolstash -cmp.

name        old time/op       new time/op       delta
Template          198ms ± 2%        196ms ± 1%  -1.11%  (p=0.008 n=9+10)
Unicode           100ms ± 1%         99ms ± 1%  -0.94%  (p=0.015 n=8+9)
GoTypes           703ms ± 2%        695ms ± 1%  -1.15%  (p=0.000 n=10+10)
Compiler          3.38s ± 3%        3.33s ± 0%  -1.66%  (p=0.000 n=10+9)
SSA               7.96s ± 1%        7.93s ± 1%    ~ 	(p=0.113 n=9+10)
Flate             134ms ± 1%        132ms ± 1%  -1.30%  (p=0.000 n=8+10)
GoParser          165ms ± 2%        163ms ± 1%  -1.32%  (p=0.013 n=9+10)
Reflect           462ms ± 2%        459ms ± 0%  -0.65%  (p=0.036 n=9+8)
Tar               188ms ± 2%        186ms ± 1%    ~     (p=0.173 n=8+10)
XML               243ms ± 7%        239ms ± 1%    ~     (p=0.684 n=10+10)
[Geo mean]        421ms             416ms       -1.10%

name        old alloc/op      new alloc/op      delta
Template         38.0MB ± 0%       36.5MB ± 0%  -3.98%  (p=0.000 n=10+10)
Unicode          30.3MB ± 0%       29.6MB ± 0%  -2.21% 	(p=0.000 n=10+10)
GoTypes           125MB ± 0%        120MB ± 0%  -4.51% 	(p=0.000 n=10+9)
Compiler          575MB ± 0%        546MB ± 0%  -5.06% 	(p=0.000 n=10+10)
SSA              1.64GB ± 0%       1.55GB ± 0%  -4.97% 	(p=0.000 n=10+10)
Flate            25.9MB ± 0%       25.0MB ± 0%  -3.41% 	(p=0.000 n=10+10)
GoParser         30.7MB ± 0%       29.5MB ± 0%  -3.97% 	(p=0.000 n=10+10)
Reflect          84.1MB ± 0%       81.9MB ± 0%  -2.64% 	(p=0.000 n=10+10)
Tar              37.0MB ± 0%       35.8MB ± 0%  -3.27% 	(p=0.000 n=10+9)
XML              47.2MB ± 0%       45.0MB ± 0%  -4.57% 	(p=0.000 n=10+10)
[Geo mean]       83.2MB            79.9MB       -3.86%

name        old allocs/op     new allocs/op     delta
Template           337k ± 0%         337k ± 0%  -0.06%  (p=0.000 n=10+10)
Unicode            340k ± 0%         340k ± 0%  -0.01% 	(p=0.014 n=10+10)
GoTypes           1.18M ± 0%        1.18M ± 0%  -0.04% 	(p=0.000 n=10+10)
Compiler          4.97M ± 0%        4.97M ± 0%  -0.03% 	(p=0.000 n=10+10)
SSA               12.3M ± 0%        12.3M ± 0%  -0.01% 	(p=0.000 n=10+10)
Flate              226k ± 0%         225k ± 0%  -0.09% 	(p=0.000 n=10+10)
GoParser           283k ± 0%         283k ± 0%  -0.06% 	(p=0.000 n=10+9)
Reflect            972k ± 0%         971k ± 0%  -0.04% 	(p=0.000 n=10+8)
Tar                333k ± 0%         332k ± 0%  -0.05% 	(p=0.000 n=10+9)
XML                395k ± 0%         395k ± 0%  -0.04% 	(p=0.000 n=10+10)
[Geo mean]         764k              764k       -0.04%

Updates #24543.

Change-Id: I6fdc46e4ddb6a8eea95d38242345205eb8397f0b
Reviewed-on: https://go-review.googlesource.com/110177
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>

gopherbot pushed a commit that referenced this issue May 22, 2018

cmd/compile: make LivenessMap dense
Currently liveness information is kept in a map keyed by *ssa.Value.
This made sense when liveness information was sparse, but now we have
liveness for nearly every ssa.Value. There's a fair amount of memory
and CPU overhead to this map now.

This CL replaces this map with a slice indexed by value ID.

Passes toolstash -cmp.

name        old time/op       new time/op       delta
Template          197ms ± 1%        194ms ± 1%  -1.60%  (p=0.000 n=9+10)
Unicode           100ms ± 2%         99ms ± 1%  -1.31%  (p=0.012 n=8+10)
GoTypes           695ms ± 1%        689ms ± 0%  -0.94%  (p=0.000 n=10+10)
Compiler          3.34s ± 2%        3.29s ± 1%  -1.26%  (p=0.000 n=10+9)
SSA               8.08s ± 0%        8.02s ± 2%  -0.70%  (p=0.034 n=8+10)
Flate             133ms ± 1%        131ms ± 1%  -1.04%  (p=0.006 n=10+9)
GoParser          163ms ± 1%        162ms ± 1%  -0.79%  (p=0.034 n=8+10)
Reflect           459ms ± 1%        454ms ± 0%  -1.06%  (p=0.000 n=10+8)
Tar               186ms ± 1%        185ms ± 1%  -0.87%  (p=0.003 n=9+9)
XML               238ms ± 1%        235ms ± 1%  -1.01%  (p=0.004 n=8+9)
[Geo mean]        418ms             414ms       -1.06%

name        old alloc/op      new alloc/op      delta
Template         36.4MB ± 0%       35.6MB ± 0%  -2.29%  (p=0.000 n=9+10)
Unicode          29.7MB ± 0%       29.5MB ± 0%  -0.68%  (p=0.000 n=10+10)
GoTypes           119MB ± 0%        117MB ± 0%  -2.30%  (p=0.000 n=9+9)
Compiler          546MB ± 0%        532MB ± 0%  -2.47%  (p=0.000 n=10+10)
SSA              1.59GB ± 0%       1.55GB ± 0%  -2.41%  (p=0.000 n=10+10)
Flate            24.9MB ± 0%       24.5MB ± 0%  -1.77%  (p=0.000 n=8+10)
GoParser         29.5MB ± 0%       28.7MB ± 0%  -2.60%  (p=0.000 n=9+10)
Reflect          81.7MB ± 0%       80.5MB ± 0%  -1.49%  (p=0.000 n=10+10)
Tar              35.7MB ± 0%       35.1MB ± 0%  -1.64%  (p=0.000 n=10+10)
XML              45.0MB ± 0%       43.7MB ± 0%  -2.76%  (p=0.000 n=9+10)
[Geo mean]       80.1MB            78.4MB       -2.04%

name        old allocs/op     new allocs/op     delta
Template           336k ± 0%         335k ± 0%  -0.31%  (p=0.000 n=9+10)
Unicode            339k ± 0%         339k ± 0%  -0.05%  (p=0.000 n=10+10)
GoTypes           1.18M ± 0%        1.18M ± 0%  -0.26%  (p=0.000 n=10+10)
Compiler          4.96M ± 0%        4.94M ± 0%  -0.24%  (p=0.000 n=10+10)
SSA               12.6M ± 0%        12.5M ± 0%  -0.30%  (p=0.000 n=10+10)
Flate              224k ± 0%         223k ± 0%  -0.30%  (p=0.000 n=10+10)
GoParser           282k ± 0%         281k ± 0%  -0.32%  (p=0.000 n=10+10)
Reflect            965k ± 0%         963k ± 0%  -0.27%  (p=0.000 n=9+10)
Tar                331k ± 0%         330k ± 0%  -0.27%  (p=0.000 n=10+10)
XML                393k ± 0%         392k ± 0%  -0.26%  (p=0.000 n=10+10)
[Geo mean]         763k              761k       -0.26%

Updates #24543.

Change-Id: I4cfd2461510d3c026a262760bca225dc37482341
Reviewed-on: https://go-review.googlesource.com/110178
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>

gopherbot pushed a commit that referenced this issue May 22, 2018

cmd/compile: reuse liveness structures
Currently liveness analysis is a significant source of allocations in
the compiler. This CL mitigates this by moving the main sources of
allocation to the ssa.Cache, allowing them to be reused between
different liveness runs.

Passes toolstash -cmp.

name        old time/op       new time/op       delta
Template          194ms ± 1%        193ms ± 1%    ~     (p=0.156 n=10+9)
Unicode          99.1ms ± 1%       99.3ms ± 2%    ~     (p=0.853 n=10+10)
GoTypes           689ms ± 0%        687ms ± 0%  -0.27% 	(p=0.022 n=10+9)
Compiler          3.29s ± 1%        3.30s ± 1%    ~ 	(p=0.489 n=9+9)
SSA               8.02s ± 2%        7.97s ± 1%  -0.71%  (p=0.011 n=10+10)
Flate             131ms ± 1%        130ms ± 1%  -0.59%  (p=0.043 n=9+10)
GoParser          162ms ± 1%        160ms ± 1%  -1.53%  (p=0.000 n=10+10)
Reflect           454ms ± 0%        454ms ± 0%    ~    	(p=0.959 n=8+8)
Tar               185ms ± 1%        185ms ± 2%    ~ 	(p=0.905 n=9+10)
XML               235ms ± 1%        232ms ± 1%  -1.15% 	(p=0.001 n=9+10)
[Geo mean]        414ms             412ms       -0.39%

name        old alloc/op      new alloc/op      delta
Template         35.6MB ± 0%       34.2MB ± 0%  -3.75%  (p=0.000 n=10+10)
Unicode          29.5MB ± 0%       29.4MB ± 0%  -0.26%  (p=0.000 n=10+9)
GoTypes           117MB ± 0%        112MB ± 0%  -3.78%  (p=0.000 n=9+10)
Compiler          532MB ± 0%        512MB ± 0%  -3.80%  (p=0.000 n=10+10)
SSA              1.55GB ± 0%       1.48GB ± 0%  -4.82%  (p=0.000 n=10+10)
Flate            24.5MB ± 0%       23.6MB ± 0%  -3.61%  (p=0.000 n=10+9)
GoParser         28.7MB ± 0%       27.7MB ± 0%  -3.43%  (p=0.000 n=10+10)
Reflect          80.5MB ± 0%       78.1MB ± 0%  -2.96%  (p=0.000 n=10+10)
Tar              35.1MB ± 0%       33.9MB ± 0%  -3.49%  (p=0.000 n=10+10)
XML              43.7MB ± 0%       42.4MB ± 0%  -3.05%  (p=0.000 n=10+10)
[Geo mean]       78.4MB            75.8MB       -3.30%

name        old allocs/op     new allocs/op     delta
Template           335k ± 0%         335k ± 0%  -0.12%  (p=0.000 n=10+10)
Unicode            339k ± 0%         339k ± 0%  -0.01%  (p=0.001 n=10+10)
GoTypes           1.18M ± 0%        1.17M ± 0%  -0.12%  (p=0.000 n=10+10)
Compiler          4.94M ± 0%        4.94M ± 0%  -0.06%  (p=0.000 n=10+10)
SSA               12.5M ± 0%        12.5M ± 0%  -0.07%  (p=0.000 n=10+10)
Flate              223k ± 0%         223k ± 0%  -0.11%  (p=0.000 n=10+10)
GoParser           281k ± 0%         281k ± 0%  -0.08%  (p=0.000 n=10+10)
Reflect            963k ± 0%         960k ± 0%  -0.23%  (p=0.000 n=10+9)
Tar                330k ± 0%         330k ± 0%  -0.12%  (p=0.000 n=10+10)
XML                392k ± 0%         392k ± 0%  -0.08%  (p=0.000 n=10+10)
[Geo mean]         761k              760k       -0.10%

Compared to just before "cmd/internal/obj: consolidate emitting entry
stack map", the cumulative effect of adding stack maps everywhere and
register maps, plus these optimizations, is:

name        old time/op       new time/op       delta
Template          186ms ± 1%        194ms ± 1%  +4.41%  (p=0.000 n=9+10)
Unicode          96.5ms ± 1%       99.1ms ± 1%  +2.76%  (p=0.000 n=9+10)
GoTypes           659ms ± 1%        689ms ± 0%  +4.56%  (p=0.000 n=9+10)
Compiler          3.14s ± 2%        3.29s ± 1%  +4.95%  (p=0.000 n=9+9)
SSA               7.68s ± 3%        8.02s ± 2%  +4.41%  (p=0.000 n=10+10)
Flate             126ms ± 0%        131ms ± 1%  +4.14%  (p=0.000 n=10+9)
GoParser          153ms ± 1%        162ms ± 1%  +5.90%  (p=0.000 n=10+10)
Reflect           436ms ± 1%        454ms ± 0%  +4.14%  (p=0.000 n=10+8)
Tar               177ms ± 1%        185ms ± 1%  +4.28%  (p=0.000 n=8+9)
XML               224ms ± 1%        235ms ± 1%  +5.23%  (p=0.000 n=10+9)
[Geo mean]        396ms             414ms       +4.47%

name        old alloc/op      new alloc/op      delta
Template         34.5MB ± 0%       35.6MB ± 0%  +3.24%  (p=0.000 n=10+10)
Unicode          29.3MB ± 0%       29.5MB ± 0%  +0.51%  (p=0.000 n=9+10)
GoTypes           113MB ± 0%        117MB ± 0%  +3.31%  (p=0.000 n=8+9)
Compiler          509MB ± 0%        532MB ± 0%  +4.46%  (p=0.000 n=10+10)
SSA              1.49GB ± 0%       1.55GB ± 0%  +4.10%  (p=0.000 n=10+10)
Flate            23.8MB ± 0%       24.5MB ± 0%  +2.92%  (p=0.000 n=10+10)
GoParser         27.9MB ± 0%       28.7MB ± 0%  +2.88%  (p=0.000 n=10+10)
Reflect          77.4MB ± 0%       80.5MB ± 0%  +4.01%  (p=0.000 n=10+10)
Tar              34.1MB ± 0%       35.1MB ± 0%  +3.12%  (p=0.000 n=10+10)
XML              42.6MB ± 0%       43.7MB ± 0%  +2.65%  (p=0.000 n=10+10)
[Geo mean]       76.1MB            78.4MB       +3.11%

name        old allocs/op     new allocs/op     delta
Template           320k ± 0%         335k ± 0%  +4.60%  (p=0.000 n=10+10)
Unicode            336k ± 0%         339k ± 0%  +0.96%  (p=0.000 n=9+10)
GoTypes           1.12M ± 0%        1.18M ± 0%  +4.55%  (p=0.000 n=10+10)
Compiler          4.66M ± 0%        4.94M ± 0%  +6.18%  (p=0.000 n=10+10)
SSA               11.9M ± 0%        12.5M ± 0%  +5.37%  (p=0.000 n=10+10)
Flate              214k ± 0%         223k ± 0%  +4.15%  (p=0.000 n=9+10)
GoParser           270k ± 0%         281k ± 0%  +4.15%  (p=0.000 n=10+10)
Reflect            921k ± 0%         963k ± 0%  +4.49%  (p=0.000 n=10+10)
Tar                317k ± 0%         330k ± 0%  +4.25%  (p=0.000 n=10+10)
XML                375k ± 0%         392k ± 0%  +4.75%  (p=0.000 n=10+10)
[Geo mean]         729k              761k       +4.34%

Updates #24543.

Change-Id: Ia951fdb3c17ae1c156e1d05fc42e69caba33c91a
Reviewed-on: https://go-review.googlesource.com/110179
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: David Chase <drchase@google.com>
@wsc1

wsc1 commented Sep 30, 2018

Hi all, a question about this design. First, I've only read it from a high level, and haven't followed the use cases where the cooperative preemption is problematic.

But the big picture question I have is: what about code which depends on cooperative preemption?
I don't think we can know the extent of it.

But I can give an example where non-cooperative preemption might be problematic, on a project I am working on. The context is communication with an OS/C thread via atomics and not cgo, where timing is very sensitive: real time deadline misses will render things useless (audio i/o).

If some code currently controls preemption by spinning and calling runtime.Gosched(), then it seems to me this proposal will break that code, because it will introduce preemption and hence delay the very thing that was written not to be preempted.

Again, there is no way for us to know how much such code there is, and without an assessment of that, it seems to me this proposal risks entering a game of whack-a-mole w.r.t. go scheduling, where you solve one problem and as a result another pops up.

Please don't take away programmer control of pre-emption.

Last question: what good could runtime.Gosched() possibly serve with per-instruction pre-emption?

Again, sorry I don't know the details of this proposal, but that might be the case with a lot of other code that uses runtime.Gosched() under the assumption of cooperative pre-emption.

@networkimprov

networkimprov commented Oct 25, 2018

@wsc1 can you provide a code example which breaks without cooperative scheduling?

@wsc1

wsc1 commented Oct 25, 2018

@networkimprov please see #10958 for the status and rationale of the problem, and for the problems with testing for it via the cloud.

@crvv

Contributor

crvv commented Oct 26, 2018

@wsc1
No one said the test program should run via cloud.

@wsc1

wsc1 commented Oct 26, 2018

@wsc1
No one said the test program should run via cloud.

@crvv Testing audio latency violations is hard. Results aren't expected to agree except on the same hardware under the same operating conditions, or under OS simulation, and the scheduler's random seed can come into play. Re-creating those conditions to placate the interests of preemptive scheduling is not my job, although I'm happy to help along the way. No one said what hardware or OS operating conditions either.

There are also myriad reasons, documented there by audio processing professionals, why preemption in a real-time audio processing loop would cause problems. The Go runtime uses locks, and its memory management can lock the system. These are widely accepted causes of glitches in real-time audio, because they can take longer than the wall-clock time allotted to a low-latency application. It is also widely accepted that the glitches will eventually happen, meaning it is very hard to create a test in which they appear on demand, because it depends on the state of the whole system (OS, hardware, Go runtime).

I do not find it credible to quote out of context, against the grain of best practice in the field. You could also provide a reason why you want a test on the GitHub issue tracker. Do you believe that the worst-case wall-clock time of preemption work inserted into a critical segment of real-time audio processing code shouldn't cause problems? Why?

To me, the burden of proof lies there. On my end, I'll continue to provide what info and alternatives I can to help inform the discussion.

@ianlancetaylor ianlancetaylor changed the title proposal: runtime: non-cooperative goroutine preemption runtime: non-cooperative goroutine preemption Dec 21, 2018

@aclements aclements modified the milestones: Go1.12, Go1.13 Jan 8, 2019

@aclements

Member

aclements commented Jan 17, 2019

But I can give an example where non-cooperative preemption might be problematic, on a project I am working on. The context is communication with an OS/C thread via atomics and not cgo, where timing is very sensitive: real time deadline misses will render things useless (audio i/o).

Non-cooperative preemption as proposed shouldn't introduce any more overhead or jitter than you would already see from any general-purpose operating system handling hardware and timer interrupts. If the system is not overloaded, which is already a requirement for such work, then it will simply resume from the preemption after perhaps a few thousand instructions. And we could probably optimize the scheduler to not even attempt preemption if it would be a no-op. On the flip side, tight atomic loops like you describe currently run the very real risk of deadlocking the entire Go system, which would certainly violate any latency target.

Again, there is no way for us to know how much such code there is ...

There's quite a lot of evidence that cooperative preemption has caused a large number of issues for many users (see #543, #12553, #13546, #14561, #15442, #17174, #20793, #21053 for a sampling).

@ayanamist


ayanamist commented Jan 18, 2019

@aclements It's easy to write a micro-program that shows this infinite loop, as in all the issues you mentioned, but hard to find a real case in a larger project. I cannot see how a goroutine that runs without calling any non-inlinable functions serves any useful purpose.

@ianlancetaylor

Contributor

ianlancetaylor commented Jan 18, 2019

@ayanamist This bug has been reported multiple times based on real code in real projects, including cases mentioned in the issues cited above. It's true that the cut down versions look like unimportant toys, and it's true that the root cause is often a bug in the original code, but this is still surprising behavior in the language that has tripped up many people over the years.

@empijei

Contributor

empijei commented Jan 21, 2019

I'd just like to add some data to this discussion.

I gave a talk at several conferences [1,2,3,4,5] about this topic and collected some user experiences and feedback that can be summarized as:

  • Lack of preemption leads to very hard to debug behaviors (some latency spikes would only manifest if a GC happened in the right moment).
  • Some security issues (mostly DoS) were caused by this.
  • This is a surprising trait of the language, which is generally bad.

I believe this should be addressed, as long as some way to annotate code as not preemptible is provided (surprises should be opt-in, and not opt-out).

@wsc1


wsc1 commented Jan 21, 2019

I'd just like to add some data to this discussion.

I gave a talk at several conferences [1,2,3,4,5] about this topic and collected some user experiences and feedback that can be summarized as:

  • Lack of preemption leads to very hard to debug behaviors (some latency spikes would only manifest if a GC happened in the right moment).
  • Some security issues (mostly DoS) were caused by this.
  • This is a surprising trait of the language, which is generally bad.

Thanks very much for the well-balanced data.

I wanted to especially emphasise agreement with your last statement:

I believe this should be addressed, as long as some way to annotate code as not preemptible is provided (surprises should be opt-in, and not opt-out).

because not allowing non-preemptible code:

  1. reduces the expressivity and control available to the programmer.
  2. is a major headache in certain problem domains, such as low-latency audio or timed interactions with a device driver.
  3. makes many requirements imposed by OS-supplied callback interfaces (e.g. don't allocate memory, don't make syscalls) impossible to fulfill.

I am not arguing that the above list outweighs the problems caused by the surprise, just emphasising that a way to annotate code as non-preemptible should be a requirement.

@wsc1


wsc1 commented Jan 21, 2019

I would like to ask that the title of this issue be changed so that it states the problems to be solved rather than an opinion about the solution. I think that might help guide the discussion toward consensus, as there is ample evidence of a lack of consensus on preemptive vs. cooperative scheduling.

For example, "scheduling improvements" may be a less divisive choice.

@gopherbot


gopherbot commented Jan 21, 2019

Change https://golang.org/cl/158857 mentions this issue: design/24543-non-cooperative-preemption: more alternatives

@gopherbot


gopherbot commented Jan 21, 2019

Change https://golang.org/cl/158861 mentions this issue: design/24543-non-cooperative-preemption: conservative inner-frame scanning design

@gopherbot


gopherbot commented Jan 21, 2019

Change https://golang.org/cl/158858 mentions this issue: design/24543-non-cooperative-preemption: discuss signal selection

@gopherbot


gopherbot commented Jan 21, 2019

Change https://golang.org/cl/158859 mentions this issue: design/24543-non-cooperative-preemption: split out safe-points-everywhere

@gopherbot


gopherbot commented Jan 21, 2019

Change https://golang.org/cl/158860 mentions this issue: design/24543-non-cooperative-preemption: general preemption discussion
