runtime: new goroutines can spend excessive time in morestack #18138

Open
petermattis opened this issue Dec 1, 2016 · 57 comments
Labels: NeedsDecision (Feedback is required from experts, contributors, and/or the community before a change can be made.), Performance
Milestone

Comments

@petermattis

What version of Go are you using (go version)?

go version devel +41908a5 Thu Dec 1 02:54:21 2016 +0000 darwin/amd64 a.k.a go1.8beta1

What operating system and processor architecture are you using (go env)?

GOARCH="amd64"
GOBIN=""
GOEXE=""
GOHOSTARCH="amd64"
GOHOSTOS="darwin"
GOOS="darwin"
GOPATH="/Users/pmattis/Development/go"
GORACE=""
GOROOT="/Users/pmattis/Development/go-1.8"
GOTOOLDIR="/Users/pmattis/Development/go-1.8/pkg/tool/darwin_amd64"
GCCGO="gccgo"
CC="clang"
GOGCCFLAGS="-fPIC -m64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -fdebug-prefix-map=/var/folders/qc/fpqpgdqd167c70dtc6840xxh0000gn/T/go-build385423377=/tmp/go-build -gno-record-gcc-switches -fno-common"
CXX="clang++"
CGO_ENABLED="1"
PKG_CONFIG="pkg-config"
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"

What did you do?

A recent change to github.com/cockroachdb/cockroach replaced a synchronous call with one wrapped in a goroutine. This small change resulted in a significant slowdown in some benchmarks. The slowdown was traced to additional time being spent in runtime.morestack. The problematic goroutines all hit a single gRPC entrypoint, Server.Batch, and the code paths that fan out from this entrypoint tend to use an excessive amount of stack due to an over-reliance on passing and returning by value instead of using pointers. Typical calls use 16-32 KB of stack.

The expensive part of runtime.morestack is the adjustment of existing pointers on the stack. And due to the incremental nature of the stack growth, I can see the stack growing in 4 steps from 2 KB to 32 KB. So we experimented with a hack to pre-grow the stack. Voila, the performance penalty of the change disappeared:

name               old time/op  new time/op  delta
KVInsert1_SQL-8     339µs ± 2%   312µs ± 1%   -7.89%  (p=0.000 n=10+10)
KVInsert10_SQL-8    485µs ± 2%   471µs ± 1%   -2.81%  (p=0.000 n=10+10)
KVInsert100_SQL-8  1.36ms ± 0%  1.35ms ± 0%   -0.95%  (p=0.000 n=10+10)
KVUpdate1_SQL-8     535µs ± 1%   487µs ± 1%   -9.02%   (p=0.000 n=10+9)
KVUpdate10_SQL-8    777µs ± 1%   730µs ± 1%   -6.03%   (p=0.000 n=10+9)
KVUpdate100_SQL-8  2.69ms ± 1%  2.66ms ± 1%   -1.16%  (p=0.000 n=10+10)
KVDelete1_SQL-8     479µs ± 1%   429µs ± 2%  -10.43%   (p=0.000 n=9+10)
KVDelete10_SQL-8    676µs ± 1%   637µs ± 1%   -5.80%    (p=0.000 n=9+9)
KVDelete100_SQL-8  2.23ms ± 5%  2.18ms ± 4%     ~     (p=0.105 n=10+10)
KVScan1_SQL-8       216µs ± 5%   179µs ± 1%  -17.12%  (p=0.000 n=10+10)
KVScan10_SQL-8      233µs ± 1%   201µs ± 1%  -13.76%  (p=0.000 n=10+10)
KVScan100_SQL-8     463µs ± 1%   437µs ± 0%   -5.64%   (p=0.000 n=10+8)

Here old is benchmarks gathered using go1.8beta1 and new is go1.8beta1 with the hack to pre-grow the stack. The hack is a call at the beginning of server.Batch to a growStack function:

var growStackGlobal = false

//go:noinline
func growStack() {
	// Goroutine stacks currently start at 2 KB in size. The code paths through
	// the storage package often need a stack that is 32 KB in size. The stack
	// growth is mildly expensive making it useful to trick the runtime into
	// growing the stack early. Since goroutine stacks grow in multiples of 2 and
	// start at 2 KB in size, by placing a 16 KB object on the stack early in the
	// lifetime of a goroutine we force the runtime to use a 32 KB stack for the
	// goroutine.
	var buf [16 << 10] /* 16 KB */ byte
	if growStackGlobal {
		// Make sure the compiler doesn't optimize away buf.
		for i := range buf {
			buf[i] = byte(i)
		}
	}
}

The question here is whether this is copacetic; I also want to alert the runtime folks that there is a performance opportunity here. Note that growStackGlobal is not currently necessary, but I wanted to future-proof against the compiler deciding that buf is unnecessary.

Longer term, the stack usage under server.Batch should be reduced on our side. I'm guessing that we could get the stack usage down to 8-16 KB without too many contortions. But even with such reductions, being able to pre-grow the stack for a goroutine looks beneficial.
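For reference, a minimal, self-contained sketch of how a pre-grow call like this is wired in: the spawned goroutine calls growStack before doing its real work. Here handleBatch is a hypothetical stand-in for the work under Server.Batch, not the actual CockroachDB code:

package main

import "sync"

var growStackGlobal = false

//go:noinline
func growStack() {
	// As above: a 16 KB stack object forces the goroutine onto a 32 KB stack.
	var buf [16 << 10]byte
	if growStackGlobal {
		for i := range buf {
			buf[i] = byte(i)
		}
	}
}

// handleBatch is a hypothetical stand-in for the deep call tree under Server.Batch.
func handleBatch(id int) {}

func main() {
	var wg sync.WaitGroup
	for i := 0; i < 4; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			// Pre-grow the stack once, up front, instead of paying for
			// several incremental morestack copies inside handleBatch.
			growStack()
			handleBatch(i)
		}(i)
	}
	wg.Wait()
}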

@bradfitz bradfitz added this to the Go1.9 milestone Dec 1, 2016
@bradfitz
Contributor

bradfitz commented Dec 1, 2016

/cc @aclements @randall77

@aclements
Member

We've seen this a few times now. I'm not sure what the right answer is. My best thought so far is that the runtime could keep track of when particular go statements always lead to stack growth right away (for some value of "right away" and "always") and learn to start goroutines from that site with a larger stack. Of course, it would be hard to make this behavior predictable, but perhaps it would still be less surprising than the current behavior. If the runtime did learn to start a goroutine with a larger stack, it would still need a signal to learn if the stack should get smaller again, but we could do that efficiently by allocating the larger stack but setting the stack bounds to something smaller. Then the runtime could still observe whether or not the stack needs to grow, but the actual growth would be basically free until it reached the size of the allocation.

@randall77, thoughts, ideas?

/cc @RLH

@mrjrieke

mrjrieke commented Dec 1, 2016

I like @petermattis's idea of being able to hint the stack size on a per-goroutine basis, although this implies developers have the know-how to identify and provide size estimates accurately. Could this be done with a compiler directive?

@bradfitz
Contributor

bradfitz commented Dec 1, 2016

We don't want compiler directives in code. We have some used by the runtime out of necessity, but they're gross. Go prefers simplicity over tons of knobs.

@petermattis
Author

Yes, please just make my code magically faster as you've been doing for the last several Go releases.

@mrjrieke

mrjrieke commented Dec 1, 2016

I generally agree with not having compiler directives ... magic is nice, although they (compiler directives) do exist even in Go. It's an interesting opportunity either way you decide.

@mrjrieke

mrjrieke commented Dec 2, 2016

@bradfitz, your comment prompted me to look for the go guiding principles ( https://golang.org/doc/faq#principles). Thanks @adg as well for nicely worded principles.

@gopherbot

CL https://golang.org/cl/45142 mentions this issue.

@aclements
Member

@petermattis (or anyone who has a good reproducer for this), would you be able to try https://go-review.googlesource.com/45142? It's a trivial hack, but it might actually do the trick. I haven't benchmarked it on anything, so it may also slow things down.

@aclements aclements modified the milestones: Go1.10Early, Go1.9 Jun 8, 2017
@aclements aclements self-assigned this Jun 8, 2017
@petermattis
Author

@aclements I'll try and test either tomorrow or next week.

@petermattis
Author

@aclements Applying that patch to go1.8.3 resulted in no benefit (this is with the growStack hack disabled):

~/Development/go/src/github.com/cockroachdb/cockroach/pkg/sql master benchstat out.old out.new
name                old time/op  new time/op  delta
KV/Insert1_SQL-8     363µs ± 3%   369µs ± 2%  +1.43%  (p=0.043 n=10+10)
KV/Insert10_SQL-8    583µs ± 0%   581µs ± 1%    ~     (p=0.113 n=10+9)
KV/Insert100_SQL-8  2.05ms ± 0%  2.05ms ± 1%    ~     (p=0.912 n=10+10)
KV/Update1_SQL-8     578µs ± 1%   577µs ± 1%    ~     (p=0.968 n=9+10)
KV/Update10_SQL-8    913µs ± 1%   914µs ± 1%    ~     (p=0.931 n=9+9)
KV/Update100_SQL-8  3.80ms ± 1%  3.87ms ± 5%  +1.90%  (p=0.019 n=10+10)
KV/Delete1_SQL-8     517µs ± 2%   518µs ± 2%    ~     (p=0.912 n=10+10)
KV/Delete10_SQL-8    813µs ± 2%   809µs ± 1%    ~     (p=0.280 n=10+10)
KV/Delete100_SQL-8  3.22ms ± 2%  3.26ms ± 3%    ~     (p=0.052 n=10+10)
KV/Scan1_SQL-8       217µs ± 1%   216µs ± 0%    ~     (p=0.090 n=9+10)
KV/Scan10_SQL-8      238µs ± 0%   238µs ± 1%    ~     (p=0.122 n=10+8)
KV/Scan100_SQL-8     454µs ± 0%   455µs ± 1%    ~     (p=0.089 n=10+10)

Surprisingly (to me), this didn't have any effect. Compare this to the growStack hack mentioned earlier:

~/Development/go/src/github.com/cockroachdb/cockroach/pkg/sql master benchstat out.old out.grow-stack
name                old time/op  new time/op  delta
KV/Insert1_SQL-8     363µs ± 3%   331µs ± 2%   -8.82%  (p=0.000 n=10+10)
KV/Insert10_SQL-8    583µs ± 0%   561µs ± 1%   -3.80%  (p=0.000 n=10+10)
KV/Insert100_SQL-8  2.05ms ± 0%  2.03ms ± 0%   -0.88%  (p=0.000 n=10+8)
KV/Update1_SQL-8     578µs ± 1%   532µs ± 1%   -7.94%  (p=0.000 n=9+10)
KV/Update10_SQL-8    913µs ± 1%   872µs ± 1%   -4.47%  (p=0.000 n=9+9)
KV/Update100_SQL-8  3.80ms ± 1%  3.75ms ± 1%   -1.36%  (p=0.000 n=10+10)
KV/Delete1_SQL-8     517µs ± 2%   458µs ± 2%  -11.54%  (p=0.000 n=10+10)
KV/Delete10_SQL-8    813µs ± 2%   765µs ± 1%   -5.91%  (p=0.000 n=10+10)
KV/Delete100_SQL-8  3.22ms ± 2%  3.16ms ± 1%   -2.01%  (p=0.000 n=10+10)
KV/Scan1_SQL-8       217µs ± 1%   194µs ± 1%  -10.44%  (p=0.000 n=9+10)
KV/Scan10_SQL-8      238µs ± 0%   216µs ± 1%   -9.36%  (p=0.000 n=10+10)
KV/Scan100_SQL-8     454µs ± 0%   431µs ± 1%   -4.92%  (p=0.000 n=10+9)

@josharian
Contributor

CL 43150 might help a little here.

@aclements
Member

Sorry, I made a silly mistake in CL 45142. Would you mind trying the new version of that CL?

@petermattis
Author

With your updated patch against go-tip (f363817) there is an improvement:

~/Development/go/src/github.com/cockroachdb/cockroach/pkg/sql master benchstat out.old out.new
name              old time/op  new time/op  delta
KV/Scan1_SQL-8     243µs ± 1%   224µs ± 0%  -7.57%  (p=0.000 n=9+9)
KV/Scan10_SQL-8    263µs ± 0%   247µs ± 0%  -6.20%  (p=0.000 n=9+10)
KV/Scan100_SQL-8   463µs ± 0%   444µs ± 0%  -4.05%  (p=0.000 n=10+10)

But the improvement is still not as good as the growStack hack:

~/Development/go/src/github.com/cockroachdb/cockroach/pkg/sql master benchstat out.new out.grow-stack
name              old time/op  new time/op  delta
KV/Scan1_SQL-8     224µs ± 0%   219µs ± 0%  -2.24%  (p=0.000 n=9+9)
KV/Scan10_SQL-8    247µs ± 0%   240µs ± 1%  -2.59%  (p=0.000 n=10+10)
KV/Scan100_SQL-8   444µs ± 0%   439µs ± 0%  -1.06%  (p=0.000 n=10+9)

There is a little more performance if we increase the initial stack size to 32 KB:

~/Development/go/src/github.com/cockroachdb/cockroach/pkg/sql master benchstat out.old out.new2
name              old time/op  new time/op  delta
KV/Scan1_SQL-8     243µs ± 1%   209µs ± 1%  -13.76%  (p=0.000 n=9+9)
KV/Scan10_SQL-8    263µs ± 0%   232µs ± 2%  -11.61%  (p=0.000 n=9+10)
KV/Scan100_SQL-8   463µs ± 0%   445µs ± 4%   -3.86%  (p=0.000 n=10+9)

Interestingly, all of these timings are lower than with go1.8.3.

@petermattis
Author

Interestingly, all of these timings are lower than with go1.8.3.

Nothing to see here. This appears to be due to a change in our code between what I tested earlier today and now.

@bradfitz bradfitz added early-in-cycle A change that should be done early in the 3 month dev cycle. and removed early-in-cycle A change that should be done early in the 3 month dev cycle. labels Jun 14, 2017
@bradfitz bradfitz modified the milestones: Go1.10Early, Go1.10 Jun 14, 2017
@petermattis
Author

I did some more testing of this patch and the performance improvements carry over to production settings. morestack disappears from profiles. Note this is using a version of the patch which uses a 32KB initial stack size.

@petermattis
Author

It is early in the 1.10 cycle and I wanted to bring this issue forward again. See cockroachdb/cockroach#17242 for a graph showing the benefit of a larger initial stack size.

@petermattis
Author

Is there any update on this issue? A larger initial goroutine stack size provides a nice performance boost for our system.

@gopherbot

Change https://golang.org/cl/341990 mentions this issue: runtime: predict stack sizing

CAFxX added a commit to CAFxX/go that referenced this issue Aug 13, 2021
Goroutines can spend significant time in morestack/newstack if the dynamic
call tree is large, or if a specific call tree makes heavy use of the stack
to allocate significant amounts of space for temporary variables.

This CL adds a simple stack size predictor based on the pc of the go
statement that starts each goroutine. This approach is predicated on the
assumption that a specific go statement in the program will mostly result
in the resulting goroutine executing the same dynamic call tree (or, more
generally, dynamic call trees with similar stack sizing requirements).

The way it works is by embedding in each P a small prediction table with 32
slots. When a go statement is executed, the pc of the go statement is
hashed with a per-P seed to pick one of the slots.
Each slot is a single byte containing a simple running estimator.
The result of the estimation is _StackMin << n, with n in the range 0 to 15
(i.e. 2KB to 64MB), and it is used to allocate a stack of the appropriate
size.
In newstack, called from morestack when we need to increase the size of the
stack of the running G, we record the highest stack size (highwater) used
by each goroutine (this highwater is stored in the g struct, but thanks to
existing padding in the g struct this additional 1-byte field does not
cause the struct to increase in size).
When a goroutine exits, the estimator for the pc of the statement that
started that goroutine is updated using the highwater value recorded by the
exiting goroutine.

The current estimation scheme is not precise for multiple reasons.

First, multiple pcs could map to the same slot in the per-P table. This is
not a significant problem under the assumption that the conflicting pcs are
not executed with the same frequency: in this case the estimator will still
converge, albeit more slowly, to the correct value.
Furthermore, each P uses a different seed when hashing the pc, and the
seeds themselves are periodically reset (currently, as a result of GC -
although this is not the only available option).

Second, the highwater mechanism partially relies on stackguard0; the
current runtime sometimes mangles stackguard0 (e.g. when a goroutine needs
to be preempted) and this can cause the highwater of a goroutine to be
lower than its actual value: this also should not lead to problems, apart
from the estimator taking longer to converge to the true value.

The stack size prediction mechanism is currently disabled by default, and
can be enabled by setting GOEXPERIMENT=predictstacksize. The idea would be
to eventually enable it by default while keeping, for a finite period of time,
the ability to turn it off for debugging.

This CL is currently in a PoC state. It passes all tests locally, and shows
significant promise in the included benchmark, where enabling stack size
prediction leads to a doubling of the performance for medium-large stack
sizes.
It is still missing tests for the new feature pending discussion about the
proposed approach.

DO NOT SUBMIT

Fixes golang#18138

Change-Id: Id2f617e39bbd7ed969d35e1f231ab61c207fa572
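To make the scheme concrete, here is a rough user-space illustration of the per-P prediction table the CL describes. The hash, the update rule, and all names below are assumptions for illustration only; the actual CL implements this inside the runtime and allocates real stacks:

package main

import "fmt"

const stackMin = 2 << 10 // 2 KB, mirroring runtime._StackMin

// predictor is a simplified stand-in for the per-P prediction table:
// 32 one-byte slots, indexed by a hash of the go statement's PC.
// Each slot holds n, and the predicted stack size is stackMin << n.
type predictor struct {
	seed  uint64
	slots [32]uint8
}

func (p *predictor) slot(pc uintptr) *uint8 {
	h := (uint64(pc) ^ p.seed) * 0x9e3779b97f4a7c15 // multiplicative hash (assumed)
	return &p.slots[h>>59]                          // top 5 bits pick one of the 32 slots
}

// predict returns the stack size to allocate for a goroutine started at pc.
func (p *predictor) predict(pc uintptr) int {
	return stackMin << *p.slot(pc)
}

// record is called when a goroutine started at pc exits, passing the
// highwater stack usage it observed; it nudges the estimator up or down.
func (p *predictor) record(pc uintptr, highwater int) {
	s := p.slot(pc)
	var need uint8
	for stackMin<<need < highwater && need < 15 { // cap at stackMin<<15 = 64 MB
		need++
	}
	switch {
	case need > *s:
		*s = need // jump up immediately to avoid repeated morestack
	case need < *s:
		*s-- // decay slowly so one small goroutine doesn't shrink the estimate
	}
}

func main() {
	var p predictor
	pc := uintptr(0x4567a0)    // stand-in for a go statement's PC
	fmt.Println(p.predict(pc)) // 2048: nothing recorded yet, use the minimum
	p.record(pc, 30<<10)       // a goroutine from this site used about 30 KB
	fmt.Println(p.predict(pc)) // 32768: the next goroutine from this site starts with 32 KB
}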
CAFxX added a commit to CAFxX/go that referenced this issue Aug 15, 2021
CAFxX added a commit to CAFxX/go that referenced this issue Aug 15, 2021
@CAFxX
Contributor

CAFxX commented Aug 18, 2021

Forgot to mention it back here, but I have a CL up for early review that basically implements something similar to what @aclements suggested. It uses a fairly conservative approach, so in this early incarnation it may still leave a bit of performance on the table, but in the tests so far it seems to work well enough in practice to be useful. If anyone wants to run their own benchmarks and report back it would be great (you need to build from that CL and then set GOEXPERIMENT=predictstacksize). The upside is that it requires no knobs, annotations, or code changes.

@gopherbot

Change https://golang.org/cl/345889 mentions this issue: runtime: measure stack usage; start stacks larger if needed

@randall77
Contributor

I wrote up an idea I had about starting goroutines with a larger than minimum stack size, for this issue. The doc is here.
The immediate impetus for this doc was another attempt to fix that issue in CL 341990, but generally these ideas have been sloshing around my head for a while.
Comments welcome. I have a first stab at an implementation in CL 345889.

@go101

go101 commented Oct 2, 2021

Improve @uluyol's idea by setting the initial stack size of a new goroutine to any 2^n size:

func startRoutine() {
	// Use a dummy anonymous function to enlarge stack.
	func(x *interface{}) {
		type _ int // avoid being inlined
		if x != nil {
			*x = [128 << 20]byte{} // initial 256M stack
		}
	}(nil)
	
	// ... do work load
}

[update]: a demo: https://play.golang.org/p/r3t_OXxTvt7

@go101

go101 commented Oct 2, 2021

Rather than a compiler directive, would it be possible to add functions to the runtime package

It would be great to add a runtime/debug.SetCurrentGoroutineOption function:

type GoroutineOption int
func SetCurrentGoroutineOption(key GoroutineOption, value int) {...}

const (
	StackSizeOfTheNextSpawndGoroutine GoroutineOption = iota
	PriorityCasesInTheNextSelectBlock
	...
)
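A hypothetical usage of that proposed API, just to make the intent concrete (neither the function nor the constants exist in the runtime today, and handleRequest/req are placeholders):

// Hypothetical: ask the runtime to start the next spawned goroutine
// with a 32 KB stack instead of the default 2 KB.
debug.SetCurrentGoroutineOption(debug.StackSizeOfTheNextSpawndGoroutine, 32<<10)
go handleRequest(req)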

@pcostanza

It would be good if Go would support tail call elimination. Then at least in some cases, the stack wouldn't grow as much as it does now.

@ianlancetaylor
Contributor

A straightforward implementation of tail call elimination would lead to difficulties with runtime.Caller and runtime.Callers.

See also #22624.

@pcostanza

"Proper" tail recursion requires the full removal of the caller's stack frame when a function is called in tail position. If that is too extreme, maybe it's possible to only remove the part of the stack frame that is not needed for runtime.Caller and runtime.Callers, in the hope of reducing time spent in morestack?

@CAFxX
Contributor

CAFxX commented Apr 29, 2022

only remove the part of the stack frame that is not needed for runtime.Caller and runtime.Callers, in the hope of reducing time spent in morestack

See #36067

@go101

go101 commented May 14, 2022

It looks like a new implementation will be adopted in Go 1.19.
After reading the design doc for the change, it appears that a global average stack size is now used as the initial stack size for newly spawned goroutines. Will this be efficient if the variance of goroutine stack sizes is large?

@randall77
Contributor

Yes. Of course, it matters what exactly you mean by "efficient". The new behavior uses a bit more space for a bit less CPU.

Starting new goroutines with the average stack size will waste at most a factor of 2 in space (in addition to the at-most factor of 2 we already waste by growing stacks using power-of-two sizes). In exchange, we get less work on goroutine startup.

It is conceivable that the tradeoff is not advantageous for some programs. I suspect it will be a net win for most. I'm curious if anyone has an example where it is not the right thing to do.

@go101

go101 commented May 14, 2022

I'm a little worried that this change will bring more unpredictable/random factors to Go program performance.

@SaveTheRbtz

SaveTheRbtz commented Feb 2, 2023

Is this still a problem after Go 1.19?

The runtime will now allocate initial goroutine stacks based on the historic average stack usage of goroutines. This avoids some of the early stack growth and copying needed in the average case in exchange for at most 2x wasted space on below-average goroutines.
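A minimal micro-benchmark sketch (in a _test.go file) that one could use to check this on a given Go version; the recursion depth and frame size are arbitrary choices, and the thing to watch for is runtime.morestack in a CPU profile:

package stackgrowth

import (
	"sync"
	"testing"
)

// useStack forces roughly depth*1KB of stack usage per goroutine,
// imitating a deep call tree with large frames.
//go:noinline
func useStack(depth int) byte {
	var buf [1 << 10]byte
	buf[0] = byte(depth)
	if depth == 0 {
		return buf[0]
	}
	return useStack(depth-1) + buf[1]
}

func BenchmarkSpawnDeepStack(b *testing.B) {
	for i := 0; i < b.N; i++ {
		var wg sync.WaitGroup
		wg.Add(1)
		go func() {
			defer wg.Done()
			useStack(30) // ~30 KB of stack, grown from the initial stack on the fly
		}()
		wg.Wait()
	}
}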

@randall77
Contributor

I have not heard any complaints.

@marcogrecopriolo

FWIW, historic average stack usage makes absolutely no difference to Couchbase's N1QL engine.
We still have one class of goroutines (the execution layer operators) that would benefit from an initial stack size much larger than the historic average, and I still have to hack an initial stack allocation.
So I am basically fudging the code, adding extra cost, just to lower the cost of initial stack allocation.

For us, if a compiler directive is not acceptable, at least having some compiler analysis that determines the initial stack size for a goroutine (you have sample code that you can use for testing from myself and other contributors) would work much better than the status quo.

@marcogrecopriolo

FWIW, I don't believe average stack sizes are a great approach; stack sizes based on the initial PC would have been much better.

In a classical server example, where each request is serviced by one or more goroutines, that stack size is not a random occurrence, but is a function of the work done by the individual goroutine.

In our case, all execution operators use a lot of stack, because they do the bulk of the work; everything else does not.
What the chosen approach achieves is to consistently give more stack to the goroutines that don't need it and less stack to those that do, i.e. it fixes nothing, while at the same time wasting stack.

To put this in context, on my performance testing cluster we serve about 200K requests a second and spawn 5 execution operators per request, which end up having to grow the stack about 1M times a second.
Requests take about 0.7ms from admission to completion.

Also, to put this in context, Informix's Online Dynamic Server, vintage 1996, used exactly the same multithreaded model adopted by golang (we had to code the context switches by hand though), and exactly the same execution approach as Couchbase's N1QL.
We started with 32K stack sizes, and the world was happy.

I have tried goroutine pooling, but it is quite the kerfuffle, and it doesn't work as well as manually forcing an initial stack growth.
We need a better permanent solution, be it a directive (how can a single directive be a sin, when you are forcing me to use two directives and two bogus functions instead?), prediction based on PC, compiler analysis, or a runtime function.

@roudkerk
Contributor

roudkerk commented Feb 11, 2023 via email

@randall77 randall77 reopened this Feb 11, 2023
@randall77 randall77 added NeedsDecision Feedback is required from experts, contributors, and/or the community before a change can be made. and removed early-in-cycle A change that should be done early in the 3 month dev cycle. NeedsInvestigation Someone must examine and confirm this is a valid issue and not a duplicate of an existing one. labels Feb 11, 2023
@richardartoul

Just chiming in that I'm still working around this issue manually in my performance-sensitive applications, for the same reasons mentioned above: an average over the entire application is not reflective of the needs of my critical path.
