Description
Go 1.19 introduced adaptive stack sizing, where new goroutine stacks are created based on the historical average stack size. Even though this has improved the landscape for our applications, we still experience issues with some of them.
To give one real example, let's take a 15k-core application. Originally, profiles showed close to 10% of CPU being consumed by stack growth; translated to cores, that's ~1.5k cores.
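For context on where that CPU goes: stack growth shows up in profiles as `runtime.morestack`/`runtime.copystack`. A minimal sketch (our own illustration, not the production code) of what triggers it: a goroutine whose call depth exceeds its current stack forces the runtime to allocate a larger stack and copy the old one over.

```go
package main

import "fmt"

// grow recurses n levels deep; each frame carries a buffer, so deep calls
// quickly exceed a small initial stack and force the runtime to grow
// (copy) the goroutine stack via morestack/copystack.
func grow(n int) int {
	var buf [256]byte
	if n == 0 {
		return int(buf[0])
	}
	return grow(n-1) + int(buf[0])
}

func main() {
	done := make(chan int)
	// A fresh goroutine starts on a small stack, so this recursion
	// (~256KB of frames) triggers several stack copies.
	go func() { done <- grow(1000) }()
	fmt.Println("result:", <-done)
}
```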

The stack usage is very low (we never exceed 16MB):

After extending our metrics to publish the runtime/metrics value that exposes the initial stack size, we could observe that 99% of the time the stack size is set to 2KB.
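The initial stack size mentioned above can be read from the `runtime/metrics` package via the `/gc/stack/starting-size:bytes` metric; a minimal sketch of how we sample it:

```go
package main

import (
	"fmt"
	"runtime/metrics"
)

// readStartingStackSize returns the runtime's current initial goroutine
// stack size in bytes, or 0 if the metric is unavailable on this Go version.
func readStartingStackSize() uint64 {
	s := []metrics.Sample{{Name: "/gc/stack/starting-size:bytes"}}
	metrics.Read(s)
	if s[0].Value.Kind() != metrics.KindUint64 {
		return 0
	}
	return s[0].Value.Uint64()
}

func main() {
	fmt.Printf("initial goroutine stack size: %d bytes\n", readStartingStackSize())
}
```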
Using go:linkname we were able to expose the initial stack size global variable and inject our own value. We then enabled "gcshrinkstackoff" and disabled "adaptivestackstart", so that stacks are not shrunk and the runtime does not override our value.
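Both knobs are standard GODEBUG settings, so no code change is needed for that part (the binary name here is just a placeholder):

```shell
GODEBUG=gcshrinkstackoff=1,adaptivestackstart=0 ./myapp
```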



Finally, we ran several experiments injecting different stack sizes:
We decided to stop at 16KB because the gains were approaching zero, but as you can see we were able to reduce copystack by 7%.
Memory went up, but it is still reasonable: from 16MB to 50MB.
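The memory side of the tradeoff can be tracked without private linking, since total goroutine stack memory is already exposed through `runtime/metrics`; a sketch of how we monitor it:

```go
package main

import (
	"fmt"
	"runtime/metrics"
)

// stackMemoryBytes returns the memory currently used for goroutine stacks,
// as reported by runtime/metrics, or 0 if the metric is unavailable.
func stackMemoryBytes() uint64 {
	s := []metrics.Sample{{Name: "/memory/classes/heap/stacks:bytes"}}
	metrics.Read(s)
	if s[0].Value.Kind() != metrics.KindUint64 {
		return 0
	}
	return s[0].Value.Uint64()
}

func main() {
	fmt.Printf("goroutine stack memory: %d bytes\n", stackMemoryBytes())
}
```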
With all that said, would it be possible for the Go runtime to expose:
- A safe way to inject our own stack size (even if it is static).
- A histogram of all the stack sizes it has seen; this would help us decide which size gives the best results without using too much memory.
Or do you have any ideas on how to implement this more safely (without requiring private linking)?
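For reference, the stack-related metrics the runtime currently exposes can be enumerated as below; as far as we can tell, no histogram of observed stack sizes is among them today:

```go
package main

import (
	"fmt"
	"runtime/metrics"
	"strings"
)

// stackMetricNames lists every runtime/metrics name mentioning "stack".
func stackMetricNames() []string {
	var names []string
	for _, d := range metrics.All() {
		if strings.Contains(d.Name, "stack") {
			names = append(names, d.Name)
		}
	}
	return names
}

func main() {
	for _, n := range stackMetricNames() {
		fmt.Println(n)
	}
}
```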