
runtime: GOTRACEBACK=crash doesn't print user stack for threads on system stack #19494

aclements opened this issue Mar 10, 2017 · 2 comments



@aclements aclements commented Mar 10, 2017

What version of Go are you using (go version)?

go version go1.8 linux/amd64

What did you do?

Run a program with GOTRACEBACK=crash while some of its threads are on the system stack. This was observed in the go1.8.txt traceback. It can be reproduced by running the following program with GOGC=1 GOTRACEBACK=crash and sending it SIGQUIT (it may take a few tries, since it is timing-dependent):

package main

import (
	"runtime"
	"time"
)

var slice = make([]*byte, 16<<20)
var ballast = mkTree(10)
var sink interface{}

func main() {
	const count = 4
	runtime.GOMAXPROCS(count + 1)

	// Get the garbage collector going so typedslicecopy switches
	// to the system stack. We have to do this before starting
	// loop since GC won't be able to start once we're in loop.
	go func() {
		for {
			sink = make([]byte, 16<<20)
		}
	}()
	time.Sleep(10 * time.Millisecond)

	for i := 0; i < count; i++ {
		go loop(i)
	}
	select {}
}

func loop(i int) {
	for {
		for j := 0; j < 0x7fffffff; j++ {
			// This runs on the system stack when GC is active.
			copy(slice, slice[1:])
		}
	}
}

type node struct {
	l, r *node
}

func mkTree(depth int) *node {
	if depth <= 0 {
		return nil
	}
	return &node{mkTree(depth - 1), mkTree(depth - 1)}
}
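The reproduction steps above, written out as a shell session (this assumes the program is saved as repro.go; GOGC=1 forces very frequent GC cycles, and GOTRACEBACK=crash makes SIGQUIT dump every thread):

```shell
go build -o repro repro.go

# Run with an aggressive GC so the copies happen on the system stack,
# then send SIGQUIT once the loop goroutines are going.
GOGC=1 GOTRACEBACK=crash ./repro &
pid=$!
sleep 2
kill -QUIT "$pid"   # triggers the traceback (and, with =crash, a core dump)
wait "$pid"
```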

What did you expect to see?

A traceback with the full stacks of all goroutines in loop.

What did you see instead?

If a loop goroutine happens to be on the system stack at the moment of the traceback, we only see the system-stack portion:

PC=0x44a0d8 m=4 sigcode=0

goroutine 0 [idle]:
runtime.memmove(0xc421459f30, 0xc421459f38, 0x8)
	/home/austin/.cache/gover/1.8/src/runtime/memmove_amd64.s:168 +0x6a8 fp=0xc420081f20 sp=0xc420081f18
runtime.typedmemmove(0x459a00, 0xc421459f30, 0xc421459f38)
	/home/austin/.cache/gover/1.8/src/runtime/mbarrier.go:246 +0x3c fp=0xc420081f58 sp=0xc420081f20
	/home/austin/.cache/gover/1.8/src/runtime/mbarrier.go:362 +0x220 fp=0xc420081fb8 sp=0xc420081f58
	/home/austin/.cache/gover/1.8/src/runtime/asm_amd64.s:327 +0x79 fp=0xc420081fc0 sp=0xc420081fb8
	/home/austin/.cache/gover/1.8/src/runtime/proc.go:1132 fp=0xc420081fc8 sp=0xc420081fc0
goroutine 36 [running]:
rax    0x459a00
rbx    0x8
rcx    0xc421459f30
rdx    0xc421459f38
rdi    0xc421459f30
rsi    0xc421459f38
rbp    0xc420081f48
rsp    0xc420081f18
r8     0x3
r9     0x0
r10    0xc421459f30
r11    0x474762
r12    0x0
r13    0x34
r14    0x0
r15    0xf3
rip    0x44a0d8
rflags 0x246
cs     0x33
fs     0x0
gs     0x0
@aclements aclements added this to the Go1.8.1 milestone Mar 10, 2017
@aclements aclements self-assigned this Mar 10, 2017

@gopherbot gopherbot commented Mar 10, 2017

CL mentions this issue.


@gopherbot gopherbot commented Apr 5, 2017

CL mentions this issue.

gopherbot pushed a commit that referenced this issue Apr 5, 2017
…ing GOTRACBEACK=crash

Currently, when printing tracebacks of other threads during
GOTRACEBACK=crash, if the thread is on the system stack we print only
the header for the user goroutine and fail to print its stack. This
happens because we passed the g0 to traceback instead of curg. The g0
never has anything set in its gobuf, so traceback doesn't print anything.

Fix this by passing _g_.m.curg to traceback instead of the g0.

Fixes #19494.
Fixes #19637 (backport).

Change-Id: Idfabf94d6a725e9cdf94a3923dead6455ef3b217
Run-TryBot: Austin Clements <>
Reviewed-by: Russ Cox <>
TryBot-Result: Gobot Gobot <>
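A rough sketch of the fix, based only on the commit message above (the real change is in the runtime's signal handler; the names gp and _g_ follow runtime conventions and are illustrative, not the actual source):

```go
// Before (sketch): the signaled thread's g0 was handed to traceback.
// A g0's sched gobuf is never populated, so nothing was printed for
// the user goroutine after its header.
traceback(^uintptr(0), ^uintptr(0), 0, gp) // gp is the g0 here

// After (sketch): hand traceback the user goroutine bound to this M
// instead, so its full stack is printed.
traceback(^uintptr(0), ^uintptr(0), 0, _g_.m.curg)
```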
@golang golang locked and limited conversation to collaborators Apr 5, 2018