
runtime: "fatal error: all goroutines are asleep - deadlock!" with GC assist wait #39390

prattmic opened this issue Jun 3, 2020 · 2 comments


@prattmic commented Jun 3, 2020


```
fatal error: all goroutines are asleep - deadlock!

goroutine 1 [semacquire]:
        /tmp/workdir/go/src/runtime/sema.go:56 +0x30
        /tmp/workdir/go/src/sync/waitgroup.go:130 +0x76
        /tmp/workdir/go/src/cmd/compile/internal/gc/pgen.go:392 +0x190
        /tmp/workdir/go/src/cmd/compile/internal/gc/main.go:757 +0x30d6
        /tmp/workdir/go/src/cmd/compile/main.go:52 +0x8d

goroutine 22 [GC assist wait]:
        /tmp/workdir/go/src/cmd/compile/internal/ssa/cse.go:52 +0x301
        /tmp/workdir/go/src/cmd/compile/internal/ssa/compile.go:93 +0x873
cmd/compile/internal/gc.buildssa(0x3933c5b0, 0x3, 0x0)
        /tmp/workdir/go/src/cmd/compile/internal/gc/ssa.go:460 +0xa5d
cmd/compile/internal/gc.compileSSA(0x3933c5b0, 0x3)
        /tmp/workdir/go/src/cmd/compile/internal/gc/pgen.go:317 +0x4c
cmd/compile/internal/gc.compileFunctions.func2(0x3943a7c0, 0x39123430, 0x3)
        /tmp/workdir/go/src/cmd/compile/internal/gc/pgen.go:382 +0x35
created by cmd/compile/internal/gc.compileFunctions
        /tmp/workdir/go/src/cmd/compile/internal/gc/pgen.go:380 +0xf7
```

This is a failure on the freebsd-386-11_2 builder. I don't see any recent similar failures.

cc @mknyszek @aclements


@prattmic (Member, Author) commented Jun 4, 2020

@aclements pointed out that it is perhaps possible for the GC worker goroutines not to be actively running yet (because the scheduler hasn't quite noticed them) while some goroutine is in GC assist wait. But the mcount() check in checkdead seems like it should count an M that hasn't yet picked up GC work as still running.
