runtime: deadlock involving gcControllerState.enlistWorker #19112
We have seen one instance of a production job suddenly spinning to 100% CPU and becoming unresponsive. In that one instance, a SIGQUIT was sent after 328 minutes of spinning, and the stacks showed a single goroutine in "IO wait (scan)" state.
Looking for things that might block if a goroutine got stuck while scanning a stack, we found that injectglist does:
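(A reconstruction of the relevant lines from the Go 1.8-era runtime/proc.go; exact details may differ between releases.)

```go
	lock(&sched.lock)
	var n int
	for n = 0; glist != nil; n++ {
		gp := glist
		glist = gp.schedlink.ptr()
		// casgstatus spins while gp's _Gscan bit is set --
		// and sched.lock is still held here.
		casgstatus(gp, _Gwaiting, _Grunnable)
		globrunqput(gp)
	}
	unlock(&sched.lock)
```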
and that casgstatus spins on gp.atomicstatus until the _Gscan bit goes away. Essentially, this code locks sched.lock and then, while holding sched.lock, waits to lock gp.atomicstatus.
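The _Gscan bit effectively turns the status word into a lock. A minimal sketch of why the compare-and-swap inside casgstatus cannot make progress while the bit is set (simplified; not the runtime's actual loop):

```go
// While the scanner holds the _Gscan bit, gp.atomicstatus is
// _Gwaiting|_Gscan rather than _Gwaiting, so this CAS keeps failing
// and the caller spins -- still holding sched.lock the whole time.
for !atomic.Cas(&gp.atomicstatus, _Gwaiting, _Grunnable) {
	osyield() // wait for the scan to finish and clear _Gscan
}
```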
The code that is doing the scan is:
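(Also reconstructed; this is the claim-and-scan step in runtime.scang in proc.go, and its exact shape varies between releases.)

```go
			// Claim the goroutine by setting the scan bit; this keeps
			// it from changing state while its stack is scanned.
			if castogscanstatus(gp, s, s|_Gscan) {
				if !gp.gcscandone {
					scanstack(gp, gcw)
					gp.gcscandone = true
				}
				// Clear the scan bit so spinning casgstatus callers
				// (such as injectglist above) can proceed.
				restartg(gp)
				break loop
			}
```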
More analysis showed that scanstack can, in a rare case, end up calling back into code that acquires sched.lock. For example:
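One such path, sketched as a call chain (exact call sites omitted and varying between releases; the key point is that it ends in a sched.lock acquisition):

```
runtime.scanstack
  -> runtime.gentraceback (frame callback)
  -> runtime.scanframeworker
  -> runtime.scanblock
  -> runtime.greyobject
  -> (*runtime.gcWork).put
  -> (*runtime.gcControllerState).enlistWorker
  -> runtime.wakep
  -> runtime.startm
  -> lock(&sched.lock)
```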
This path was found with an automated deadlock-detecting tool by @aclements. There are many such paths but they all go through enlistWorker -> wakep.
The evidence strongly suggests that one of these paths is what caused the deadlock we observed. We're running those jobs with GOTRACEBACK=crash now to try to get more information if it happens again.
Further refinement and analysis by @aclements and me show that if we drop the wakep call from enlistWorker, the remaining few deadlock cycles found by the tool are all false positives, caused by the tool not understanding the effect of calls through func variables.
For Go 1.8 we intend to drop the enlistWorker -> wakep call. It was intended only as a performance optimization; it rarely executes, and if it does execute at just the wrong time it can (and plausibly did) cause the deadlock we saw.
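Conceptually, dropping the call just disables the wake-an-idle-P fast path in enlistWorker. A sketch of the idea (not the exact patch that landed; the rest of enlistWorker is elided):

```go
func (c *gcControllerState) enlistWorker() {
	// Previously enlistWorker woke an idle P here so it could run a
	// GC worker:
	//
	//	if atomic.Load(&sched.npidle) != 0 && atomic.Load(&sched.nmspinning) == 0 {
	//		wakep()
	//		return
	//	}
	//
	// wakep -> startm acquires sched.lock, which can deadlock when the
	// caller is in the middle of a stack scan and the lock holder
	// (e.g. injectglist) is spinning in casgstatus waiting for that same
	// scan to finish. Dropping the call avoids the cycle; idle Ps still
	// pick up GC work through the scheduler's normal paths.
}
```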