runtime: OS thread appears to be re-used despite never releasing runtime.LockOSThread() #28979
What did you do?
In Go 1.11.2, spawned a goroutine, called runtime.LockOSThread(), and then created a new mount namespace on that thread with unshare(CLONE_NEWNS) followed by some mount calls.
However, it appears that a separate goroutine started at the beginning of the process (one which never calls unshare or does anything with mounts) will, after some time, be executing in the mount namespace that was created by the independent goroutine mentioned above (which is not expected to be leaking OS threads). This seems to indicate that the OS thread is somehow leaked for re-use by other goroutines despite the goroutine locked to it never calling runtime.UnlockOSThread().
This behavior was originally observed in a larger piece of software, but I was able to reproduce it using the code below. The code:
The goroutine that checks the contents of
Code was compiled with just
What did you expect to see?
That the program would execute indefinitely, never crashing due to a goroutine unexpectedly executing in a mount namespace created by a different goroutine locked to its OS thread.
What did you see instead?
The mount namespace leaks to the separate goroutine, causing the program to crash. The time it takes for this to occur is variable, but between 1 and 10 seconds. One example:
Additionally, it seems that if I make a seemingly unrelated change, the issue is no longer reproducible. The change is simply to remove the channel shared between the two goroutines.
I obviously can't know whether removing the channel changes the behavior due to a timing difference, or whether there could be a bug related to a locked goroutine sharing a channel with an unlocked goroutine, but it seems worth mentioning.
If helpful, I got some strace output showing a pid, 22828, that did unshare+mount calls followed by a clone(2) to make a new thread (right before it calls
I think the problem is that
@ianlancetaylor, thanks for finding the problem!
I think the solution is to just keep
Anyway, I put up a change. I was able to reproduce the original issue and can confirm that with my change, it's fixed.