
runtime: potential deadlock cycle caused by scavenge.lock [1.13 backport] #34150

gopherbot opened this issue Sep 6, 2019 · 1 comment



commented Sep 6, 2019

@mknyszek requested issue #34047 to be considered for backport to the next 1.13 minor release.

This has the potential for reduced stability in Go 1.13. While the chance of deadlock is extremely low, when it hits you it will tend to hit you consistently, because stack depth is consistent (for example #32105).

CC @aclements

@gopherbot Please open a backport issue for 1.13.


commented Sep 9, 2019

Just in case.

It seems we were affected by this in production. We were not able to debug/trace it properly, but we found that most of the connections were blocked (each one is managed by one or several goroutines), the RSS grew until it reached the instances' RAM (32 GB), and the processes were killed by the OOM killer.

We went back to a version compiled with Go 1.12 and didn't see the issue again:

$ ps axl| grep smart-relayer
4 0 7584 1 20 0 3754992 947680 - Ssl ? 46:10 /usr/local/bin/smart-relayer -c /usr/local/etc/relayer.conf

As you can see, the runtime has started over a hundred threads:
$ ps -T -p 7584 | wc
103 515 4522
