runtime: deadlock while all goroutines are "IO wait" #64894
Comments
Here is the output (the beginning is truncated): I run it with ... I am trying to make it run with 1 group with traffic and another instance with 1 group without traffic, to check whether the number of goroutines matters to trigger the deadlock and whether we need to receive something or not.
I am unable to reproduce it with only one group (so, only with 2 goroutines), even if the group does not receive traffic. I am trying the reverse, subscribing to 2500 groups, to see if it's easier to trigger the issue. It often happens around 970-975 minutes, or a multiple of that (e.g. 4885). I don't see how this value is special, but maybe it is significant in the way Go accounts for time?
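For illustration only (this is not the reporter's monitor-rtp program; the group addresses, port, and group count below are assumptions), here is a minimal sketch of the kind of setup described above: many multicast subscriptions, each with its own reader goroutine that blocks in the netpoller waiting for packets.

```go
package main

import (
	"fmt"
	"log"
	"net"
	"time"
)

func main() {
	const groups = 2500 // assumption: number of multicast groups to join

	for i := 0; i < groups; i++ {
		// Placeholder multicast addresses, one per subscription.
		addr := &net.UDPAddr{
			IP:   net.IPv4(239, 0, byte(i/250), byte(i%250+1)),
			Port: 5004,
		}
		conn, err := net.ListenMulticastUDP("udp4", nil, addr)
		if err != nil {
			log.Fatalf("join %v: %v", addr, err)
		}
		go func(c *net.UDPConn) {
			buf := make([]byte, 1500)
			for {
				// Each reader blocks here; in the reported state these
				// goroutines are parked in the netpoller ("IO wait").
				if _, _, err := c.ReadFromUDP(buf); err != nil {
					log.Printf("read: %v", err)
					return
				}
			}
		}(conn)
	}

	for {
		fmt.Println("readers running")
		time.Sleep(time.Minute)
	}
}
```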
If it's possible to reproduce again with ...
Here it is.
Change https://go.dev/cl/555696 mentions this issue:
Go version
go version go1.21.5 linux/amd64
What operating system and processor architecture are you using (go env)?
What did you do?
What did you expect to see?
No deadlock.
What did you see instead?
Deadlock.
There are 202 goroutines in total. Except the first one, every other goroutine is in exactly the same state: stuck at /home/bernat/code/free/monitor-rtp/main.go:58 (via /usr/lib/go-1.21/src/runtime/netpoll.go:343 +0x85), with status "IO wait" or "IO wait, 973 minutes". The issue takes some time to appear (here 16 hours, but sometimes several days). I was unable to reproduce it quickly.
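Not part of the original report, but for inspecting this kind of state in a long-running process without killing it: a minimal sketch of exposing the standard net/http/pprof handlers, whose /debug/pprof/goroutine?debug=2 endpoint prints per-goroutine stacks including the "IO wait, N minutes" status. The listen address is an assumption.

```go
package main

import (
	"log"
	"net/http"
	_ "net/http/pprof" // registers /debug/pprof/ handlers on DefaultServeMux
)

func main() {
	// Full goroutine dump available at:
	//   http://localhost:6060/debug/pprof/goroutine?debug=2
	go func() {
		log.Println(http.ListenAndServe("localhost:6060", nil))
	}()

	select {} // stand-in for the real program's work
}
```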