writes to half-closed streams stall when sendWindow is exhausted #133
Thanks for filing an issue! This is a tricky one! I believe the existing code is correct, but I absolutely concede this is difficult-to-understand behavior. I think the key lies in section 2.3.6, Stream half-close, of the SPDY spec.

In other words, to put it in protocol terms:
So in your example, when the Server closes its stream, that does not imply the Client should consider the Stream closed. The Server closing its stream only implies that the Server will no longer send data.

This begs the question: should yamux differentiate? Yamux only implements the RST flag on window update frames, but perhaps we should treat that like SPDY's? As far as I can tell, the only place yamux RSTs individual Streams is when a timeout is hit.

Workaround: Read Closers

There is a workaround you should use while we consider the above:

    go func() {
        // Block in Read until the server half-closes its side; the resulting
        // error (io.EOF once the peer's close arrives) tells us it is done.
        _, err := cStream.Read([]byte{})
        if err != nil {
            log.Printf("Client.Stream.Read failed: %v", err)
            cStream.Close()
        }
    }()

I added the above to a fork of your original gist, and it fixes the deadlock-until-timeout. Since yamux only exposes a way for streams to indicate they will no longer send data, you can start a goroutine whose express purpose is to read from the server and detect that state. You can see Nomad implementing this pattern in places like the log streaming RPC.
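To make the effect of the workaround concrete, here is a minimal sketch (not from the original gist; cStream and msg are placeholders) of a client write loop running alongside the read-closer goroutine above:

    // Sketch only: once the read-closer goroutine sees the server's close and
    // calls cStream.Close(), a Write blocked on the exhausted send window is
    // woken and returns an error (yamux's ErrStreamClosed) instead of
    // stalling until StreamCloseTimeout.
    for i := 0; i < 10; i++ {
        if _, err := cStream.Write(msg); err != nil {
            log.Printf("client write %d stopped: %v", i, err)
            break
        }
    }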
I'm working on an application that uses yamux for proxying. I ran into a case where the application leaks goroutines. Here's a simplified test case: https://gist.github.com/slingamn/1dafab6141e03d27a3f51bcbbcdb9972
This test case sets up a client and a server. The client sends 10 messages to the server over a *Stream. The server reads 5 messages, then closes its side of the stream. After this happens, the client side continues writing messages until sendWindow is exhausted. Then it gets stuck here: yamux/stream.go, lines 227 to 232 at commit 8bd691f.
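The gist itself is not reproduced here, but a rough, self-contained reconstruction of the scenario it describes might look like the following (message sizes chosen so the 256 KiB default send window drains after the server stops reading, so a later Write stalls as described):

    package main

    import (
        "io"
        "log"
        "net"
        "time"

        "github.com/hashicorp/yamux"
    )

    func main() {
        // Connect a client and a server session over an in-memory pipe.
        clientConn, serverConn := net.Pipe()

        serverSession, err := yamux.Server(serverConn, nil)
        if err != nil {
            log.Fatal(err)
        }
        clientSession, err := yamux.Client(clientConn, nil)
        if err != nil {
            log.Fatal(err)
        }

        // Server: accept one stream, read 5 messages, then close its side.
        go func() {
            sStream, err := serverSession.AcceptStream()
            if err != nil {
                log.Fatal(err)
            }
            buf := make([]byte, 64*1024)
            for i := 0; i < 5; i++ {
                if _, err := io.ReadFull(sStream, buf); err != nil {
                    log.Fatalf("server read %d: %v", i, err)
                }
            }
            log.Println("server: closing its side of the stream")
            sStream.Close()
        }()

        // Client: send 10 messages of 64 KiB. Once the server stops reading,
        // the send window is no longer replenished, so a later Write blocks
        // (here, until StreamCloseTimeout) instead of returning.
        cStream, err := clientSession.OpenStream()
        if err != nil {
            log.Fatal(err)
        }
        msg := make([]byte, 64*1024)
        for i := 0; i < 10; i++ {
            start := time.Now()
            _, err := cStream.Write(msg)
            log.Printf("client write %d: err=%v after %s", i, err, time.Since(start))
            if err != nil {
                return
            }
        }
    }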
Since the other side has stopped reading, it will not send control messages that could unblock sendNotifyCh. So unless a write deadline has been set, Write() blocks until StreamCloseTimeout (5 minutes by default).
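For completeness, a small sketch of that write-deadline escape hatch (assuming the same cStream and msg placeholders as above; this only bounds the stall, it does not fix it):

    // Sketch only: a write deadline makes the blocked Write return a timeout
    // error (yamux's ErrTimeout) instead of stalling for the full
    // StreamCloseTimeout.
    if err := cStream.SetWriteDeadline(time.Now().Add(10 * time.Second)); err != nil {
        log.Printf("setting write deadline failed: %v", err)
    }
    if _, err := cStream.Write(msg); err != nil {
        log.Printf("client write failed: %v", err)
    }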
I naively tried to fix this by adding streamRemoteClose to the existing list of states (streamLocalClose, streamClosed) that cause writes to preemptively fail: yamux/stream.go, lines 180 to 184 at commit 8bd691f.
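Paraphrasing, the attempted change amounts to something like the following inside the write path (a sketch of the idea only, not the actual contents of the cited lines):

    // Sketch of the attempted change, not the real code at stream.go lines
    // 180 to 184: also fail writes fast when the peer has half-closed.
    switch s.state {
    case streamLocalClose, streamClosed, streamRemoteClose: // streamRemoteClose added
        s.stateLock.Unlock()
        return 0, ErrStreamClosed
    }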
This broke TestHalfClose, from which I understand that the current behavior is expected, at least in part. But if this is expected, then I'm not sure what the in-band mechanism is for signaling that the other side of the Stream has gone away. Is there an existing API in yamux to tell the other side to stop sending? (For comparison, (*net.TCPConn).Close() from the read side will cause writes to fail with EPIPE.)

Thanks very much for your time.
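For reference, the TCP behavior mentioned in the comparison can be seen with a small standalone sketch (how many writes succeed before the failure depends on kernel buffering and timing):

    package main

    import (
        "log"
        "net"
        "time"
    )

    func main() {
        ln, err := net.Listen("tcp", "127.0.0.1:0")
        if err != nil {
            log.Fatal(err)
        }
        // Server side: accept and immediately close, i.e. stop reading.
        go func() {
            conn, err := ln.Accept()
            if err != nil {
                log.Fatal(err)
            }
            conn.Close()
        }()

        conn, err := net.Dial("tcp", ln.Addr().String())
        if err != nil {
            log.Fatal(err)
        }
        // Client side: keep writing. After the peer's close (and the RST its
        // kernel sends in response to further data), a write fails with a
        // broken pipe / connection reset error rather than blocking forever.
        for i := 0; i < 5; i++ {
            if _, err := conn.Write(make([]byte, 64*1024)); err != nil {
                log.Printf("write %d failed: %v", i, err)
                return
            }
            time.Sleep(100 * time.Millisecond)
        }
        log.Println("all writes were absorbed by kernel buffers")
    }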