Flushing a socket in a Live workflow #2760
In live mode, SRT serves only as a conduit: you receive the data as long as they are "in the flow". To force the receiver to stop reading, while still delivering everything received up to that moment to the application, you would need a feature like a "shutdown" in a particular direction. It would stop the socket from delivering any more data (incoming data packets are ignored), but this would lock a particular sequence number (numbers that precede it might still undergo retransmission, if lost, for example), after which no more packets are accepted into the receiver buffer. In the meantime, the application could read whatever remains in the receiver buffer.

This is moreover hard to implement because of the "congestion-blow" problem: when the sender keeps sending data that are neither read nor ACK-ed, at some point the maximum sequence number for the current buffer state is exceeded, and there is no option other than breaking the connection (which will likely happen even before the application has a chance to read the remaining data). This is how the problem is solved now, and it stands in the way of implementing this "shutdown" feature.

In general, live mode isn't a data transmission. You transmit pictures and sound with the rhythm in which they happen live, not "data". In this case you can only have two main states: either you can transmit and receive them at this moment, and so you do, or you can't, and the connection must be broken. There is one additional intermediate state: if you exceed the current transmission cap, some data may be dropped (at least with default settings). When the connection is broken, there is no more transmission; it is not just that the socket is closed.
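The "congestion-blow" problem above can be sketched as a toy model. This is plain Python, not SRT code; the buffer capacity and the simplified read-then-ACK scheme are invented for illustration only:

```python
# Toy model of the "congestion-blow" problem: if the receiver application
# stops reading (so nothing gets ACK-ed), the span of unacknowledged
# sequence numbers eventually exceeds the receiver buffer, and the only
# remaining option is to break the connection.
# Illustration only — not SRT code; all values are arbitrary.

BUF_CAPACITY = 8  # receiver buffer size, in packets (arbitrary)

def simulate(packets_sent: int, app_reads: bool) -> str:
    last_acked = 0   # highest sequence number ACK-ed so far
    for seq in range(1, packets_sent + 1):
        if seq - last_acked > BUF_CAPACITY:
            return "connection broken"  # buffer span exceeded
        if app_reads:
            last_acked = seq            # app drains the buffer, ACK advances
    return "alive"

print(simulate(100, app_reads=True))    # → alive
print(simulate(100, app_reads=False))   # → connection broken
```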
File mode is completely different: you transmit the data as fast as you can, or slow down as much as necessary to keep a sensible throughput and a low loss rate, and when the sending side closes the socket, you read the remaining data and the connection is broken at the end. The application is free to delay extracting the data from the socket for as long as it wants and the "linger" settings allow. So, these are the problems that would have to be overcome first in order to implement this. The only way to do it is through some kind of "shutdown" or "pause" applied to the socket, which makes incoming data ignored, or even fake-ACK-ed; you read what's left, and then you can close the socket, or even unblock it and read again. Implementing such a thing is a lot of work.
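The drain-after-close behavior of file mode resembles plain TCP stream semantics. A minimal sketch with a local socket pair (ordinary BSD sockets, not SRT itself — file mode mimics this behavior): the receiver still gets everything queued before the close, and only then sees end-of-stream.

```python
# After the sender closes, the receiver can still drain everything that
# was already delivered; EOF (empty recv) arrives only afterwards.
import socket

sender, receiver = socket.socketpair()
sender.sendall(b"remaining pictures")  # data queued before close
sender.close()                          # "sending side closes the socket"

chunks = []
while True:
    data = receiver.recv(4096)
    if not data:                        # EOF: the connection is over
        break
    chunks.append(data)
receiver.close()

print(b"".join(chunks))  # → b'remaining pictures'
```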
Thank you very much for your reply, ethouris.
I thought a "shutdown" control command was already sent when the remote peer closes properly. In any case, every component in a live workflow includes some kind of buffering, SRT included. Pictures and sound are transmitted at one moment and received a moment later, and if the protocol requires some communication with the emitting peer in order to deliver them to the application, then it should also be the protocol's job to handle a shutdown phase. Otherwise, up to one "receive latency" worth of pictures and sound can be dropped by the SRT protocol upon closing. Right? I would find it useful to address this issue, as it makes the live version of the protocol non-deterministic, which is a pain for testing. Consider, for instance, a file being streamed over SRT in live mode. I would also enhance the documentation about linger not being compatible with TSBPD mode. Regards
It's not "that" shutdown; I meant a "shutdown" like the one done on a TCP socket. The buffer you are talking about is indeed there: it's the receiver buffer, and yes, in live mode a part of this buffer is also used to keep packets whose play time hasn't come yet. They stay there until it does, which the TSBPD thread decides. But normally in live mode you expect to keep a very low latency, and if your transmission is going to terminate, it usually doesn't matter whether it terminates now or in the next half a second. A completely different case is when the transmission should end because there's nothing more to send. In such a case you should simply keep the connection open, even without sending any data, long enough for the remaining pictures to be read. How to handle the end of transmission and display it correctly to the user is the application's problem. This means that:
So, I can understand that the application may want to read all the remaining data when the transmission has ended, so that all pictures and sound sent up to the very end are retrieved and played. But if the transmission is unexpectedly terminated, this isn't the case.
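The TCP-style "shutdown" referred to above is a half-close: one side declares "no more data from me", the peer still drains whatever is already in flight, and the other direction keeps working. A sketch with ordinary sockets, showing the semantics SRT's live mode currently lacks:

```python
# Half-close demo: shutdown(SHUT_WR) stops our sending direction only.
# The peer reads everything already queued, then sees EOF, and can still
# send data back. Plain BSD sockets, not SRT.
import socket

a, b = socket.socketpair()
a.sendall(b"last frames")
a.shutdown(socket.SHUT_WR)   # a stops sending; a can still receive

tail = b""
while True:
    data = b.recv(4096)
    if not data:             # b sees EOF only after draining everything
        break
    tail += data

b.sendall(b"goodbye")        # the other direction still works
reply = a.recv(4096)
a.close(); b.close()
print(tail, reply)           # → b'last frames' b'goodbye'
```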
I was only concerned about the case where the connection is closed intentionally. Sadly, in this situation I am the receiving peer, and I believe most tools that implement SRT output won't perform this "quiet period", simply because they expect the protocol to handle it. They send the data, then close the socket right away. Thank you for taking the time to answer the question.
Feature Request
Hello,
We've observed strange behavior at the end of an SRT connection where the data has been received by the receiving socket, but not delivered to the application.
We traced it back to the TSBPD mode, which simply stops doing anything as soon as the socket has been marked as closing.
I would have expected the srto_linger option to be able to handle this, but this is not the case. Additionally, the documentation states that the srto_linger option is disabled when using the "live" transtype mode, because "In this type of workflow there is no point for wait for all the data to be delivered after a connection is closed.". I am not sure I fully understand the point. I find it could be useful to be able to read all the data received by the receiving socket (assuming those data are not late) regardless of whether the sending peer has closed. No user would want the credits to be cut off.
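For reference, srto_linger is modeled on the BSD socket SO_LINGER option, which takes an (l_onoff, l_linger) pair. This sketch sets and reads back a 5-second linger on a plain TCP socket; the SRT equivalent in C would go through srt_setsockopt with SRTO_LINGER. The values here are illustrative, not a recommendation:

```python
# SO_LINGER takes a struct linger: two ints, (l_onoff, l_linger).
# Plain-socket analog of SRT's srto_linger; values are arbitrary.
import socket
import struct

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
lin = struct.pack("ii", 1, 5)  # l_onoff=1 (enabled), l_linger=5 seconds
s.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER, lin)

# Read the option back to confirm what the kernel stored.
onoff, secs = struct.unpack(
    "ii", s.getsockopt(socket.SOL_SOCKET, socket.SO_LINGER, 8))
print(onoff, secs)  # → 1 5
s.close()
```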