Excessive bandwidth if speed exceeds connection speed #52

Open
john-lightstream opened this Issue Jun 27, 2017 · 26 comments

@john-lightstream

john-lightstream commented Jun 27, 2017

Whenever the bandwidth being sent (the packetized data plus potential retransmissions) exceeds the user's maximum connection speed, SRT goes into a runaway cycle: it sends out massive numbers of retransmissions, which in turn cause more packet loss, and it appears to ignore the MAXBW option.

Is there a setting I'm missing that would allow me to handle the case where the bandwidth in use exceeds the maximum bandwidth more gracefully?

For example, a user has a 5 Mbps connection. If the user ends up sending 7 Mbps, the retransmissions caused by the dropped packets skyrocket to 20-40 Mbps outbound. This causes even more packets to be dropped, and eventually all active connections on that machine are dropped. My assumption was that MAXBW would also apply to retransmissions and prevent this behavior.

Setting MAXBW and OHEADBW does not seem to have any effect on this behavior. Everything works extremely well as long as we stay below the user's maximum bandwidth.

Thanks a lot!

These are the main settings we use on client/server:

bool tsbpd_mode = true;
int tsbpd_delay = 400;
bool tlpktdrop_mode = true;

srt_setsockopt(sock.s.srt, 0, SRTO_TSBPDMODE, &tsbpd_mode, sizeof(bool));
srt_setsockopt(sock.s.srt, 0, SRTO_TSBPDDELAY, &tsbpd_delay, sizeof(int));
srt_setsockopt(sock.s.srt, 0, SRTO_TLPKTDROP, &tlpktdrop_mode, sizeof(bool));

We send everything with srt_sendmsg()

@alexpokotilo

alexpokotilo (Contributor) commented Jun 28, 2017

What is SRTO_TSBPDMODE, by the way? When should I use it?

@ethouris

ethouris (Collaborator) commented Jun 29, 2017

@alexpokotilo : ALWAYS. Actually, without setting this option you don't have SRT at all (the extra messages that currently implement the SRT handshake extension are initiated by the party that has SRTO_SENDER set, and only if SRTO_TSBPDMODE is set). This will change with the "integrated handshake", when the SRT handshake extension will be attached to the UDT handshake structures, although even then you will still be able to run without TSBPD mode, or to have it in only one direction.
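
To make that concrete, here is a minimal sketch of what the HSv4 rule above implies on the sending socket (the socket variable and the 120 ms latency are illustrative; the options themselves are the standard ones from srt.h):

// HSv4 sender-side sketch: the SRT handshake extension (HSREQ/KMREQ) is only
// initiated by the party that has SRTO_SENDER set, and only if TSBPD mode is on.
bool yes = true;
int latency_ms = 120; // illustrative value, see the latency discussion further below
srt_setsockopt(sock, 0, SRTO_SENDER, &yes, sizeof(bool));
srt_setsockopt(sock, 0, SRTO_TSBPDMODE, &yes, sizeof(bool));
srt_setsockopt(sock, 0, SRTO_TSBPDDELAY, &latency_ms, sizeof(int));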

@ethouris

ethouris (Collaborator) commented Jun 29, 2017

@john-lightstream : I'll let others speak to the details (perhaps there is something to look into regarding the bandwidth measurement), but in general, keep in mind that this is a tool for sending a live stream. In short, that means either you get timely delivery or the whole thing doesn't make sense at all. I don't quite understand whether you just set MAXBW too high (in that case SRT should be able to handle it) or you send more data than the network can handle. If it's the latter, you will simply end up with a swelling input buffer on the sender side. Various things may happen in response to that situation, but none of them is good: they amount either to dropping data from the transmission or to breaking the connection.

ethouris added the question label Jun 29, 2017

@john-lightstream

john-lightstream commented Jun 29, 2017

@ethouris

ethouris (Collaborator) commented Jun 29, 2017

@john-lightstream : I understand what you're trying to do, but when the application starts dropping packets it's already far too late to do anything. If you want to control bandwidth usage on the fly, the best thing you can do is use the statistical information to see what is going on with the transmission. The problem with a swelling input buffer is that, once it happens in your application, the data sending process slows down, which leads to inaccurate timestamps in the packets on the input side. The normal approach for sending a live stream is that the time at which data is submitted for sending is synchronized with the time in the stream's timestamps. This only works if there are no other delays, and when your input buffer swells you get delays (in blocking mode) on the call to srt_sendmsg, which shifts the time at which that function is called next. When that happens, the timestamps on the input side are already inaccurate.

That's why you should not let your input buffer swell; monitoring the statistics is the best way to keep the problem from growing and eventually leading to data loss or a broken connection.
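
To make "monitoring the statistics" concrete, here is a minimal sketch using srt_bstats (the fields shown exist in SRT_TRACEBSTATS; the threshold and the reaction are illustrative assumptions, not part of the SRT API):

// Illustrative health check: poll interval statistics and back off at the
// application level (lower the encoder bitrate, drop frames) when the link
// shows loss/retransmission or the free space in the send buffer shrinks.
void check_link_health(SRTSOCKET sock)
{
    const int kLowSndBufBytes = 512 * 1024; // illustrative threshold, not an SRT constant

    SRT_TRACEBSTATS perf;
    if (srt_bstats(sock, &perf, 1 /* clear interval counters */) == SRT_ERROR)
        return;

    // pktRetrans: retransmissions in this interval;
    // byteAvailSndBuf: free space left in the send buffer.
    if (perf.pktRetrans > 0 || perf.byteAvailSndBuf < kLowSndBufBytes)
    {
        // React here before the input buffer starts to swell.
    }
}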

@alexpokotilo

alexpokotilo (Contributor) commented Jun 29, 2017

@ethouris, do you mean that if I don't set SRTO_TSBPDMODE the connection is binary compatible with UDT and no SRT-specific logic works?

@ethouris

ethouris (Collaborator) commented Jun 29, 2017

@alexpokotilo : worse. The connection isn't fully binary compatible with UDT (we have taken some of the bits of the PH_MSGNO field in the header for special flags), and some of the SRT-specific logic may still work (I can't tell you for sure how much has changed here relative to UDT); you just won't have the traced latency, nor any of the extra features that are turned on by flags exchanged in the SRT handshake extension. With encryption it's even worse: the sending side will encrypt the packets, but the receiving side can't decrypt them.

This will be different with HSv5 (the integrated handshake). You will be able to turn TSBPD off on a particular party, but the SRT handshake extension will still be exchanged, and that setting will only matter if the party is going to receive data or wants to set a minimum latency for the peer receiver. I'm almost done with this work, just fighting the rendezvous part now...

@alexpokotilo

alexpokotilo (Contributor) commented Jun 29, 2017

"With encryption it's even worse - the sending side will encrypt the packets, but the receiving side can't decrypt them."

I don't set TSBPD mode in my ticket #26, yet the receiver does decrypt messages after some time.
So should I set this mode on both the sender and the receiver to make it work correctly?
What does the TSBPD acronym stand for?
I would rather read a solid explanation, if one exists, and stop bothering you and the others. I really need to understand what it is.
Thanks a lot!

@ethouris

ethouris (Collaborator) commented Jun 29, 2017

Ah, OK. In HSv4 the initiator of the SRT handshake extension is the sender, and setting the sender ensures that the HSREQ and KMREQ are actually sent (and responded to). If TSBPD is not set, the corresponding flags are simply cleared in the information exchanged in this process.

TSBPD stands for Timestamp-Based Packet Delivery. In general it is a mechanism that applies a delay to each packet on the delivery side, so that the relative time at which packets are delivered to the application matches the relative time at which they were submitted for sending by the sender application. This delay costs some extra "wasted time" if a packet reaches the destination quickly (and is therefore well ahead of its "time to play"), but when a packet is lost it provides extra time to recover it before its "time to play" comes. This greatly increases the chance that a packet will be "played" (delivered to the application) with the same relative timing towards the other packets as on the sender side.

Of course, in extreme cases this means that, as a "last resort" (the time has come for the next packet to play and the previous one still hasn't been recovered), the lost packet is declared lost so as not to compromise the timely delivery of the packets already received (TLPKTDROP, Too-Late Packet Drop). Two other mechanisms, NAKREPORT (periodic NAK report: the loss report is sent again and again, with a small time step, until the packet is recovered) and FASTREXMIT (fast retransmission: the sender retransmits packets that are not reported as lost but are still not acknowledged on time), are used to decrease the probability that TLPKTDROP ever needs to be used.
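
Expressed as a conceptual sketch (this is not the library's internal code; the names are illustrative), the TSBPD delivery rule is roughly:

// Each data packet carries the sender-side timestamp relative to the start of
// the connection. The receiver maps that to local time and adds the configured
// latency; the packet is handed to the application only once that time passes.
int64_t tsbpd_delivery_time_us(int64_t pkt_timestamp_us, // from the packet header
                               int64_t time_base_us,     // local time of sender time zero
                               int64_t latency_us)       // the negotiated TSBPD latency
{
    return time_base_us + pkt_timestamp_us + latency_us;
}

// If a missing packet's delivery time passes before it has been recovered,
// TLPKTDROP gives up on it so that the packets already received still play on time.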

@heidabyr

heidabyr (Collaborator) commented Jun 29, 2017

Additional comment from Jean, the main developer behind it: "TSBPD stands for Timestamp-Based Packet Delivery and is the main addition to UDT to handle real-time stream. It has nothing to do with encryption. If you have heavy packet loss, the initial keying material control messages were probably lost and the receiver does not know how to decrypt. TSBPDMODE must be set on both sides to be active."

@heidabyr

heidabyr (Collaborator) commented Jun 29, 2017

Ok, there seems to be some confusion around the default settings! Glad you brought that to my attention, @alexpokotilo. I'll start a review process on this.

@ethouris

ethouris (Collaborator) commented Jun 29, 2017

@john-lightstream : OK, after some consultation I have a question, as your test description wasn't clear enough to me.

There are actually three values concerning the bandwidth:

  • the NETWORK bandwidth (the maximum bandwidth capability of the network)
  • the DATA bandwidth (the actual data sending speed)
  • the MAXBW bandwidth (as set by SRTO_MAXBW)

My question is: what were the values of all three in the test in which you observed the data transfer worsening over time until the connection finally dropped? I understand that you had DATA bandwidth = 7 Mbps and NETWORK bandwidth = 5 Mbps; what did you set as MAXBW, and how do things change with different settings of MAXBW?

AFAIR the expected behavior is that MAXBW limits the speed of the UDP packet transfer, and that's more or less all. If you then try to send more data than MAXBW allows, the result should be that data congests in the input buffer, not that the network is flooded with retransmissions. That's why I'm interested in whether you set MAXBW below or above the network's capabilities.

@john-lightstream

john-lightstream commented Jun 29, 2017

@ethouris

Here is a test I just ran:

NETWORK = 500 kbit/s (62500 bytes/s)
DATA = 1000 kbit/s (125000 bytes/s)
MAXBW = 500 kbit/s (62500 bytes/s)

In this situation, within 10 seconds SRT is attempting to send 12 Mbps, at which point the SRT connection is lost due to a timeout, along with most other connections on the machine. My expectation was high packet loss, but within the bounds of the MAXBW setting. Would the best way to monitor this be to check the stats very actively, looking at packet loss in the period? Unfortunately it has some momentum (after shutting down the connection it takes a while for it to stop trying to send the data and close gracefully).

QUEUE -- 144047 / 12058624
[2017-06-29 12:29:37.981] [info] Average sendmsg per second 80.4
[2017-06-29 12:29:37.982] [info] Queue 39.99 fps (603.5 kbps), SRT actual (1611.0 kbps), transport send queue fullness 1.2, max frame size 75017
QUEUE -- 752838 / 12058624
[2017-06-29 12:29:42.987] [info] Average sendmsg per second 102.0
[2017-06-29 12:29:42.988] [info] Queue 49.97 fps (790.1 kbps), SRT actual (7434.7 kbps), transport send queue fullness 6.2, max frame size 78599
QUEUE -- 1225628 / 12058624
[2017-06-29 12:29:47.998] [info] Average sendmsg per second 105.4
[2017-06-29 12:29:47.999] [info] Queue 53.34 fps (817.7 kbps), SRT actual (12339.0 kbps), transport send queue fullness 10.2, max frame size 78486

@alexpokotilo

alexpokotilo (Contributor) commented Jun 30, 2017

Hi,
I get that SRTO_TSBPDMODE should be set on both sides.
What about:
SRTO_TSBPDDELAY
TLPKTDROP
Should I set these parameters only on the receiver, or on the sender too?
BTW, I found the following code in core.cpp:

case SRT_TSBPDMAXLAG:
    //Obsolete
    break;

This means SRTO_TSBPDMAXLAG is obsolete, but srt.h still contains a description of it and says nothing about it being obsolete. Should I file a new pull request and add a comment in srt.h that this parameter is obsolete?

@ethouris

ethouris (Collaborator) commented Jun 30, 2017

@alexpokotilo : OK, I can excuse myself with the fact that I wrote this srt.h very quickly and didn't verify all the information inside. The important thing about creating this API was that people would use it, and any further changes we make must maintain backward compatibility. TSBPDMAXLAG was probably some feature that was later removed, or was considered not a good idea.

The TLPKTDROP and RCVNAKREPORT options are on by default, although TLPKTDROP is ineffective without TSBPDMODE.

What you need to set is SRTO_TSBPDMODE on both sides, and SRTO_TSBPDDELAY must have some reasonable value (we believe it's 125). We are currently considering making these two values default.

@heidabyr

heidabyr (Collaborator) commented Jun 30, 2017

Just to clarify: the SRT latency setting (set through the SRTO_TSBPDDELAY parameter) should be set to 4 times the RTT of the link; for low-latency optimization on good-quality links it can go down to a minimum of 2.5 times the RTT, but never lower than 120 ms.
Example: our Rendsburg (Germany) office to Chicago has an RTT of about 110 ms, so we would use a latency setting of 440 ms when sending SRT streams between the offices.
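
As a sketch of that rule of thumb (the helper name and the rounding are illustrative; only the 2.5-4 x RTT rule and the 120 ms floor come from the comment above):

// latency = multiplier * RTT, where multiplier is 4 for typical links and can
// go down to 2.5 on good-quality links, but the result never drops below 120 ms.
int srt_latency_from_rtt_ms(double rtt_ms, double multiplier)
{
    double latency = rtt_ms * multiplier;
    return latency < 120.0 ? 120 : (int)(latency + 0.5);
}

// Example from above: 110 ms RTT * 4 = 440 ms, which is then set as SRTO_TSBPDDELAY.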

@ethouris

ethouris (Collaborator) commented Jun 30, 2017

@john-lightstream : OK, this sounds weird, although I would suspect that if you send data very close to the network's limit, it's easy to cross the border occasionally; once that happens you get the first packet loss and retransmission, which actually increases network usage, and that creates a positive feedback loop, driving network usage and the loss rate up very quickly.

We'll run more tests around this once we're done with the current high-priority work.

ethouris added the pending label Jun 30, 2017

@alexpokotilo

alexpokotilo (Contributor) commented Jun 30, 2017

I'm sorry, but should I set SRTO_TSBPDDELAY on both sides or only on one side? What if the values on the sending and receiving sides are different?

@alexpokotilo

alexpokotilo (Contributor) commented Jun 30, 2017

@ethouris I have filed a pull request to explicitly mark SRTO_TSBPDMAXLAG as obsolete in srt.h:
#53
Please answer my previous question:

Should I set SRTO_TSBPDDELAY on both sides or only on one side? What if the values on the sending and receiving sides are different?

@ethouris

ethouris (Collaborator) commented Jun 30, 2017

OK, I need to clarify some things:

  1. TSBPDMAXLAG - thanks, this should indeed be removed.
  2. SRTO_TSBPDDELAY is in the process of being renamed to SRTO_LATENCY (currently these should be two symbols with the same value).

So, in the current implementation, which supports the unidirectional mode, the latency value can be set on both the sender and the receiver side. The effective latency will be the maximum of these two values. So you can safely set SRTO_LATENCY to zero on one side, but SRTO_TSBPDMODE must still be set on both.
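
A minimal sketch of that two-sided setup (the socket names are illustrative, and in practice the two sockets live in different applications; SRTO_LATENCY is assumed to be available as the alias mentioned above, otherwise SRTO_TSBPDDELAY carries the same value):

// Both parties: TSBPD mode must be enabled.
bool tsbpd = true;
srt_setsockopt(sender_sock, 0, SRTO_TSBPDMODE, &tsbpd, sizeof(bool));
srt_setsockopt(receiver_sock, 0, SRTO_TSBPDMODE, &tsbpd, sizeof(bool));

// Latency can be requested by either side; the effective value is the maximum
// of the two, so setting zero on the sender simply defers to the receiver.
int sender_latency_ms = 0;
int receiver_latency_ms = 440; // e.g. 4 x 110 ms RTT, as in the earlier example
srt_setsockopt(sender_sock, 0, SRTO_LATENCY, &sender_latency_ms, sizeof(int));
srt_setsockopt(receiver_sock, 0, SRTO_LATENCY, &receiver_latency_ms, sizeof(int));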

@john-lightstream

john-lightstream commented Jun 30, 2017

@ethouris
Found the issue. I was mistakenly using an int for maxbw instead of int64_t, so it was picking up random data that was typically negative, which caused it to go into 'infinite (30 Mbps)' mode. That's why I kept seeing it max out around that level. Thanks for the prompt attention, and sorry for the false positive.

It might be useful to document the data types expected in srt.h. I can do a PR that details each type in a comment if that helps, or add a markdown file that expands the UDT documentation and adds the SRT-specific options.
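
For anyone hitting the same mistake, a minimal sketch of the corrected call (assuming <srt/srt.h> is included and sock is a connected SRTSOCKET; the key point is the 64-bit type):

// SRTO_MAXBW takes an int64_t in bytes per second. Passing the address of a
// plain int makes SRT read past the variable and can silently break the cap.
int64_t maxbw = 62500; // 500 kbit/s expressed as bytes per second
if (srt_setsockopt(sock, 0, SRTO_MAXBW, &maxbw, sizeof(maxbw)) == SRT_ERROR)
{
    // handle the error, e.g. log srt_getlasterror_str()
}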

@ethouris

ethouris (Collaborator) commented Jul 3, 2017

Well, good point. That's unfortunately part of the UDT4 legacy, where the setsockopt function was modeled on the POSIX function of the same name, including the completely ignored level parameter. The code definitely lacks a check that at least the declared buffer size corresponds to the required type. The facilities I provided in socketoptions.hpp are actually much safer, because the correct type is always used there: it converts the string to the appropriate type by itself.

@thepocho

thepocho commented Oct 13, 2017

@john-lightstream

Hello, how are you?

I am having a problem and I need to configure SRTO_MAXBW.
On the command line, where do you configure SRTO_MAXBW?

Thanks for your help.

@ethouris

ethouris (Collaborator) commented Oct 13, 2017

On the command line it's possible with an application that uses URI parsing, like stransmit.

In the SRT URI you add the options to be set on the socket as "attributes". Aside from a few "special options" interpreted by the application itself, all SRT options take their names by removing the SRTO_ prefix and lowercasing the rest. For example:

./stransmit udp://239.255.254.253:1234 srt://212.121.112.1:1212?maxbw=3000000
@thepocho

thepocho commented Oct 13, 2017

Thank you.
One last question: should concatenated parameters work?
For example:

./stransmit udp: //239.255.254.253:1234 srt: //212.121.112.1: 1212? maxbw = 3000000? latency = 1000

Would this work?

Thanks

@ethouris

ethouris (Collaborator) commented Oct 13, 2017

This is the standard URI format: scheme://host:port?param1=value1&param2=value2&param3=value3

NO SPACES in between!

The URI standard also defines some other possible items in the format, but these are the only ones used here.
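
Applied to the earlier example, the corrected invocation would look roughly like this (quoting the URI so the shell does not interpret the &; the addresses and values are the ones from the question):

./stransmit udp://239.255.254.253:1234 "srt://212.121.112.1:1212?maxbw=3000000&latency=1000"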
