Network condition: network bandwidth < video bitrate, either due to a bandwidth limitation or a high drop rate.
When the send buffer is almost full, the app blocks in epoll wait or sendmsg, so the estimated input bandwidth falls below the video bitrate. Then maxbw = inputbw * (1 + overhead) is lowered again due to the small inputbw. It's a vicious circle.
That's the case of estimated inputbw < video bitrate. When the network condition recovers to normal, the application can drain the data it accumulated in its own buffer, so it calls sendmsg in quick succession. Then the estimated inputbw can be higher than the video bitrate. For example, after removing the bandwidth limitation, we can observe a higher sending rate.
I have tried to stop input bandwidth updates when the send buffer is almost full.
const bool almost_full = m_pSndBuffer->getCurrBufSize() > m_config.iSndBufSize * 0.9;
// Get auto-calculated input rate, Bytes per second
const int64_t inputbw = m_pSndBuffer->getInputRate();
/*
 * On a blocked transmitter (tx full) and until the connection closes,
 * the auto input rate falls to 0, but there may still be a lot of packets to retransmit.
 * Calling updateBandwidth with 0 sets maxBW to the default BW_INFINITE (1 Gbps)
 * and the send rate skyrockets for retransmission.
 * Keep the previously set maximum in that case (inputbw == 0).
 */
if (!almost_full && inputbw > 0)
    m_CongCtl->updateBandwidth(0, withOverhead(std::max(m_config.llMinInputBW, inputbw))); // Bytes/sec
It works when the network condition goes bad, but doesn't work when the network condition recovers to normal, since it can't break the "small output rate, small input rate" balance.
This is the expected behavior:
When network bandwidth < video bitrate, use what we can get, just like TCP.
When the network condition recovers to normal, send fast but don't introduce too much congestion.
Thanks for reporting. The input BW estimation is in the backlog of the planned improvements.
Also correlates with #1910 for the behavior improvements in (heavily) congested networks.
As a note, there is also the SRTO_MININPUTBW socket option, which does not let the estimate fall below a certain minimum.
E.g. if you know the target bitrate of the encoder, the value can be used as the minimum.
Thank you for the information! I have tried the min_input_bw option. The drawback is that it doesn't solve the congestion problem.
To consider:
What if the input bitrate is estimated as Bsnd / Tsnd where Bsnd is the number of bytes in the sender buffer, and Tsnd is the sender buffer timespan (timestamp of the latest packet minus timestamp of the oldest packet in the buffer).
A good, efficient idea for a smoothed input rate. But the lag on the instant input rate would depend on RTT and may be affected by Too-Late Packet Drop. With the send period based on that estimate, a delayed reaction to a quick rise may cause even more drops.