ABR: Try reducing the frequency of potential quality switches #1237
EDIT: After some tests, I decided to also limit switches in the buffer-based algorithm (based on BOLA) through special mechanisms.
We noticed that some end users of the RxPlayer experienced a lot of quality switches: e.g. the video quality rose and fell frequently, within a few seconds.
In most cases, this led to a poor experience; on some devices, such as some Samsung TVs, it also seemed to create visual glitches.
Consequently, we decided to propose a development we had long thought about: trading some perceived quality accuracy for higher quality stability, while staying more on the pessimistic side to prevent rebuffering.
Here are the exact changes proposed in this PR:
Every time we raise the quality, we now do it non-urgently, meaning that we wait for the segment requests in the lower quality to finish before starting to load segments in the better one. This is opposed to an "urgent" switch, where pending requests are interrupted and new ones started immediately; urgent switches are still performed when the quality has to be lowered.
Pros: We avoid the risk of slowly emptying the buffer through multiple urgent switches, each preventing a request from finishing. We ensure that data in a very sustainable quality is pushed and playable before going into riskier territory.
Cons: Quality rises may be visible less rapidly.
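To illustrate the first change, here is a minimal sketch of the urgency decision (the function and parameter names are illustrative, not the actual RxPlayer internals):

```typescript
// Decide whether a quality switch should interrupt pending segment
// requests ("urgent") or wait for them to finish ("non-urgent").
function isUrgentSwitch(currentBitrate: number, newBitrate: number): boolean {
  // Lowering the quality stays urgent: we interrupt pending requests
  // immediately to reduce the rebuffering risk.
  // Raising the quality is now always non-urgent: we let the
  // lower-quality requests finish first.
  return newBitrate < currentBitrate;
}
```

With this rule, an upward switch only takes effect once the lower-quality segments already in flight have been pushed, so the buffer always contains playable data before the riskier quality is tried.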
We use a "factor" that is multiplied with our network bandwidth estimate to decide which quality we should play. Previously its value depended on the size of the currently available buffer and ranged between `0.72` and `0.8`. It is now always `0.72`.
Pros: We limit the risk of a quality switch caused by the factor itself changing (due to the buffer being close to starvation). We are more pessimistic in general, which usually means fewer rebuffering risks.
Cons: Being more pessimistic in general also means we might play a lower quality when a higher one could have been played, if our bandwidth is not considered sufficiently high.
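The factor-based selection can be sketched as follows (a simplified model, assuming a flat list of available bitrates; names are illustrative):

```typescript
// The bandwidth estimate is multiplied by this constant factor.
// Previously it varied between 0.72 and 0.8 depending on the buffer size.
const BANDWIDTH_FACTOR = 0.72;

// Pick the highest bitrate playable under the pessimistic estimate,
// falling back to the lowest available one if none fits.
function chooseBitrate(
  bandwidthEstimate: number,
  availableBitrates: number[]
): number {
  const usable = bandwidthEstimate * BANDWIDTH_FACTOR;
  const playable = availableBitrates.filter((b) => b <= usable);
  return playable.length > 0
    ? Math.max(...playable)
    : Math.min(...availableBitrates);
}
```

Because the factor no longer moves with the buffer level, the chosen bitrate only changes when the bandwidth estimate itself changes, removing one source of oscillation.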
We have logic detecting a sudden fall in bandwidth. Previously, if we did not have much data left in the buffer AND the pending request for the next needed segment led us to think that we would rebuffer for at least 2 seconds (there are complex conditions involved; we basically rely on progress reporting through the `XMLHttpRequest` and `fetch` web APIs here), we quickly re-calculated a bandwidth estimate based on that request's progress only.
Now, we also check that the request started at least `duration of the segment * 1.5` seconds ago (clamped between `3s` and `12s`), and that the estimated rebuffering time is at least `2.5s`.
Pros: Less risk of falling in quality due to a single request taking longer than expected, which previously led to a very poor bandwidth calculation.
Cons: A real sudden fall in bandwidth is detected more slowly.
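The new gating conditions before re-estimating bandwidth from a single in-progress request can be sketched like this (a hedged model of the conditions described above; names and signature are illustrative):

```typescript
// Should we re-estimate bandwidth from this single request's progress?
function shouldReEstimate(
  requestElapsed: number,       // seconds since the request started
  segmentDuration: number,      // duration of the requested segment, in seconds
  estimatedRebufferTime: number // predicted rebuffering time, in seconds
): boolean {
  // The request must have been pending for at least 1.5x the segment's
  // duration, clamped between 3 and 12 seconds.
  const minElapsed = Math.min(Math.max(segmentDuration * 1.5, 3), 12);
  // And the predicted rebuffering must now reach at least 2.5 seconds
  // (previously 2 seconds, with no elapsed-time condition).
  return requestElapsed >= minElapsed && estimatedRebufferTime >= 2.5;
}
```

The clamp keeps the elapsed-time threshold meaningful for both very short and very long segments: a 1-second segment still requires 3 seconds of pending time, while a 10-second segment caps out at 12 rather than 15.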
I also put in the config the buffer levels at which we can enter and exit the buffer-based logic, to allow future testing of different values inside applications and see if it has a noticeable effect.
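For context, such a config fragment could look roughly like the following; the key names and values here are hypothetical, not the actual RxPlayer config:

```typescript
// Hypothetical shape of the newly-exposed config values.
// Buffer levels (in seconds) at which the buffer-based (BOLA-inspired)
// algorithm is entered and exited, made configurable so applications
// can experiment with different thresholds.
const ABR_CONFIG = {
  ENTER_BUFFER_BASED_ALGO: 10, // illustrative value
  EXIT_BUFFER_BASED_ALGO: 5,   // illustrative value
};
```

Keeping the enter threshold above the exit threshold adds hysteresis, which itself avoids oscillating between the bandwidth-based and buffer-based algorithms.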