ThroughputRule wrong calculation and duplicate loads of same fragment #1514
The mismeasurements in the throughput rule are a side effect of the duplicated loads, the second of which is served from cache. The duplicated loads come from this line in the code.
Also, we should ignore cached chunks in the average throughput measurement in the ABR rules. Maybe @LloydW93 has some code for this?
index 894 startTime 1469745816
@sebastien4 If you would like a quick fix, let's try this: `return !isNaN(req1.index) && (req1.startTime === req2.startTime) && (req1.adaptationIndex === req2.adaptationIndex);` In addition to fixing the duplicate loads, I would like to verify that the ABR rules normalize; you should not see traces like this. When I say quick fix: there may be a better fix in the DashHandler, but we are looking at major changes there, so that work will resolve this issue altogether. I will make sure it does.
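The suggested condition can be sketched as a standalone comparison helper (the function name `isSameFragmentRequest` is mine for illustration, not actual dash.js code):

```javascript
// Sketch of the suggested quick fix: two fragment requests count as the
// same request only when the index is a valid number AND both startTime
// and adaptationIndex match. Helper name is hypothetical.
function isSameFragmentRequest(req1, req2) {
    return !isNaN(req1.index) &&
        (req1.startTime === req2.startTime) &&
        (req1.adaptationIndex === req2.adaptationIndex);
}
```

Under this stricter condition, a second request for an already-loaded fragment (same index, same startTime, same adaptation) compares equal and can be suppressed, instead of being fetched again.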
@dsparacio Indeed, no more duplicate loads of chunks, and the ThroughputRule seems to work fine now. Let's see how this behaves in production. Thank you.
Fixed with PR #1523
@AkamaiDASH Still getting complaints from our users about rebuffering, whatever their available bandwidth. I'm not sure the quick fix solved all rebuffering issues, but it at least solved the duplicate loads. Something is going wrong with the ABR algorithm on our live streams (liveDelay and stableBufferTime set to 30).
@sebastien4 I need more info: are you still seeing estimations that are way off? It could be the cache issue @LloydW93 spoke of. I will look to add a simple block of code that prevents fragments served from cache from skewing the average throughput.
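One simple way such a filter could work (a sketch under my own assumptions, not the actual dash.js change): treat a fragment whose measured throughput is implausibly high as cache-served and exclude it from the average. The threshold value is an assumption for illustration.

```javascript
// Sketch: exclude likely cache-served fragments from the throughput average.
// A download far faster than any plausible network speed is assumed cached.
function isLikelyCached(sample, minPlausibleKbps) {
    const measuredKbps = (sample.bytes * 8) / sample.downloadTimeMs; // bits/ms = kbit/s
    return measuredKbps > minPlausibleKbps;
}

function averageThroughputKbps(samples, minPlausibleKbps) {
    const real = samples.filter(s => !isLikelyCached(s, minPlausibleKbps));
    if (real.length === 0) return NaN;
    return real.reduce(
        (sum, s) => sum + (s.bytes * 8) / s.downloadTimeMs, 0
    ) / real.length;
}
```

For example, a 125 kB fragment that arrives in 1 ms measures as 1,000,000 kbit/s and is dropped, while the same fragment over 1000 ms measures as 1000 kbit/s and is kept.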
No, the estimations are correct. There must be something wrong in my MPD settings.
Closing this issue. Tracking known throughput issues in new issue #1530
Environment
As discussed with @LloydW93 on Slack, we are experiencing rebuffering issues on slow connections around 2.5 Mbps, while the two highest bitrates available are 1.6 and 3 Mbps.
Attached is a typical ThroughputRule output from the logs (rebuffering.txt).
The calculation is sometimes completely wrong.
I can also see duplicate loads of the same fragment in the Chrome network tab (see duplicate.png).
This happens on live streams; I have not checked VOD streams yet.