Workaround fix to handle 416 #1071
I think this question is not framed the same way I think about it. My mental model:
In #1032 (comment) you said:
I'm not 100% sure what you mean by the 'actual total length' of your content in this example, whether it's …

Looking at the code, it's not obvious to me that we do assume the … (see `media/libraries/datasource/src/main/java/androidx/media3/datasource/HttpUtil.java`, lines 74 to 80 at commit `b930b40`).

The conclusion in #1032 seems to be that the manifest of this media is malformed, which is what led to these out-of-bounds requests? If that's the case, there's little ExoPlayer can do - the media has resulted in us requesting data that the server can't satisfy. I'm not really sure what different behaviour you want from ExoPlayer here - if the …
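As background for the `HttpUtil` discussion above, deriving a resource's total length from a 206 response's `Content-Range` header looks roughly like this. This is an illustrative sketch only, not the actual `HttpUtil.java` code; the class and method names here are made up.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Sketch: recover the total resource length from a Content-Range header such
// as "bytes 0-999/108421975". A trailing "/*" means the server did not report
// a total length, which leaves the client without an upper bound and is one
// way a player can end up issuing open-ended ("unbounded") range requests.
public class ContentRangeLength {
    private static final Pattern CONTENT_RANGE =
        Pattern.compile("bytes (\\d+)-(\\d+)/(\\d+|\\*)");

    /** Returns the total length, or -1 if it is unknown or unparseable. */
    public static long totalLength(String contentRange) {
        if (contentRange == null) return -1;
        Matcher m = CONTENT_RANGE.matcher(contentRange);
        if (!m.matches() || "*".equals(m.group(3))) return -1;
        return Long.parseLong(m.group(3));
    }

    public static void main(String[] args) {
        System.out.println(totalLength("bytes 0-999/108421975")); // 108421975
        System.out.println(totalLength("bytes 0-999/*"));         // -1 (unknown)
    }
}
```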
Hey @icbaker, thank you so much for looking into it; I really appreciate the detailed response. To answer your questions:

A major question is what kind of situation causes ExoPlayer to make unbounded requests. It's clear that it happens "when ExoPlayer does not know the length", but what causes that?
- Does ExoPlayer rely on the MPD to determine the length, on the sidx atom, or on the `Content-Range` header? If the `Content-Range` header, is it simply the first one returned, or does it continue to be updated with each 206 response? (If most were correct, but one was missing or broken, would that confuse it?)
- Would MPD corruption do this? What kind of corruption would cause this but not break playback?
- I am certain that in some cases CloudFront is "hanging up" on ExoPlayer without sending any response, eventually leading to a socket timeout. This is a situation we're working on, but could it cause this kind of problem? (My attempts to trigger it intentionally have not reproduced the problem.)
- We believe this occurs most with users who have a bad network connection, so it is possible that segment downloads are interrupted. Would that cause ExoPlayer to lose track of the length? (How does it keep track of this between launches?)
- I can reproduce our 416 errors by intentionally replacing the `Content-Range` length with …
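The 416 pattern can also be exercised without a CDN. Below is a minimal local repro sketch, an assumed setup using the JDK's built-in `com.sun.net.httpserver.HttpServer` rather than the reporter's actual environment: a strict server answers 416 as soon as the range start is past the last byte, which is exactly the one-byte-past-the-end pattern described in this issue.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.URL;

// Hypothetical repro harness: a 10-byte resource; any open-ended Range whose
// start is >= the length is unsatisfiable and gets a 416 (RFC 7233 semantics).
public class Repro416 {
    static final int LENGTH = 10;

    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/media", exchange -> {
            String range = exchange.getRequestHeaders().getFirst("Range");
            long start = Long.parseLong(range.replace("bytes=", "").replace("-", ""));
            if (start >= LENGTH) {
                // Unsatisfiable range: report the actual length back in a 416.
                exchange.getResponseHeaders().set("Content-Range", "bytes */" + LENGTH);
                exchange.sendResponseHeaders(416, -1);
            } else {
                byte[] body = new byte[(int) (LENGTH - start)];
                exchange.getResponseHeaders()
                        .set("Content-Range", "bytes " + start + "-" + (LENGTH - 1) + "/" + LENGTH);
                exchange.sendResponseHeaders(206, body.length);
                try (OutputStream os = exchange.getResponseBody()) { os.write(body); }
            }
        });
        server.start();
        int port = server.getAddress().getPort();

        System.out.println(request(port, 9));  // last valid byte -> 206
        System.out.println(request(port, 10)); // one byte past the end -> 416
        server.stop(0);
    }

    static int request(int port, long start) throws Exception {
        HttpURLConnection c = (HttpURLConnection)
            new URL("http://localhost:" + port + "/media").openConnection();
        c.setRequestProperty("Range", "bytes=" + start + "-");
        int code = c.getResponseCode();
        c.disconnect();
        return code;
    }
}
```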
This is basically what I answered above: because the …

This can also be phrased as "when does ExoPlayer pass a …"

I'm afraid I'm not familiar enough with the DASH or caching parts of ExoPlayer to answer this question confidently - possibly @marcbaechinger can help more here.

I agree; it's an equivalent question. My expectation is that this happens somewhere between ExoPlayer itself and its cache, not inside the `DataSource`.
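To illustrate the suspected cache-side off-by-one, here is a minimal sketch of hypothetical bookkeeping (not `SimpleCache`'s actual code): a partial download is resumed at position = bytes already cached, and if the resource's length is unknown, the guard against requesting past the end cannot fire.

```java
// Sketch of the suspected failure mode between the player and its cache:
// resuming at exactly bytesCached. When the resource is in fact fully cached
// but its length is unknown (-1), the resume request starts one byte past the
// end, producing the "bytes=108421975-" pattern seen in the logs.
public class CacheResume {
    /** Next open-ended Range header when resuming, or null if nothing remains. */
    public static String resumeRange(long bytesCached, long knownLength) {
        if (knownLength != -1 && bytesCached >= knownLength) {
            return null; // fully cached: no request should be issued
        }
        return "bytes=" + bytesCached + "-";
    }

    public static void main(String[] args) {
        long length = 108421975L;
        // Correct bookkeeping: fully cached, so no request is issued.
        System.out.println(resumeRange(length, length));
        // Length unknown: the guard cannot fire, and the out-of-bounds
        // request is issued anyway.
        System.out.println(resumeRange(length, -1));
    }
}
```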
@icbaker thanks for looking into it. When you say "layers up to the player above", do you mean …

Pushing up for awareness - @marcbaechinger, would you be able to help us here?
Environment Used:
- Media3: 1.1.1 / ExoPlayer: 2.19.1

Questions:
- … SHOULD, not MUST, return `Content-Range` in the header; however, some endpoints do not respect the semantics.

Background:
- … `Content-Range` in the header.
- Request/Headers/`Range=bytes=108421975-`, which doesn't contain a length, and the position here (108421975) is one byte more than the actual total length. We had multiple records with 416, and they all have this same pattern.
- … `CacheWriter` and `DefaultHttpDataSource` provided by ExoPlayer …
- … `SimpleCache` problem, but I see other clients also face a similar caching issue …
- … `DataSpec` extends beyond the end of the underlying resource.
- … `startingByte` as one byte more than `contentLength` …
- … 8s … and still not able to repro 416.

Important Points to note:
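Given the "workaround" framing in the issue title, one possible direction is sketched below. This is a hypothetical shape only, not Media3's actual API and not necessarily the fix that was merged: a 416 that arrives exactly at `position == contentLength` carries no missing data, so it can be treated as a benign end-of-stream rather than a fatal error.

```java
// Hypothetical workaround sketch: map an off-by-one 416 at exact end-of-file
// to end-of-stream (-1) so downloads can finish instead of getting stuck.
public class Handle416 {
    static final int HTTP_RANGE_NOT_SATISFIABLE = 416;

    /** Returns the remaining bytes to read, or -1 for end-of-stream. */
    public static long openOrEndOfInput(int responseCode, long position, long resourceLength) {
        if (responseCode == HTTP_RANGE_NOT_SATISFIABLE && position == resourceLength) {
            return -1; // nothing left to read: benign off-by-one, not an error
        }
        if (responseCode == HTTP_RANGE_NOT_SATISFIABLE) {
            throw new IllegalStateException("Unsatisfiable range not at EOF: " + position);
        }
        return resourceLength - position;
    }

    public static void main(String[] args) {
        long length = 108421975L;
        System.out.println(openOrEndOfInput(206, 0, length));      // full resource remains
        System.out.println(openOrEndOfInput(416, length, length)); // treated as EOF
    }
}
```

The design trade-off is that a 416 anywhere other than exactly `contentLength` still surfaces as an error, so genuine server or manifest problems are not silently swallowed.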
Any help on this would be greatly appreciated, since this bug is our top download error and users have a really bad experience of being stuck in a 416 state.