
ExoPlayer is throwing 416 error #1032

Closed
lemondoglol opened this issue Jan 26, 2024 · 12 comments

@lemondoglol

Question description:
Hi team, we are currently seeing users get download errors in our app, caused by ExoPlayer throwing a 416 error.
So far we cannot reproduce this issue in house, but we noticed in CloudFront that, when the 416 happens, the request contains
Request/Headers/Range=bytes=108421975-
which has no end position, and the start position 108421975 is one byte past the content's actual total length. We have multiple records with 416s and they all show this same pattern.

We checked the ExoPlayer source for SegmentDownloader.download(...) and suspect the extra segment is added during List<SegmentDownloader.Segment> segments = getSegments(dataSource, manifest, /* removing= */ false);, but that is just our guess. We also checked the manifest file and it looks fine (otherwise more users would be affected); please let us know if you need one of them and we can send it privately.

Could you take a look and let us know under what circumstances such a one-byte-extra segment would be added?

Also, another question: our current temporary fix is that, right after ExoPlayer merges the segments, we check whether the last segment is out of bounds and, if it is, drop it to avoid the 416 (see the sketch below). Do you see any potential issues with doing that?
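For reference, a minimal sketch of that workaround, assuming a fork or subclass of SegmentDownloader where the merged Segment list and its dataSpec fields are accessible, and assuming we already know the asset's total content length from our own backend (the helper name and the totalContentLength parameter are ours, not media3 API):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch only: drop any segment whose DataSpec starts at or beyond the known
// total content length, so the downloader never issues an out-of-range request.
static List<SegmentDownloader.Segment> dropOutOfBoundsSegments(
    List<SegmentDownloader.Segment> segments, long totalContentLength) {
  List<SegmentDownloader.Segment> kept = new ArrayList<>();
  for (SegmentDownloader.Segment segment : segments) {
    // dataSpec.position is the first byte the segment would request; anything
    // starting at or past the total length would trigger a 416 from CloudFront.
    if (segment.dataSpec.position < totalContentLength) {
      kept.add(segment);
    }
  }
  return kept;
}
```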

Thanks in advance!

@lemondoglol lemondoglol changed the title ExoPlayer is getting 416 error ExoPlayer is throwing 416 error Jan 26, 2024
@lemondoglol
Author

Additional info: we are downloading MP4 Widevine content.

@tonihei
Collaborator

tonihei commented Jan 29, 2024

we are downloading MP4 Widevine content

I assume this means that the main class involved is ProgressiveDownloader and not SegmentDownloader (which is used for DASH/HLS). But you also mentioned "We also checked the manifest file", so perhaps it is about DASH after all?

Internally these classes delegate to CacheWriter, which only downloads the blocks of data that are not already in the cache. The open-ended request indicates it tries to download the 'remaining' bytes from the last cached position. I wonder if there is some problem when trying to cache media that was already fully cached (but not marked as completed with its full content length), so that the CacheWriter attempts to load bytes starting one position past the content length.
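For illustration, a request of that open-ended shape corresponds to a DataSpec with a position but no length, which the HTTP data source turns into a Range: bytes=<position>- header. A minimal sketch (the URI is a placeholder):

```java
import android.net.Uri;
import androidx.media3.common.C;
import androidx.media3.datasource.DataSpec;

// An unbounded DataSpec: position set, length left as C.LENGTH_UNSET.
// The HTTP data source sends this as "Range: bytes=108421975-", i.e. "give me
// everything from this offset", which yields a 416 if the offset is past the end.
DataSpec openEndedSpec =
    new DataSpec.Builder()
        .setUri(Uri.parse("https://example.com/video/segment.mp4")) // placeholder URI
        .setPosition(108_421_975L)
        .setLength(C.LENGTH_UNSET)
        .build();
```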

Could you try caching/downloading data that was already downloaded and/or share a typical stream involved in these errors with us? You can send it to android-media-github@google.com and report back here once you've done that.

@lemondoglol
Author

Thanks for replying. I meant a DASH file, and I can confirm it is using SegmentDownloader.
Unfortunately, none of us can reproduce this 416 error in house; we have only seen it affecting some users in the CloudFront request logs.

We tried reading through getSegments(dataSource, manifest, /* removing= */ false) to look for a potential issue, but it is difficult since we can't reproduce the problem ourselves. Given that we can't reproduce it, and therefore have no logs, do you have any other suggestions?

@tonihei
Collaborator

tonihei commented Jan 30, 2024

When you provide a sample file that talks to the same CloudFront backend, we could try to reproduce that as well to test the hypothesis that something tries to download the same file again without knowing that it already has all bytes. Otherwise there is not much we can do without reproduction steps or further hints on the problem.

@ddiachkov

ddiachkov commented Jan 30, 2024

I had a similar problem. In my case, the root cause was a corrupted react-native-video-cache cache file. CloudFront does not strictly follow the spec :( HttpDataSource expects a 416 response to carry a Content-Range: bytes */<total length> header, but CloudFront doesn't send it, unfortunately.

I've "fixed" it by clearing the cache.
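For anyone debugging the same symptom, one way to confirm on the client whether the server's 416 carries the header HttpDataSource looks for is to inspect the exception's headers. A rough sketch (where you catch the error and how you log it is up to you):

```java
import androidx.media3.datasource.HttpDataSource;
import java.util.List;
import java.util.Map;

// Somewhere in your download/playback error handling:
void logInvalidRangeResponse(Throwable error) {
  if (error instanceof HttpDataSource.InvalidResponseCodeException) {
    HttpDataSource.InvalidResponseCodeException e =
        (HttpDataSource.InvalidResponseCodeException) error;
    if (e.responseCode == 416) {
      // Per RFC 7233 a 416 should include e.g. "Content-Range: bytes */108421975".
      // If this is null/absent, the server (CloudFront here) is not sending it.
      // Note: header map keys may differ in case depending on the HTTP stack.
      Map<String, List<String>> headers = e.headerFields;
      List<String> contentRange = headers.get("Content-Range");
      System.out.println("416 Content-Range header: " + contentRange);
    }
  }
}
```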

@lemondoglol
Author

When you provide a sample file that talks to the same CloudFront backend, we could try to reproduce that as well to test the hypothesis that something tries to download the same file again without knowing that it already has all bytes. Otherwise there is not much we can do without reproduction steps or further hints on the problem.

Hi tonihei, just to clarify, which sample file would you need so I can send it to you? Would that be the manifest file that is passed in here: getSegments(dataSource, manifest, /* removing= */ false)?

@tonihei
Collaborator

tonihei commented Jan 31, 2024

Yes, the manifest file would be ideal if you can share it because it contains all the necessary information.

@lemondoglol
Author

Yes, the manifest file would be ideal if you can share it because it contains all the necessary information.

Hi tonihei, I've just attached the manifest file and sent it to android-media-github@google.com.

@shanujshekhar

I had a similar problem. In my case, the root cause was a corrupted react-native-video-cache cache file. CloudFront does not strictly follow the spec :( HttpDataSource expects a 416 response to carry a Content-Range: bytes */<total length> header, but CloudFront doesn't send it, unfortunately.

I've "fixed" it by clearing the cache.

@ddiachkov I was taking a look at your comment and have a few questions:

  • What is the react-native-video-cache file and where is this stored?
  • What do you mean when you say this cache file was corrupted?
  • Can you please explain your root cause and the fix in more detail? It would really help us debug on our side, and any help would be greatly appreciated. Thank you!

@ddiachkov

ddiachkov commented Jan 31, 2024

@shanujshekhar

  1. react-native-video-cache is a library that we had been using to cache video files (I've since switched to custom CacheDataSource-based caching).
  2. One of the files in the cache was corrupted. It made ExoPlayer fetch extra bytes and hit CloudFront.
  3. Media3 (a.k.a. ExoPlayer) has special handling for 416 responses; however, it doesn't work with CloudFront specifically, because CloudFront doesn't return the Content-Range header.

TL;DR: with CloudFront you're not allowed to over-fetch data; the player will always throw.

As @tonihei suggested, the issue is probably somewhere in the manifest file (i.e., an invalid file size).

@tonihei
Collaborator

tonihei commented Feb 1, 2024

Hi tonihei, I've just attached the manifest file and sent it to android-media-github@google.com.

Thanks, though I'd need a live link to the .mpd to properly test the download; the manifest alone is not very helpful.

Given the other comments from @ddiachkov, I assume there might also be something wrong with the CloudFront backend in how it responds to HTTP requests?

@tonihei
Collaborator

tonihei commented Feb 1, 2024

In fact, I think @ddiachkov already pointed to the exact issue: Request/Headers/Range=bytes=108421975-, where the position 108421975 is one byte past the actual total length, is exactly the case that should have been caught by the 416 response with the Content-Range header.

When you see a caching failure like this one, the right course of action is probably to retry the caching from scratch. I'll close this issue under the assumption that this is a misbehaving server and that the problem can be resolved by retrying the download if needed.
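For anyone needing to retry from scratch with the media3 download framework, that roughly means removing the failed download and adding it again. A minimal sketch, where MyDownloadService, contentId and contentUri are placeholders for your own DownloadService subclass and identifiers:

```java
import android.content.Context;
import android.net.Uri;
import androidx.media3.exoplayer.offline.DownloadRequest;
import androidx.media3.exoplayer.offline.DownloadService;

// Remove the (possibly corrupt) cached download, then schedule it again.
void retryDownloadFromScratch(Context context, String contentId, Uri contentUri) {
  DownloadService.sendRemoveDownload(
      context, MyDownloadService.class, contentId, /* foreground= */ false);
  DownloadRequest request = new DownloadRequest.Builder(contentId, contentUri).build();
  DownloadService.sendAddDownload(
      context, MyDownloadService.class, request, /* foreground= */ false);
}
```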

@tonihei tonihei closed this as completed Feb 1, 2024
@androidx androidx locked and limited conversation to collaborators Apr 2, 2024