Support loading frame image from video #25

Closed
MarkMjw opened this issue Aug 14, 2019 · 11 comments · Fixed by #122
Labels: enhancement (New feature or request), help wanted (Issues that are up for grabs + are good candidates for community PRs)

Comments

MarkMjw commented Aug 14, 2019

Is your feature request related to a problem? Please describe.

    1. The album list contains both pictures and videos, but the load(file) method cannot load a video's cover frame.
    2. I need to show a sequence of video frames, and there is currently no good way to do this.

Describe the solution you'd like
Glide handles both image and video files well, and loading video frames is also very convenient, for example:

Glide.with(frameView).load(videoPath).apply(RequestOptions.frameOf(video.time * 1000).centerCrop()).into(frameView)
MarkMjw added the enhancement label on Aug 14, 2019
colinrtwhite (Member) commented Aug 14, 2019

This is a good candidate for a custom Decoder. However, we'll need to augment the Options class to pass in generic keys to the image pipeline (so we can support a video timestamp).

I probably wouldn't add this to the base artifact, but it might make sense as an extension library.
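
For illustration, a minimal sketch of what "generic keys" passed through the options could look like; all names here (Extras, VIDEO_FRAME_MICROS) are hypothetical and are not Coil APIs:

```kotlin
// Purely hypothetical sketch of the "generic keys" idea: a small key/value bag
// carried through the request options so a video Decoder can read a requested
// timestamp. Names are invented for illustration only.
class Extras(private val map: Map<String, Any?> = emptyMap()) {

    /** Look up a value by key, cast to the expected type (null if absent). */
    @Suppress("UNCHECKED_CAST")
    fun <T> get(key: String): T? = map[key] as T?

    /** Return a copy with one extra key/value pair. */
    fun plus(key: String, value: Any?): Extras = Extras(map + (key to value))
}

const val VIDEO_FRAME_MICROS = "video_frame_micros"

// A video Decoder could then read the timestamp, defaulting to the first frame:
fun requestedFrameMicros(extras: Extras): Long =
    extras.get<Long>(VIDEO_FRAME_MICROS) ?: 0L
```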

colinrtwhite (Member) commented Sep 19, 2019

Thinking about this more, this would be a good candidate for a coil-video artifact.

I think a custom Decoder that uses MediaMetadataRetriever to retrieve the frame data would work. We'll also need to write a custom MediaDataSource that's backed by an Okio BufferedSource.

Going to tag this as help wanted in case someone wants to take a stab at it. Otherwise, I can work on this once some of the other bugs are fixed.
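
For reference, a rough sketch of the frame-extraction core such a Decoder could wrap, assuming a plain file path; the open question in this thread is how to feed the retriever from an Okio BufferedSource:

```kotlin
import android.graphics.Bitmap
import android.media.MediaMetadataRetriever

// Extract a single frame from a video file at the given timestamp (microseconds).
fun decodeVideoFrame(filePath: String, frameTimeMicros: Long): Bitmap? {
    val retriever = MediaMetadataRetriever()
    return try {
        retriever.setDataSource(filePath)
        // OPTION_CLOSEST_SYNC grabs the nearest sync frame (cheap);
        // OPTION_CLOSEST decodes the exact frame (slower).
        retriever.getFrameAtTime(frameTimeMicros, MediaMetadataRetriever.OPTION_CLOSEST_SYNC)
    } finally {
        retriever.release()
    }
}
```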

colinrtwhite added the help wanted label on Sep 19, 2019
Hospes commented Dec 14, 2019

Hi there, any chance of having this in the near future?

colinrtwhite (Member) commented Dec 14, 2019

@Hospes Unfortunately custom frame positions aren't likely to be supported soon. I'd like to add support for this; however, MediaMetadataRetriever's APIs don't work well with Coil's design. That said, Coil can display the first frame of a video by default.

Here's why it's not easy to support custom frame positions currently:

  • Coil is designed to separate the source of the data from the decoding behaviour by using BufferedSource as an intermediate representation. This makes it easy to add support for custom image formats that work transparently across all sources (network, disk, file, uri, etc.).
  • MediaMetadataRetriever doesn't have an API that supports custom sources until API 23. This means we would have to create a temp file on disk to read the data, which is prohibitive for large video files.
  • The API 23+ MediaMetadataRetriever API that supports custom sources (MediaDataSource) requires the ability to read at random locations in the data source, i.e. it needs to be able to read the end of the source and then read the beginning afterwards (see the sketch at the end of this comment). This isn't possible with BufferedSource (or InputStream), which must be consumed sequentially as you read it. We could buffer a small amount of data into memory to read back later, but for this to work 100% of the time we'd have to buffer the whole video file into memory, which isn't possible for large video files.

I've got some partial work trying to implement a solution here: #122

If anyone has an idea of how to work around these limitations without changing the Decoder API to have explicit knowledge of the underlying File please let me know (or submit a PR)!
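
For illustration, the shape of the API 23+ hook mentioned in the bullets above: MediaMetadataRetriever.setDataSource(MediaDataSource) calls readAt() at arbitrary positions, so the backing storage must support random access. A minimal, hypothetical file-backed sketch (a file supports seeking; a forward-only BufferedSource does not):

```kotlin
import android.media.MediaDataSource
import java.io.RandomAccessFile

// A MediaDataSource backed by a file, which can satisfy reads at any position.
class FileMediaDataSource(path: String) : MediaDataSource() {
    private val file = RandomAccessFile(path, "r")

    override fun readAt(position: Long, buffer: ByteArray, offset: Int, size: Int): Int {
        if (position >= file.length()) return -1 // signal end of stream
        file.seek(position)                      // random access: the hard part for a BufferedSource
        return file.read(buffer, offset, size)
    }

    override fun getSize(): Long = file.length()

    override fun close() = file.close()
}

// Usage (API 23+):
// retriever.setDataSource(FileMediaDataSource("/path/to/video.mp4"))
```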

mario (Contributor) commented Jan 1, 2020

What about using NanoHttpd to serve the input stream over HTTP on APIs lower than 23? That allows seeking, AFAIK.

colinrtwhite (Member) commented:
@mario How does it allow seeking? On pre-23, MediaMetadataRetriever requires a File or a Uri and can't accept an InputStream.

mario (Contributor) commented Jan 2, 2020

@colinrtwhite you serve the input stream via a local HTTP server and access it through ranges as a regular Uri.

colinrtwhite (Member) commented:
@mario Would the code to serve the input stream go in the Fetcher or the Decoder? How would you access it through ranges in a Decoder? Sorry if I'm misunderstanding.

mario (Contributor) commented Jan 5, 2020

@colinrtwhite I am of the opinion that it should definitely go in the Decoder, as the Fetcher doesn't need knowledge of this. As for how I'd access it in ranges: MediaMetadataRetriever does this automatically by requesting ~65 KB chunks while iterating through the stream; it's the job of the HTTP server (powered by NanoHTTPD, or even better AndroidAsync) to read the Range header and respond with the appropriate chunk.
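
For illustration, a hedged sketch of the local-server idea using NanoHTTPD's Range handling. It is backed by a File, because answering an arbitrary Range still requires random access to the bytes, which is the catch raised in the next comment:

```kotlin
import fi.iki.elonen.NanoHTTPD
import fi.iki.elonen.NanoHTTPD.IHTTPSession
import fi.iki.elonen.NanoHTTPD.Response
import java.io.File
import java.io.FileInputStream

// A tiny local HTTP server that answers Range requests for one video file, so a
// retriever could read it via an http:// Uri on pre-23 devices.
class RangeVideoServer(private val file: File, port: Int) : NanoHTTPD(port) {

    override fun serve(session: IHTTPSession): Response {
        // NanoHTTPD lower-cases header names; a typical value is "bytes=0-65535".
        val (start, end) = parseRange(session.headers["range"], file.length())

        val input = FileInputStream(file).apply { skip(start) }
        val response = NanoHTTPD.newFixedLengthResponse(
            Response.Status.PARTIAL_CONTENT, "video/mp4", input, end - start + 1
        )
        response.addHeader("Accept-Ranges", "bytes")
        response.addHeader("Content-Range", "bytes $start-$end/${file.length()}")
        return response
    }

    // Parse "bytes=from-to", falling back to the whole file when absent.
    private fun parseRange(header: String?, total: Long): Pair<Long, Long> {
        if (header == null || !header.startsWith("bytes=")) return 0L to total - 1
        val (from, to) = header.removePrefix("bytes=").split("-")
        return (from.toLongOrNull() ?: 0L) to (to.toLongOrNull() ?: (total - 1))
    }
}

// Usage: RangeVideoServer(videoFile, 0).start(), then hand
// "http://127.0.0.1:$listeningPort/" to MediaMetadataRetriever as a Uri/path.
```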

colinrtwhite (Member) commented:
MediaMetadataRetriever doesn't iterate through the source in increasing order, though. It will jump to the end of the video and then back to the beginning while reading. For AndroidAsync or a similar server to respond, it would have to buffer the input BufferedSource into memory, which we want to avoid.

colinrtwhite (Member) commented Feb 7, 2020

This is now supported in the latest snapshot. See the documentation.
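
For reference, roughly what usage looks like with the coil-video artifact; the registration and extension names below (VideoFrameDecoder, videoFrameMillis) match later stable Coil releases and may differ slightly from the snapshot referenced above:

```kotlin
import android.content.Context
import android.widget.ImageView
import coil.ImageLoader
import coil.decode.VideoFrameDecoder
import coil.load
import coil.request.videoFrameMillis

fun showVideoFrame(context: Context, imageView: ImageView, videoPath: String) {
    // Register the video frame decoder with the image loader.
    val imageLoader = ImageLoader.Builder(context)
        .components { add(VideoFrameDecoder.Factory()) }
        .build()

    // Load the frame at the 1-second mark into the view.
    imageView.load(videoPath, imageLoader) {
        videoFrameMillis(1000)
    }
}
```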
