
Feature Idea: Content preview #721

Open
bedeho opened this issue Jun 7, 2021 · 3 comments

bedeho (Member) commented Jun 7, 2021

It's useful to allow people to quickly sample the content of videos by seeing image snapshots from different time slices. This is used in at least two ways on YouTube:

  1. when hovering over a video in a listing, you see a short loop of images
  2. when playing a video, hovering over the progress bar shows stills from those points

Implementation

For this to be possible for us, these images would have to be generated client side by doing some sort of basic frame extraction, then scaling down and compressing a collection of images that are put on the storage infrastructure as one blob. The images used for 1) and 2) above should probably be kept in separate data objects, so that apps can independently fetch the subset they need.
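As a rough illustration of the extraction step only, here is a minimal browser-side sketch using an HTMLVideoElement and a canvas. Everything in it is an assumption for the proof-of-concept: the function name, the thumbnail width and the WebP quality setting are placeholders, not part of any existing codebase.

```ts
// Sketch: extract downscaled, compressed stills from a local video file in the browser.
// Hypothetical helper, not an existing API.
async function extractStills(
  file: File,
  timestamps: number[], // seconds into the video
  width = 160 // thumbnail width; height follows the aspect ratio
): Promise<Blob[]> {
  const video = document.createElement('video')
  video.src = URL.createObjectURL(file)
  video.muted = true

  // Wait for dimensions/duration to become available.
  await new Promise<void>((resolve, reject) => {
    video.onloadedmetadata = () => resolve()
    video.onerror = () => reject(new Error('failed to load video'))
  })

  const height = Math.round((width / video.videoWidth) * video.videoHeight)
  const canvas = document.createElement('canvas')
  canvas.width = width
  canvas.height = height
  const ctx = canvas.getContext('2d')!

  const stills: Blob[] = []
  for (const t of timestamps) {
    // Seek, then draw the current frame scaled down onto the canvas.
    await new Promise<void>((resolve) => {
      video.onseeked = () => resolve()
      video.currentTime = t
    })
    ctx.drawImage(video, 0, 0, width, height)
    const blob = await new Promise<Blob>((resolve) =>
      canvas.toBlob((b) => resolve(b!), 'image/webp', 0.7)
    )
    stills.push(blob)
  }

  URL.revokeObjectURL(video.src)
  return stills
}
```

The resulting blobs would then still need to be packed into the single data object(s) mentioned above before upload; that packaging format is exactly the kind of thing the proof-of-concept should pin down.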

Before doing any design work here, I think we need to do a proof-of-concept demo which shows how feasible it is to extract, scale and compress however many images we would need for a max quality & duration video. Once we know this, we will have good information for how to do the designs on the consumer and studio side.

kdembler (Member) commented Jun 7, 2021

Sounds really cool, but do we really want to do all things like this client-side in the long term? I mean extraction of images, possibly transcoding and other things. I think this can get complicated quite quickly. Do we have any long-term plans to support different resolutions per video so the experience can be optimized? I would expect us to start introducing features like that on the storage provider side, some kind of processing of assets; otherwise we can end up with a lot of unnecessary uploads from the client.

bedeho (Member, Author) commented Jun 8, 2021

  • I am not sure client side transcoding could really work; I would need to see a proof-of-concept to believe it could be practical.
  • Image extraction seems very feasible processing-wise. Yes, it would be non-trivial to integrate into the code base, but also not very bad, because it is quite independent of everything else. I don't even think it would imply any UI redesign of Studio at all; the processing time could just be baked into the current hashing progress indicator.
  • I think the near-term way to handle multiple resolutions would just be to support uploading multiple assets, and then requiring the user to have done that in advance. With that, we could at least take advantage of this feature on the consumer side, and we could automatically deal with it in the YT-synch feature we are planning.
  • Fully server side transcoding seems out of scope as is. Likewise, things like subtitles would also need to be provided by the user; we can't really aim to do that automatically either. So I think we will have to live with a thicker client side for mainnet.

dmtrjsg commented Jul 1, 2022

We need to articulate the exact DoD (definition of done) for a spike on this issue.
