Feature Idea: Content preview #721
Comments
Sounds really cool, but do we really want to do things like this client-side in the long term? I mean extraction of images, possibly transcoding and other things; I think this can get complicated quite quickly. Do we have any long-term plans to support different resolutions per video so the experience can be optimized? I would expect us to start introducing features like that on the storage provider side, with some kind of asset processing, otherwise we can end up with a lot of unnecessary uploads from the client.

We need to articulate the exact DoD (definition of done) for a spike on this issue.
It's useful to allow people to quickly sample the content of videos by seeing image snapshots from different time slices. This is used in at least two ways on YouTube.
Implementation
For this to be possible for us, these images would have to be generated client-side by doing some basic frame extraction, then scaling down and compressing the resulting collection of images, which is put on the storage infrastructure as one blob. The images used for 1) and 2) above should probably go in separate data objects, so that apps can independently fetch the subset they need.
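A minimal sketch of what this could look like in the browser, assuming the frames are extracted from a local File before upload via the standard video/canvas APIs. The names `extractPreviewFrames` and `packFramesIntoBlob`, and the header/manifest layout of the packed blob, are hypothetical illustrations, not existing project APIs or an agreed format.

```ts
type PreviewFrame = { timestamp: number; blob: Blob }

// Hypothetical helper: extract `frameCount` evenly spaced frames from a video
// file, scale them down to `targetWidth`, and compress them as JPEG.
async function extractPreviewFrames(
  file: File,
  frameCount: number,
  targetWidth: number,
  jpegQuality = 0.7
): Promise<PreviewFrame[]> {
  const video = document.createElement('video')
  video.src = URL.createObjectURL(file)
  video.muted = true

  // Wait for metadata so duration and intrinsic dimensions are known.
  await new Promise<void>((resolve, reject) => {
    video.onloadedmetadata = () => resolve()
    video.onerror = () => reject(new Error('Failed to load video metadata'))
  })

  const scale = targetWidth / video.videoWidth
  const canvas = document.createElement('canvas')
  canvas.width = targetWidth
  canvas.height = Math.round(video.videoHeight * scale)
  const ctx = canvas.getContext('2d')
  if (!ctx) throw new Error('2D canvas context unavailable')

  const frames: PreviewFrame[] = []
  for (let i = 0; i < frameCount; i++) {
    // Sample evenly spaced timestamps across the whole duration.
    const timestamp = (video.duration * (i + 0.5)) / frameCount
    video.currentTime = timestamp
    await new Promise<void>((resolve) => {
      video.onseeked = () => resolve()
    })

    // Draw the current frame scaled down, then compress it to JPEG.
    ctx.drawImage(video, 0, 0, canvas.width, canvas.height)
    const blob = await new Promise<Blob | null>((resolve) =>
      canvas.toBlob(resolve, 'image/jpeg', jpegQuality)
    )
    if (blob) frames.push({ timestamp, blob })
  }

  URL.revokeObjectURL(video.src)
  return frames
}

// Hypothetical packing: concatenate all frames into a single blob, prefixed by
// a small JSON manifest of offsets, so the whole set can be stored and fetched
// as one data object.
function packFramesIntoBlob(frames: PreviewFrame[]): Blob {
  let offset = 0
  const manifest = frames.map((f) => {
    const entry = { timestamp: f.timestamp, offset, length: f.blob.size }
    offset += f.blob.size
    return entry
  })
  const manifestBytes = new TextEncoder().encode(JSON.stringify(manifest))
  // 4-byte little-endian manifest length so readers can split the header off.
  const header = new Uint8Array(4)
  new DataView(header.buffer).setUint32(0, manifestBytes.length, true)
  return new Blob([header, manifestBytes, ...frames.map((f) => f.blob)])
}
```

Splitting the frames for 1) and 2) above into separate calls to `packFramesIntoBlob`, and thus separate data objects, would let apps fetch only the subset they need.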
Before doing any design work here, I think we need to do a proof-of-concept demo which shows how feasible it is to extract, scale, and compress however many images we would need for a maximum-quality, maximum-duration video. Once we know this, we will have good information for how to do the designs on the consumer and studio side.
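A rough sketch of what such a proof of concept could measure, assuming the hypothetical `extractPreviewFrames` helper from the previous snippet: total extraction time and total output size are the two numbers needed to judge feasibility for a max quality & duration video.

```ts
// Illustrative only: run extraction on a chosen file and report how long it
// took and how many bytes the compressed frames add up to.
async function runExtractionPoc(file: File, frameCount = 100, targetWidth = 320) {
  const start = performance.now()
  const frames = await extractPreviewFrames(file, frameCount, targetWidth)
  const elapsedMs = performance.now() - start
  const totalBytes = frames.reduce((sum, f) => sum + f.blob.size, 0)
  console.log(
    `Extracted ${frames.length} frames in ${(elapsedMs / 1000).toFixed(1)}s, ` +
      `total size ${(totalBytes / 1024).toFixed(0)} KiB`
  )
}
```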