
Adding Video on Demand #4

Closed
wizage opened this issue Mar 8, 2019 · 7 comments
@wizage (Contributor) commented Mar 8, 2019

Is your feature request related to a problem? Please describe.
Adding Video on Demand support. This will be a second section of the video plugin to add video on demand say for example, movies or recordings.

Additional context
This will be a separate section

@wizage wizage changed the title Adding Video of Demand Adding Video on Demand Mar 8, 2019
@wizage wizage pinned this issue Mar 8, 2019
@smp (Contributor) commented Apr 11, 2019

@wizage - You mentioned three users you talked to; can you add any specific feedback you've gotten from them on this? I'm curious about...

  • dynamic encoding profile selection, i.e. if a user uploads an SD file, you wouldn't want to encode/upscale it to HD. You would want the encoding optimized based on the input's properties.

  • dynamic workflow configuration, i.e. maybe I want to add a custom pre-process or post-process step to the file transcoding process. Custom filters, metadata twiddling, QC tools, image extractors, and other things that MediaConvert doesn't have natively.

  • API design: what API experience would developers want? We could do something super basic where we return playback URLs for encoded content as they become available, and/or we could have a subscription feed of content, and/or we could provide playback URLs to content that hasn't even been encoded yet and dynamically process the content based on the requesting client. Commonly referred to as just-in-time encoding (JITE), this would be complex but innovative, and has been implemented by services like Mux because it's a great experience for users and it lowers backend costs.

For reference, we have an existing VOD solution that could serve as a good starting point architecturally. It doesn't do JITE, but a similar design could allow for more complex workflows: https://aws.amazon.com/solutions/video-on-demand-on-aws/

@davekiss commented Apr 12, 2019

As a fullstack dev with a video production background pursuing OTT/VOD, I'm very interested in this feature set. I've already gone through the VOD workshop (S3->MediaConvert->CloudWatch->Lambda->DynamoDB) and have everything set up and working as expected. Here's some input from my perspective:

dynamic encoding profile selection i.e. if a user uploads a SD file, you wouldn't want to encode/upscale it to HD. You would want the encoding optimized based on input info.

This would be a huge improvement. I'd love to see some layer implemented within the Lambda that watches S3 to dynamically propose job settings without any input on my end, which would cover 95% of OTT customers/use cases. To me, that'd be a QVBR job with DASH, HLS, and potentially MP4 outputs, with resolutions determined based on source video metadata and/or preferred delivery devices, i.e. const outputs = ['phone', 'tablet', 'laptop', 'desktop', 'tv']
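A minimal sketch of what that dynamic proposal could look like, assuming source resolution is the only input considered. `Rendition`, `proposeRenditions`, and the ladder values are all illustrative, not part of any shipped API:

```typescript
interface Rendition {
  name: string;
  width: number;
  height: number;
  maxBitrateKbps: number;
}

// A common ladder, ordered high to low. Values are illustrative.
const LADDER: Rendition[] = [
  { name: "tv",      width: 3840, height: 2160, maxBitrateKbps: 16000 },
  { name: "desktop", width: 1920, height: 1080, maxBitrateKbps: 6000 },
  { name: "laptop",  width: 1280, height: 720,  maxBitrateKbps: 3500 },
  { name: "tablet",  width: 960,  height: 540,  maxBitrateKbps: 2000 },
  { name: "phone",   width: 640,  height: 360,  maxBitrateKbps: 1000 },
];

function proposeRenditions(sourceHeight: number): Rendition[] {
  // Keep only renditions at or below the source resolution (no upscaling),
  // but always emit at least the lowest rung so tiny sources still encode.
  const fit = LADDER.filter((r) => r.height <= sourceHeight);
  return fit.length > 0 ? fit : [LADDER[LADDER.length - 1]];
}
```

A 720p upload would then get 720p/540p/360p outputs and never an upscaled 1080p or 4K rendition.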

dynamic workflow configuration i.e. maybe i want to add a custom pre-process or post-process step to the file transcoding process. Custom filters, metadata twiddling, qc tools, image extractors, other stuff that mediaconvert doesn't have natively.

This would introduce welcomed flexibility, but is not the primary use case for OTT in general. Typically, my end users wouldn't want any modification to their source files as all of those decisions have been made prior to uploading the video. Some of these choices might be cool to apply within a client-side library after the initial transcoding, like previewing/applying filters after the initial transcoding.

API Design What API experience would developers want? We could do something super basic where we return playback urls for encoded content as they become available

Right now I have a Lambda set up that updates the related DynamoDB record with playback URLs when a job completes. Not sure if that's the approach you had in mind, but it seems to work well since the transcoding jobs can take a while.
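The completion-handler approach described above could be sketched roughly like this, assuming a CloudWatch Events rule on MediaConvert's COMPLETE status. The event shape is a simplified version of the MediaConvert job-state-change event, and the `assetId` key in `userMetadata` is an assumed convention set at job creation, not a built-in field:

```typescript
interface JobCompleteEvent {
  detail: {
    userMetadata: { assetId: string };  // assumed to be set when the job was created
    outputGroupDetails: { playlistFilePaths?: string[] }[];
  };
}

// Collect every playlist path (HLS/DASH manifests) across all output groups.
function extractPlaybackPaths(event: JobCompleteEvent): string[] {
  const paths: string[] = [];
  for (const group of event.detail.outputGroupDetails) {
    for (const p of group.playlistFilePaths ?? []) paths.push(p);
  }
  return paths;
}

export const handler = async (event: JobCompleteEvent) => {
  const playbackUrls = extractPlaybackPaths(event);
  // Persisting would happen here, e.g. with @aws-sdk/client-dynamodb:
  //   UpdateItem on the row keyed by event.detail.userMetadata.assetId,
  //   SET playbackUrls = :playbackUrls
  return { assetId: event.detail.userMetadata.assetId, playbackUrls };
};
```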

we could have a subscription feed of content

@smp I'd love to hear more about what you have in mind here.

we could provide playback URLs to content that hasn't even been encoded yet and dynamically process the content based on requesting client. Commonly referred to as just-in-time-encoding (JITE), this would be complex, but innovative, and has been implemented by services like mux because it's a great experience for users and it lowers backend costs.

Complex, yes. Awesome, yes.

Most of the pain points in my experience are related to determining job settings based on the source media. Mux promotes Per-Title Encoding, which analyzes incoming videos using deep learning and determines, in seconds, the right video encoding settings. This might be moot when considering QVBR outputs, but it'd likely be worth getting input from video encoding pros re: the dev approach on this.

Some work towards thumbnail generation presets (preview strips, GIFs, overlay play icon etc.) might also be worth discussing.

Looking forward to your efforts!

@wizage (Contributor, Author) commented Apr 29, 2019

Design docs:

Ingestion

Ingestion should be done from S3. S3 ingestion would require the user to upload the raw, untranscoded files to S3, which triggers a Lambda function to create a new MediaConvert job. The Lambda function will be triggered only by a successful PUT on S3. A DELETE will do the opposite and remove all content derived from the raw film (this could be made configurable).
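A rough sketch of that trigger, assuming the Lambda is subscribed to s3:ObjectCreated:Put on the raw-input bucket. The MediaConvert job submission itself is indicated in comments only; `JOB_ROLE_ARN` is a placeholder:

```typescript
interface S3Record {
  s3: { bucket: { name: string }; object: { key: string } };
}

// Build the s3:// URI MediaConvert expects. Keys arrive URL-encoded in
// S3 event notifications, with spaces encoded as "+".
function inputUriFor(record: S3Record): string {
  const key = decodeURIComponent(record.s3.object.key.replace(/\+/g, " "));
  return `s3://${record.s3.bucket.name}/${key}`;
}

export const handler = async (event: { Records: S3Record[] }) => {
  for (const record of event.Records) {
    const fileInput = inputUriFor(record);
    // Submission would happen here, e.g. with @aws-sdk/client-mediaconvert:
    //   new CreateJobCommand({
    //     Role: JOB_ROLE_ARN,
    //     Settings: { Inputs: [{ FileInput: fileInput }], /* output groups */ },
    //   })
  }
};
```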

MediaConvert Config

MediaConvert will be configured based on the input file provided and the specs provided in the CLI. This means that if the input file is only 720p, it will transcode from 720p down. You can specify your minimum and maximum transcode targets or use defaults like [Mobile, Web, 4K, etc.].
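The min/max selection could be sketched as a simple clamp over an ordered preset list, additionally capped by the source resolution so nothing is upscaled. `PRESETS` and `clampPresets` are illustrative names, not the CLI's actual options:

```typescript
// Ordered low to high; names are illustrative defaults.
const PRESETS = ["Mobile", "Web", "HD", "4K"] as const;
type Preset = (typeof PRESETS)[number];

// Returns the presets between the user's min and max, never exceeding
// the highest preset the source file itself can support.
function clampPresets(min: Preset, max: Preset, sourceMax: Preset): Preset[] {
  const lo = PRESETS.indexOf(min);
  const hi = Math.min(PRESETS.indexOf(max), PRESETS.indexOf(sourceMax));
  return lo <= hi ? PRESETS.slice(lo, hi + 1) : [];
}
```

So a 720p source (call it "HD") with defaults of Mobile through 4K would yield Mobile, Web, and HD outputs only.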

Storage of meta

Metadata should be stored in a DynamoDB table (more design docs coming soon on how the table will look). The table will contain the raw file location, the transcoded file locations, the available packagings of the file (e.g. DASH, HLS), and any other info provided by the user. (We might need a way for users to write additional info to this table.)
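As a sketch of what a table item might hold; attribute names here are assumptions, since the real schema is still being designed:

```typescript
// Hypothetical item shape for the metadata table described above.
interface VideoAsset {
  id: string;                            // partition key
  rawLocation: string;                   // s3:// URI of the untranscoded upload
  transcodedLocations: string[];         // s3:// URIs of rendition outputs
  formats: ("HLS" | "DASH")[];           // available packagings
  userMetadata?: Record<string, string>; // title, actors, etc.
}

const example: VideoAsset = {
  id: "film-001",
  rawLocation: "s3://raw-bucket/film-001.mov",
  transcodedLocations: ["s3://out-bucket/film-001/hls/index.m3u8"],
  formats: ["HLS"],
  userMetadata: { title: "Example Film" },
};
```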

Providing data back

Access to the content will need either a secure or a non-secure option, meaning either signed URLs for S3 or CloudFront. Returning the URLs and accessing other metadata (title, actors, etc.) will be done through the API layer. The API layer is still open to utilizing either AppSync or API Gateway.
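For the secure option, a CloudFront signed URL could be produced roughly like this. The signing call from @aws-sdk/cloudfront-signer is shown in comments with placeholder names (DISTRIBUTION_DOMAIN, KEY_PAIR_ID, PRIVATE_KEY_PEM); the expiry helper below is plain logic:

```typescript
// Compute the ISO-8601 expiry timestamp a signed URL should carry.
function expiryIso(ttlSeconds: number, now: Date = new Date()): string {
  return new Date(now.getTime() + ttlSeconds * 1000).toISOString();
}

// The signing itself (sketch only, placeholder names):
// import { getSignedUrl } from "@aws-sdk/cloudfront-signer";
// const url = getSignedUrl({
//   url: `https://${DISTRIBUTION_DOMAIN}/${manifestPath}`,
//   keyPairId: KEY_PAIR_ID,
//   privateKey: PRIVATE_KEY_PEM,
//   dateLessThan: expiryIso(3600),  // valid for one hour
// });
```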

API Gateway (Rest)

API Gateway will be designed with simple access patterns including GetItemMeta, GetList, and GetFilm. Extra fields would have to be added manually and would require configuration outside of Amplify, which is a big downside of API Gateway for this model. A big upside is that everyone knows how REST works.

AppSync (GraphQL)

AppSync will be designed with direct access to the metadata DynamoDB table. This allows access straight to the data, so you can change what your frontend needs and add more fields without extra configuration outside of Amplify. It can also take advantage of Amplify's codegen. The downside is that we need to create a Lambda resolver for the signed-URL portion to return the right info, and GraphQL is not as widely known. The upside is that users can quickly add new metadata to the table as they see fit.
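A sketch of what such a schema might look like, using Amplify's @model and @function directives. The type and field names, and the resolver function name, are assumptions rather than the shipped schema:

```graphql
# Illustrative only; not the final Amplify Video schema.
type VideoAsset @model {
  id: ID!
  title: String
  formats: [String]
  # Resolved by a Lambda that returns a signed playback URL
  playbackUrl: AWSURL @function(name: "signedUrlResolver")
}
```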

Attached is the first draft of the proposed architecture.
[attached image: IMG_3072]

@smp (Contributor) commented Jul 29, 2019

Meeting 7/29

  • Creating GraphQL "Video Transformer" instead of a GraphQL model
  • @wizage working on cfn implementation
  • @axptwig working on encoding lambda function
  • @smp updating diagram doc (sync with Sam at tech summit)
@ajonp commented May 3, 2020

I was hoping to dive in and help out, but I might be a little behind on where the project sits. I ran through the amplify video add for VOD. I uploaded to the input S3, it converts and then drops off to the output S3. So is there still work to be done on the upload of metadata back to Dynamo?

@smp (Contributor) commented May 5, 2020

@ajonp - if you add the API to your amplify video resource it will deploy an appsync API that you can use for uploads and content access URLs. This database does not store data for the content processing pipeline and isn't required for Amplify Video VOD to function (though this might change down the line).

We are working on documentation in the wiki now (feel free to add anything you see fit), but if you want to see it in action, the best place to look is within our companion workshop UnicornFlix that uses Amplify Video VOD in context of an application.

We welcome any feedback you have on the implementation, so please let us know what you think is missing.

@wizage (Contributor, Author) commented Jun 30, 2020

Video on Demand is officially marked good for Release. Closing this issue.

Diagrams can be found on the wiki of the final implementations:
https://github.com/awslabs/amplify-video/wiki/VOD-Concepts

API is in beta as we wait for the Amplify CLI to add headless support for API (GraphQL)

Docs are still being written and improved but core docs can be found on our wiki as we continue to improve this!

@wizage wizage closed this Jun 30, 2020
Project planning board automation moved this from In progress to Done Jun 30, 2020
@wizage wizage unpinned this issue Jun 30, 2020