Adding Video on Demand #4
Comments
---
@wizage - You mentioned 3 users that you talked to. Can you add any specific feedback you've gotten from them on this? I'm curious about...
For reference, we have an existing VOD solution that could serve as a good starting point architecturally. It doesn't do JITE, but a similar design could allow for more complex workflows: https://aws.amazon.com/solutions/video-on-demand-on-aws/
---
As a fullstack dev with a video production background pursuing OTT/VOD, I'm very interested in this feature set. I've already gone through the VOD workshop.
This would be a huge improvement. I'd love to see a layer implemented within the Lambda that watches S3 and dynamically proposes job settings, without any input on my end, that would cover 95% of OTT customers/use cases. To me, that'd be a qVBR job with DASH, HLS, and potentially MP4 outputs, with resolutions determined from source video metadata and/or preferred delivery devices, i.e. ...
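To make the "propose job settings from source metadata" idea concrete, here is a minimal sketch. Everything here is hypothetical (the function names, the ladder values, and the default bitrates are illustrative, not part of Amplify Video); the only non-invented pieces are the MediaConvert `H264Settings` fields, which do support `RateControlMode: "QVBR"` with a `MaxBitrate`.

```javascript
// Hypothetical helper: given the source height, pick an ABR ladder that never
// upscales, matching the "only transcode from the source resolution down" idea.
const DEFAULT_LADDER = [
  { name: "2160p", height: 2160, maxBitrateKbps: 15000 },
  { name: "1080p", height: 1080, maxBitrateKbps: 6000 },
  { name: "720p",  height: 720,  maxBitrateKbps: 3500 },
  { name: "480p",  height: 480,  maxBitrateKbps: 1500 },
  { name: "360p",  height: 360,  maxBitrateKbps: 800 },
];

function proposeRenditions(sourceHeight, ladder = DEFAULT_LADDER) {
  // Keep only renditions at or below the source resolution (no upscaling).
  const renditions = ladder.filter((r) => r.height <= sourceHeight);
  // Always return at least the lowest rung so very small sources still get output.
  return renditions.length ? renditions : [ladder[ladder.length - 1]];
}

// Each rendition could then expand into a qVBR output shared by HLS/DASH groups.
function toQvbrOutput(rendition) {
  return {
    VideoDescription: {
      Height: rendition.height,
      CodecSettings: {
        Codec: "H_264",
        H264Settings: {
          RateControlMode: "QVBR",
          MaxBitrate: rendition.maxBitrateKbps * 1000, // MediaConvert wants bits/s
        },
      },
    },
  };
}
```

A 720p source would then yield only the 720p/480p/360p rungs; the per-title tuning mentioned later in the thread would replace the static ladder with per-asset analysis.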
This would introduce welcome flexibility, but it is not the primary use case for OTT in general. Typically, my end users wouldn't want any modification to their source files, as all of those decisions have been made prior to uploading the video. Some of these choices might be cool to apply within a client-side library, like previewing/applying filters after the initial transcoding.
Right now I have a Lambda set up that updates the related DynamoDB record with playback URLs when a job completes. Not sure if that's the approach you had in mind, but it seems to work well since transcoding jobs can take a while.
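For anyone wiring up a similar "update DynamoDB on job complete" Lambda, a sketch of the core extraction step is below. It assumes the MediaConvert COMPLETE CloudWatch/EventBridge event shape (`detail.status`, `detail.outputGroupDetails[].playlistFilePaths`); the `assetId` in `userMetadata`, the CloudFront domain, and the table name are all placeholders you would substitute.

```javascript
// Pull playback URLs out of a MediaConvert COMPLETE event.
// Returns null for non-COMPLETE events so the handler can ignore them.
function extractPlaybackUrls(event) {
  const detail = event.detail || {};
  if (detail.status !== "COMPLETE") return null;
  const urls = (detail.outputGroupDetails || [])
    .flatMap((g) => g.playlistFilePaths || [])
    // s3://bucket/key -> https URL; the CloudFront domain here is a placeholder.
    .map((p) => p.replace(/^s3:\/\/[^/]+\//, "https://dxxxx.cloudfront.net/"));
  return { assetId: detail.userMetadata && detail.userMetadata.assetId, urls };
}

// The handler would then persist the URLs, e.g. (not executed here):
//
// const result = extractPlaybackUrls(event);
// await docClient.update({
//   TableName: "vodAssets",                      // hypothetical table name
//   Key: { id: result.assetId },
//   UpdateExpression: "SET playbackUrls = :u",
//   ExpressionAttributeValues: { ":u": result.urls },
// }).promise();
```

The `userMetadata` trick (set when creating the MediaConvert job, echoed back in the event) is what lets the completion handler find the right DynamoDB record without a lookup by output path.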
@smp I'd love to hear more about what you have in mind here.
Complex, yes. Awesome, yes. Most of the pain points in my experience are related to determining job settings based on source media. Mux promotes Per-Title Encoding, which analyzes incoming videos using deep learning and determines, in seconds, the right video encoding settings. This might be moot when considering qVBR outputs, but it'd likely be worth getting input from video encoding pros re: the dev approach on this. Some work towards thumbnail generation presets (preview strips, GIFs, overlay play icon, etc.) might also be worth discussing. Looking forward to your efforts!
---
Design docs:

**Ingestion**
Ingestion should be done from S3. The S3 ingestion flow requires the user to upload the RAW, untranscoded files to S3, which triggers a Lambda function that creates a new MediaConvert job. The Lambda function will only be triggered off of a successful Put on S3. A Delete will do the opposite and remove all content related to the RAW film (this could be made configurable).

**MediaConvert config**
MediaConvert will be configured based on the input file provided and the specs provided in the CLI. This means that if the input file is only 720p, it will transcode from 720p down. You can specify your min and max transcode targets or use defaults like [Mobile, Web, 4K, etc.].

**Storage of metadata**
Metadata should be stored in a DynamoDB table (more design docs coming soon on how the table will look). The table will contain the raw file location, the transcoded file locations, the available formats of the file (e.g. DASH, HLS), and any other info provided by the user. (We might need a way to write user-provided info to this table.)

**Providing data back**
Accessing the content will need either a secure or a non-secure option, meaning signed URLs for either S3 or CloudFront. Returning the URLs and accessing other metadata (title, actors, etc.) will be done through the API layer. The API layer is still open to utilizing either AppSync or API Gateway.

**API Gateway (REST)**
API Gateway will be designed with simple access patterns, including: GetItemMeta, GetList, GetFilm. Extra fields would have to be added manually and would require extra configuration outside of Amplify, which is a huge downside of API Gateway for this model. A big upside is that everyone knows how REST works.

**AppSync (GraphQL)**
AppSync will be designed with direct access to the metadata DynamoDB table. This allows access straight to the data, so you can change what your frontend needs and add more fields without extra configuration outside of Amplify. This can also take advantage of Amplify's codegen. The downside is that we need to create a Lambda resolver for the signed-URL portion to return the right info, and GraphQL is not as widely known as REST. The upside is that users can quickly add new metadata fields as they see fit.
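As an illustration of the AppSync option, the metadata table could be modeled with a schema along these lines. This is a sketch only; the type and field names are assumptions based on the fields listed above (raw file location, transcoded locations, available formats), not the shipped schema.

```graphql
# Illustrative only -- field and type names are assumptions, not the real schema.
type VodAsset @model {
  id: ID!
  title: String
  rawFileLocation: String        # S3 key of the RAW upload
  transcodedLocations: [String]  # output locations per rendition
  availableFormats: [String]     # e.g. ["HLS", "DASH"]
  actors: [String]
}
```

The upside called out above falls out of this directly: adding a new field to the `@model` type and running `amplify push` regenerates the API with no manual resolver work, except for the signed-URL field, which would still need a Lambda resolver.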
---
I was hoping to dive in and help out, but I might be a little behind on where the project sits. I ran through the ...
---
@ajonp - if you add the API to your amplify video resource, it will deploy an AppSync API that you can use for uploads and content access URLs. This database does not store data for the content processing pipeline and isn't required for Amplify Video VOD to function (though this might change down the line). We are working on documentation in the wiki now (feel free to add anything you see fit), but if you want to see it in action, the best place to look is our companion workshop, UnicornFlix, which uses Amplify Video VOD in the context of an application. We welcome any feedback you have on the implementation, so please let us know what you think is missing or ...
---
Video on Demand is officially marked good for release. Closing this issue. Diagrams of the final implementation can be found on the wiki. The API is in beta while we wait for the Amplify CLI to add headless support for API (GraphQL). Core docs are on our wiki and are still being written and improved!

Is your feature request related to a problem? Please describe.
Adding Video on Demand support. This will be a second section of the video plugin to add video on demand, for example movies or recordings.
Additional context
This will be a separate section.