5 changes: 5 additions & 0 deletions README.md
@@ -26,6 +26,11 @@ In this example, you use AWS Step Functions to orchestrate restoration of S3 objects

Blog Post: [Orchestrating S3 Glacier Deep Archive object retrieval using Step Functions](Blog Link Here)

### Video Segment Detection and Edition using AWS Step Functions
This workflow shows how to leverage AWS Step Functions to perform typical video editing tasks. Specifically, the example uses a video that begins with [SMPTE color bars](https://en.wikipedia.org/wiki/SMPTE_color_bars) of random duration. The workflow gets a demo video from S3, runs it through Amazon Rekognition to detect segments, and then uses AWS Elemental MediaConvert to remove the initial segment (the SMPTE color bars). You can find details in the example's [README](./sam/app-video-segment-detection-and-edition/README.md) file.

Blog Post: [Low code workflows with AWS Elemental MediaConvert](https://aws.amazon.com/blogs/media/low-code-workflows-with-aws-elemental-mediaconvert/)

## Demos of Step Functions capabilities

### Demo Step Functions Local testing with Mock service integrations using Java testing frameworks (JUnit and Spock)
102 changes: 102 additions & 0 deletions sam/app-video-segment-detection-and-edition/README.md
@@ -0,0 +1,102 @@
# Video Segment Detection and Edition

This workflow automates the video editing process by using Amazon Rekognition to detect segments and AWS Elemental MediaConvert to remove them.

Learn more about this workflow at Step Functions workflows collection: << Add the live URL here >>

Important: this application uses various AWS services and there are costs associated with these services after the Free Tier usage - please see the [AWS Pricing page](https://aws.amazon.com/pricing/) for details. You are responsible for any AWS costs incurred. No warranty is implied in this example.

## Requirements

* [Create an AWS account](https://portal.aws.amazon.com/gp/aws/developer/registration/index.html) if you do not already have one and log in. The IAM user that you use must have sufficient permissions to make necessary AWS service calls and manage AWS resources.
* [Git Installed](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git)
* [AWS Serverless Application Model](https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/serverless-sam-cli-install.html) (AWS SAM) installed

## Deployment Instructions

1. Create a new directory, navigate to that directory in a terminal and clone the GitHub repository:
```
git clone https://github.com/aws-samples/step-functions-workflows-collection
```
1. Change directory to the pattern directory:
```
    cd step-functions-workflows-collection/sam/app-video-segment-detection-and-edition
```
1. From the command line, use AWS SAM to build and deploy the AWS resources for the workflow as specified in the template.yaml file:
```
sam build
sam deploy --guided --capabilities CAPABILITY_NAMED_IAM
```
1. During the prompts:
* Enter a stack name
* Enter the desired AWS Region (some services may only be available in **us-east-1**)
* Allow SAM CLI to create IAM roles with the required permissions.

    Once you have run `sam deploy --guided` once and saved your arguments to a configuration file (`samconfig.toml`), you can use `sam deploy` in the future to reuse these defaults.

1. Note the outputs from the SAM deployment process. These contain the resource names and/or ARNs which are used for testing.

## How it works

This workflow automates the video editing process by using Amazon Rekognition to detect segments and AWS Elemental MediaConvert to remove them. As input, you provide a video file with SMPTE color bars at the beginning; as output, you get a transcoded video file without the color bars.
For the input, use the video created for the demo. Once you deploy via SAM, note the S3 bucket to which you must upload the video; the output goes to the same bucket.

The workflow starts by calling Amazon Rekognition to begin a segment detection job. The time this job takes varies with the length of the video, so the workflow implements a polling loop with Wait and Choice states: every 60 seconds, it calls the Rekognition GetSegmentDetection API to check the job status. If the job hasn't finished, the workflow stays in the loop; once it succeeds, the workflow passes the detected segments to MediaConvert for processing.

Finally, MediaConvert receives the segments as input (one of them is the SMPTE color bars at the beginning of the video) along with the S3 video location, and triggers the transcoding job, during which it removes the SMPTE color bars. The MediaConvert CreateJob task is a synchronous call following the [Run a Job (.sync)](https://docs.aws.amazon.com/step-functions/latest/dg/connect-to-resource.html#connect-sync) pattern in AWS Step Functions, which means you don't need to implement a polling loop to get the job status (as with Amazon Rekognition); Step Functions handles that with MediaConvert for you.
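The clipping logic can be sketched in Python. This is an illustrative sketch, not code from the sample: it mirrors the JSONPath `$.Segments.Segments[0].EndTimecodeSMPTE` that the state machine passes to MediaConvert as the clip start, assuming a GetSegmentDetection response of that shape (the function name and example values are ours).

```python
def clip_start_timecode(get_segment_detection_response):
    # Mirrors the JSONPath $.Segments.Segments[0].EndTimecodeSMPTE used by
    # the state machine: the first detected technical cue is assumed to be
    # the SMPTE color bars, so the clean clip starts where that segment ends.
    segments = get_segment_detection_response["Segments"]
    return segments[0]["EndTimecodeSMPTE"]

# Response fragment with the shape the workflow relies on (values invented):
response = {
    "JobStatus": "SUCCEEDED",
    "Segments": [
        {"Type": "TECHNICAL_CUE", "EndTimecodeSMPTE": "00:00:09:29"},
    ],
}
print(clip_start_timecode(response))  # -> 00:00:09:29
```

MediaConvert then uses this timecode as the `StartTimecode` of `InputClippings`, so everything before it is dropped from the output.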

Note: The polling loop for the Amazon Rekognition segment detection job could also be implemented with custom code in Lambda functions, but for simplicity this example uses Wait and Choice states to keep everything low-code. Additionally, in a production environment, you should also handle failures from the segment detection job.
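The decision the Choice state makes on each pass through the loop can be sketched as a small function. This is a hypothetical illustration, not code from the sample; the `FAILED` branch shows the production failure handling the note above recommends, which the deployed state machine does not implement.

```python
def next_state(job_status):
    # Mirrors the workflow's Choice state: only SUCCEEDED moves on to
    # transcoding; any other status loops back to the 60-second Wait.
    if job_status == "SUCCEEDED":
        return "MediaConvert CreateJob"
    if job_status == "FAILED":
        # Hypothetical branch for production use; without it, the sample
        # would keep polling a failed job forever.
        return "Fail"
    return "Wait"

print(next_state("IN_PROGRESS"))  # -> Wait
```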

## State Machine

<div style="text-align:center"><img src="./resources/statemachine.png" /></div>

## Testing

Once you have deployed the state machine, trigger it to see it in action:

1. Go to the Amazon S3 console in your AWS account.

1. Identify the S3 bucket created via SAM (see the SAM outputs), and upload the video stored in the *./resources* folder of your cloned repo.

1. Go to the AWS Step Functions console in your AWS account.

1. Identify the state machine. You should see a state machine with a name that matches the ARN in the outputs of the `sam deploy` command.

1. Click on the state machine name and then click **Start Execution** in the top-right of your screen.

1. In the following window, provide the input of the state machine. Substitute the *bucket name* (don't include the `s3://` prefix). It should look like the JSON below.

```json
{
"Input": {
"Bucket": "{S3 BUCKET NAME CREATED VIA SAM}",
"Key": "source_video.mp4"
},
"Output": {
"Bucket": "{S3 BUCKET NAME CREATED VIA SAM}",
"Key": "output/"
}
}
```
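If you prefer to build the execution input programmatically, a small helper can generate this JSON. This is an illustrative sketch (the function name is ours, not part of the sample); it assumes the demo video keeps the key `source_video.mp4` shown above and that the same SAM-created bucket is used for input and output.

```python
import json

def build_execution_input(bucket, input_key="source_video.mp4", output_prefix="output/"):
    # Builds the execution input shown above; the SAM-created bucket is
    # used as both the input and output location.
    return json.dumps(
        {
            "Input": {"Bucket": bucket, "Key": input_key},
            "Output": {"Bucket": bucket, "Key": output_prefix},
        },
        indent=2,
    )

print(build_execution_input("my-sam-bucket"))
```

You could pass this string as the input of a StartExecution call (for example via the AWS CLI or an SDK) instead of pasting it into the console.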

1. Click on **Start Execution**

1. It will take a few minutes, but once the workflow finishes, you should see an image like the one below.

<div style="text-align:center"><img src="./resources/workflow.png" /></div>

1. At this point you can go to the output S3 bucket to watch the edited video, without the SMPTE color bars at the beginning.

## Cleanup

1. Empty the S3 bucket created via SAM
1. Delete the stack
```bash
sam delete
```

----
Copyright 2022 Amazon.com, Inc. or its affiliates. All Rights Reserved.

SPDX-License-Identifier: MIT-0
Binary file not shown
111 changes: 111 additions & 0 deletions sam/app-video-segment-detection-and-edition/statemachine/statemachine.asl.json
@@ -0,0 +1,111 @@
{
  "Comment": "Detects SMPTE color bars with Amazon Rekognition and removes them with MediaConvert",
"StartAt": "StartSegmentDetection",
"States": {
"StartSegmentDetection": {
"Type": "Task",
"Parameters": {
"Filters": {
"ShotFilter": {
"MinSegmentConfidence": 95
}
},
"SegmentTypes": [
"TECHNICAL_CUE"
],
"Video": {
"S3Object": {
"Bucket.$": "$.Input.Bucket",
"Name.$": "$.Input.Key"
}
}
},
"Resource": "arn:aws:states:::aws-sdk:rekognition:startSegmentDetection",
"Next": "Wait",
"ResultPath": "$.SegmentJob"
},
"Wait": {
"Type": "Wait",
"Seconds": 60,
"Next": "GetSegmentDetection"
},
"GetSegmentDetection": {
"Type": "Task",
"Parameters": {
"JobId.$": "$.SegmentJob.JobId"
},
"Resource": "arn:aws:states:::aws-sdk:rekognition:getSegmentDetection",
"Next": "Choice",
"ResultPath": "$.Segments"
},
"Choice": {
"Type": "Choice",
"Choices": [
{
"Variable": "$.Segments.JobStatus",
"StringEquals": "SUCCEEDED",
"Next": "MediaConvert CreateJob"
}
],
"Default": "Wait"
},
"MediaConvert CreateJob": {
"Type": "Task",
"Resource": "arn:aws:states:::mediaconvert:createJob.sync",
"Parameters": {
"Queue": "arn:aws:mediaconvert:${REGION}:${AWS_ACCOUNT_ID}:queues/Default",
"UserMetadata": {},
"Role": "${MEDIACONVERT_ROLE}",
"Settings": {
"TimecodeConfig": {
"Source": "ZEROBASED"
},
"OutputGroups": [
{
"Name": "Apple HLS",
"Outputs": [
{
"Preset": "System-Ott_Hls_Ts_Avc_Aac_16x9_1280x720p_30Hz_5.0Mbps",
"NameModifier": "test"
}
],
"OutputGroupSettings": {
"Type": "HLS_GROUP_SETTINGS",
"HlsGroupSettings": {
"SegmentLength": 10,
"Destination.$": "States.Format('s3://{}/{}', $.Output.Bucket, $.Output.Key)",
"MinSegmentLength": 0
}
}
}
],
"FollowSource": 1,
"Inputs": [
{
"InputClippings": [
{
"StartTimecode.$": "$.Segments.Segments[0].EndTimecodeSMPTE"
}
],
"AudioSelectors": {
"Audio Selector 1": {
"DefaultSelection": "DEFAULT"
}
},
"VideoSelector": {},
"TimecodeSource": "ZEROBASED",
"FileInput.$": "States.Format('s3://{}/{}', $.Input.Bucket, $.Input.Key)"
}
]
},
"BillingTagsSource": "JOB",
"AccelerationSettings": {
"Mode": "DISABLED"
},
"StatusUpdateInterval": "SECONDS_60",
"Priority": 0
},
"End": true
}
}
}
82 changes: 82 additions & 0 deletions sam/app-video-segment-detection-and-edition/template.yaml
@@ -0,0 +1,82 @@

AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: Video segment detection and editing workflow using Amazon Rekognition and AWS Elemental MediaConvert

Resources:
MySampleStateMachine:
Type: AWS::Serverless::StateMachine
Properties:
DefinitionUri: statemachine/statemachine.asl.json
DefinitionSubstitutions:
AWS_ACCOUNT_ID: !Ref AWS::AccountId
        MEDIACONVERT_ROLE: !GetAtt MediaConvertRole.Arn
REGION: !Ref AWS::Region
Policies:
- Version: "2012-10-17"
Statement:
- Effect: "Allow"
Action: [
"rekognition:GetSegmentDetection",
"rekognition:StartSegmentDetection",
"mediaconvert:CreateJob",
"mediaconvert:GetJob",
"events:PutTargets",
"events:PutRule",
"events:DescribeRule"
]
Resource: "*"
- Effect: "Allow"
Action: [
"iam:PassRole"
]
Resource: "*"
Condition:
StringLike:
"iam:PassedToService": "mediaconvert.amazonaws.com"
- S3ReadPolicy:
BucketName: !Ref S3Bucket


S3Bucket:
Type: 'AWS::S3::Bucket'

MediaConvertRole:
Type: AWS::IAM::Role
Properties:
AssumeRolePolicyDocument:
Version: "2012-10-17"
Statement:
- Effect: Allow
Principal:
Service:
- mediaconvert.amazonaws.com
Action:
- 'sts:AssumeRole'
Policies:
- PolicyName: "mediaconvert_default"
PolicyDocument:
Version: "2012-10-17"
Statement:
- Effect: "Allow"
Action: [
"s3:Get*",
"s3:List*",
"s3:Put*"
]
Resource:
- !GetAtt S3Bucket.Arn
- !Join
- ''
- - !GetAtt S3Bucket.Arn
- '/*'



Outputs:
  MySegmentDetectionStateMachineArn:
Description: "MySampleStateMachine ARN"
Value: !Ref MySampleStateMachine
S3Bucket:
Description: "Video input/output S3 bucket"
Value: !Ref S3Bucket