Hello. I'm trying to blend two videos together using `MTIBlendMode.screen`.

I've read the similar questions here: #302 and here: MetalPetal/VideoIO#26.

My code generates a blended video, but always with the wrong dimensions. Say I blend two videos with different resolutions (both are vertical). In the blended video, the main video has become horizontal, while the overlay video is still vertical but scaled. Why does that happen? How should I provide the frame sizes? I also tried setting the `originalAsset` naturalSize width and height on the `MultilayerCompositingFilter`, but no luck.

I expect the final video to be blended with the overlay at the same resolution as `originalAsset` (both `originalAsset` and `overlayAsset` will always be vertical).
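For reference, the naturalSize attempt mentioned above looked roughly like this (a reconstructed sketch, not my exact code — the idea was to size the overlay layer from the original track's `naturalSize` in pixels instead of using `.fractionOfBackgroundSize`):

```swift
// Sketch of the naturalSize attempt described above (reconstruction; values approximate).
// Size the overlay layer to the original video track's naturalSize, in pixel units.
let naturalSize = originalAsset.tracks(withMediaType: .video)[0].naturalSize
filter.layers = [
    MultilayerCompositingFilter.Layer(content: overlayImage)
        .frame(CGRect(x: 0, y: 0, width: naturalSize.width, height: naturalSize.height),
               layoutUnit: .pixel)
        .blendMode(.screen)
]
```

This still produced the rotated/mis-sized result, so I suspect the frame values are not the real problem.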
Code:
```swift
// Overlay asset bundled with the app.
let overlayAsset = AVURLAsset(url: Bundle.main.url(forResource: "blendoverlay", withExtension: "mp4")!)

// 1 - Create an AVMutableComposition. This object will hold the AVMutableCompositionTrack instances.
let mixComposition = AVMutableComposition()

// 2 - Create two video tracks.
guard let firstTrack = mixComposition.addMutableTrack(withMediaType: .video,
                                                      preferredTrackID: Int32(kCMPersistentTrackID_Invalid)) else { return }
do {
    try firstTrack.insertTimeRange(CMTimeRangeMake(start: .zero, duration: originalAsset.duration),
                                   of: originalAsset.tracks(withMediaType: .video)[0],
                                   at: .zero)
} catch {
    print("Failed to load first track")
    return
}

guard let secondTrack = mixComposition.addMutableTrack(withMediaType: .video,
                                                       preferredTrackID: Int32(kCMPersistentTrackID_Invalid)) else { return }
do {
    try secondTrack.insertTimeRange(CMTimeRangeMake(start: .zero, duration: overlayAsset.duration),
                                    of: overlayAsset.tracks(withMediaType: .video)[0],
                                    at: .zero)
} catch {
    print("Failed to load second track")
    return
}

// 3 - Blend the videos.
let renderContext = try! MTIContext(device: MTLCreateSystemDefaultDevice()!)
videoComposition = MTIVideoComposition(asset: mixComposition, context: renderContext, queue: DispatchQueue.main, filter: { request in
    let sourceImages = request.sourceImages
    guard let originalImage = sourceImages[originalAsset.tracks(withMediaType: .video)[0].trackID] else {
        return MTIImage.black
    }
    guard let overlayImage = sourceImages[overlayAsset.tracks(withMediaType: .video)[0].trackID] else {
        return MTIImage.black
    }
    let filter = MultilayerCompositingFilter()
    filter.layers = [MultilayerCompositingFilter.Layer(content: overlayImage)
        .frame(CGRect(x: 0, y: 0, width: 1, height: 1), layoutUnit: .fractionOfBackgroundSize)
        .blendMode(.screen)]
    filter.inputBackgroundImage = originalImage
    return filter.outputImage!
})

let playerItem = AVPlayerItem(asset: mixComposition)
playerItem.videoComposition = videoComposition!.makeAVVideoComposition()
self.videoComposition = videoComposition
player.replaceCurrentItem(with: playerItem)
player.play()
completion(player)
```