This repository has been archived by the owner on May 11, 2023. It is now read-only.
# Video Processing
AlphaAtlas edited this page Oct 21, 2019
Scripted neural network parameters aside, `ProcessVideoAuto.vpy` works just like any other VapourSynth script. However, there are a few quirks related to videos and the super resolution filters:
- Some of the pretrained mxnet models are native RGB models, and some are YUV. `super_resolution()` will convert your video to 32-bit RGB/YUV, which means you might have to split the planes out with `ShufflePlanes` and/or convert it back with something like `mvs.ToYUV(clip, depth = 10)`.
- In general, artifact removal should be done before running the super resolution functions, while sharpening, line thickening, and the like can be done before or after.
- `mvs.Preview()` is a great way to flip between before/after results. Typically it would look like this:
```python
origclip = clip
# do stuff to clip
out = mvs.Preview([clip, origclip], bits = 10)
out.set_output()
```
- You can blend different `super_resolution` models together:

```python
sr1 = super_resolution(clip)
sr2 = VSGAN.Start(clip)
# Bring both outputs into the same format and bit depth before merging.
sr1 = mvs.ToYUV(sr1, bits = 16)
sr2 = mvs.ToYUV(sr2, bits = 16)
# 50/50 blend of the two super resolution results.
clip = core.std.Merge(sr1, sr2, weight = 0.5)
```
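To make the plane-splitting step in the first bullet concrete: a planar RGB frame is just three stacked 2-D planes, and `ShufflePlanes` pulls a single plane out as its own grey clip. A minimal NumPy sketch of that idea (arrays standing in for frames; this is an illustration, not the VapourSynth API):

```python
import numpy as np

# A fake 2x2 RGB frame stored planar: shape (planes, height, width).
frame = np.array([
    [[255, 0], [0, 255]],  # R plane
    [[0, 255], [0, 0]],    # G plane
    [[0, 0], [255, 0]],    # B plane
])

# "ShufflePlanes"-style extraction: pull the R plane out on its own.
r_plane = frame[0]
print(r_plane.tolist())  # [[255, 0], [0, 255]]
```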
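The before/after flipping that `mvs.Preview` enables can be thought of as frame interleaving (A0, B0, A1, B1, ...), so stepping one frame forward in a previewer toggles between versions. A hypothetical pure-Python sketch of interleaving, with strings standing in for frames (the exact mechanism inside `mvs.Preview` is an assumption here):

```python
def interleave(*clips):
    """Interleave equal-length clips frame by frame: A0, B0, A1, B1, ..."""
    return [frame for frames in zip(*clips) for frame in frames]

# Strings stand in for frames of the processed and original clips.
processed = ["p0", "p1", "p2"]
original = ["o0", "o1", "o2"]

preview = interleave(processed, original)
print(preview)  # ['p0', 'o0', 'p1', 'o1', 'p2', 'o2']
```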
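As for what `core.std.Merge` computes in the blending example above: with a weight `w` it is a per-pixel linear blend, `out = a * (1 - w) + b * w`, so `weight = 0.5` averages the two clips. A small NumPy sketch with arrays standing in for the two super resolution outputs:

```python
import numpy as np

def merge(a, b, weight=0.5):
    """Per-pixel linear blend, mirroring std.Merge: out = a*(1 - w) + b*w."""
    return a * (1.0 - weight) + b * weight

# Fake 2x2 planes standing in for the two super resolution outputs.
sr1 = np.array([[0.0, 100.0], [200.0, 300.0]])
sr2 = np.array([[100.0, 100.0], [100.0, 100.0]])

blended = merge(sr1, sr2, weight=0.5)
print(blended.tolist())  # [[50.0, 100.0], [150.0, 200.0]]
```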