znedi3 is a CPU-optimized version of nnedi.
nnedi3 is an intra-field only deinterlacer. It takes a frame, throws away one field, and then interpolates the missing pixels using only information from the remaining field. It is also good for enlarging images by powers of two.
This plugin no longer provides the nnedi3_rpow2 filter. A replacement can be found here: http://forum.doom9.org/showthread.php?t=172652
This is a port of tritical's nnedi3 filter.
nnedi3_weights.bin is required. In Windows, it must be located in the same folder as libnnedi3.dll. Everywhere else it can be located either in the same folder as libnnedi3.dylib, or in
nnedi3.nnedi3(clip clip, int field[, bint dh=False, int planes=[0, 1, 2], int nsize=6, int nns=1, int qual=1, int etype=0, int pscrn=2, bint opt=True, bint int16_prescreener=True, bint int16_predictor=True, int exp=0, bint show_mask=False])
clip: Clip to process. It must have constant format and dimensions, and integer samples with 8..16 bits or float samples with 32 bits.
field: Selects the mode of operation. Possible values:
- 0: Same rate, keep bottom field.
- 1: Same rate, keep top field.
- 2: Double rate, start with bottom field.
- 3: Double rate, start with top field.
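The double-rate modes can be illustrated with a small pure-Python sketch (not plugin code): each input frame yields two output frames, one interpolated from each of its fields, with field 2 or 3 selecting which field comes first.

```python
def double_rate_mapping(num_frames, field):
    """For double-rate modes (field=2 or 3), each input frame yields two
    output frames, one reconstructed from each of its fields.

    Returns a list of (input_frame, kept_field) pairs. This is only an
    illustration of the frame mapping, not the plugin's code.
    """
    assert field in (2, 3)
    first = 'bottom' if field == 2 else 'top'
    second = 'top' if field == 2 else 'bottom'
    out = []
    for n in range(num_frames):
        out.append((n, first))
        out.append((n, second))
    return out

# field=3: each frame's top field is kept first.
print(double_rate_mapping(2, 3))
# [(0, 'top'), (0, 'bottom'), (1, 'top'), (1, 'bottom')]
```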
If dh is True, the _Field frame property is used to determine which field each frame contains. The field parameter is only a fallback for frames that don't have the _Field property.
If dh is False, the _FieldBased frame property is used to determine each frame's field dominance. The field parameter is only a fallback for frames that don't have the _FieldBased property, or where said property indicates that the frame is progressive.
dh: Doubles the height, keeping both fields. If field is 0, the input is copied to the odd lines of the output (the bottom field). If field is 1, the input is copied to the even lines of the output (the top field).
If dh is True, field must be 0 or 1.
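The line placement with dh=True can be sketched in plain Python (an illustration of the copy pattern, not the plugin's code):

```python
def place_field(field_rows, field):
    """Sketch of dh=True line placement: the input rows become one field
    of an output twice as tall; the other field's rows are left to be
    interpolated (None here). field=1 puts the input on the even lines
    (top field), field=0 on the odd lines (bottom field)."""
    out = [None] * (len(field_rows) * 2)
    start = 1 - field  # field=1 -> rows 0, 2, 4...; field=0 -> rows 1, 3, 5...
    for i, row in enumerate(field_rows):
        out[start + 2 * i] = row
    return out

print(place_field(['a', 'b'], 1))  # ['a', None, 'b', None]
print(place_field(['a', 'b'], 0))  # [None, 'a', None, 'b']
```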
planes: Planes to process. Planes that are not processed will contain uninitialised memory.
nsize: Size of the local neighbourhood around each pixel used by the predictor neural network. Possible settings:
- 0: 8x6
- 1: 16x6
- 2: 32x6
- 3: 48x6
- 4: 8x4
- 5: 16x4
- 6: 32x4
For image enlargement it is recommended to use 0 or 4. A taller neighbourhood will result in sharper output.
For deinterlacing a wider neighbourhood will allow connecting lines of smaller slope. However, the setting to use depends on the amount of aliasing (lost information) in the source. If the source was heavily low-pass filtered before interlacing then aliasing will be low and a wide neighbourhood won't be needed, and vice-versa.
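The seven settings above can be summarised as width x height pairs; a small lookup table (purely illustrative) makes the trade-off explicit:

```python
# nsize code -> (width, height) of the predictor's local neighbourhood,
# as listed in the parameter documentation above.
NSIZE = {
    0: (8, 6),
    1: (16, 6),
    2: (32, 6),
    3: (48, 6),
    4: (8, 4),
    5: (16, 4),
    6: (32, 4),
}

# Wider windows (more columns) help reconnect shallow diagonal lines when
# deinterlacing; taller windows (more rows) sharpen enlarged images.
w, h = NSIZE[6]  # the default setting
print(w, h)  # 32 4
```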
nns: Number of neurons in the predictor neural network. Possible values:
- 0: 16
- 1: 32
- 2: 64
- 3: 128
- 4: 256
Higher values are slower, but provide better quality. However, quality differences are usually small. The difference in speed will become larger if qual is increased.
qual: The number of different neural network predictions that are blended together to compute the final output value. Each neural network was trained on a different set of training data. Blending the results of these different networks improves generalisation to unseen data. Possible values are 1 and 2.
A value of 2 is recommended for image enlargement.
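Conceptually, blending works like the sketch below. A plain average is assumed here for illustration; the plugin's exact blending arithmetic is not specified in this document.

```python
def blend_predictions(preds):
    """Blend the predictions of several networks into one output value.
    A simple mean is assumed for illustration; the plugin's actual
    blend may differ."""
    return sum(preds) / len(preds)

# Two networks trained on different data disagree slightly; the blended
# value lands between them, which tends to generalise better.
print(blend_predictions([118.0, 122.0]))  # 120.0
```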
etype: The set of weights used in the predictor neural network. Possible values:
- 0: Weights trained to minimise absolute error.
- 1: Weights trained to minimise squared error.
pscrn: The prescreener used to decide which pixels should be processed by the predictor neural network, and which can be handled by simple cubic interpolation. Since most pixels can be handled by cubic interpolation, using the prescreener generally results in much faster processing. Possible values:
- 0: No prescreening. No pixels will be processed with cubic interpolation. This is really slow.
- 1: Old prescreener.
- 2: New prescreener level 0.
- 3: New prescreener level 1.
- 4: New prescreener level 2.
The new prescreener is faster than the old one, and it also sends more pixels to cubic interpolation. The higher levels send slightly more pixels to the predictor neural network, so they are slower than the lowest level.
The new prescreener is not available with float input.
Default: 2 for integer input, 1 for float input.
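For pixels the prescreener hands off, the missing value is cubically interpolated from the four vertically adjacent pixels of the kept field. A minimal sketch of such a 4-tap cubic filter follows; the Catmull-Rom midpoint weights used here are an assumption for illustration, not necessarily the plugin's exact coefficients.

```python
def cubic_interp(a, b, c, d):
    """4-tap cubic interpolation of the sample midway between b and c,
    using two neighbours on each side. Catmull-Rom midpoint weights
    (-1, 9, 9, -1)/16 are assumed; the plugin's actual coefficients
    may differ."""
    return (-a + 9 * b + 9 * c - d) / 16.0

# Filling a missing scanline pixel from the 4 vertically adjacent
# pixels of the kept field:
print(cubic_interp(10, 20, 20, 10))  # 21.25
```

Note that the weights sum to 1, so flat areas pass through unchanged, which is why cubic interpolation is a safe fallback for the "easy" pixels the prescreener identifies.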
opt: If True, the best optimised functions supported by the CPU will be used. If False, only scalar functions will be used.
int16_prescreener: If True, the prescreener will perform the dot product calculations using 16 bit integers. Otherwise, it will use single precision floats.
This parameter is ignored when the input has float samples.
int16_predictor: If True, the predictor will perform the dot product calculations using 16 bit integers. Otherwise, it will use single precision floats.
This parameter is ignored when the input has more than 15 bits per sample.
exp: The exp function approximation to use in the predictor. 0 is the fastest and least accurate. 2 is the slowest and most accurate.
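The speed/accuracy trade-off behind this parameter can be illustrated with a truncated Taylor series for e**x; this is NOT the approximation znedi3 itself uses, only a demonstration of how fewer terms buy speed at the cost of accuracy.

```python
import math

def exp_taylor(x, terms):
    """Truncated Taylor series for e**x. Fewer terms is faster but less
    accurate, mirroring the trade-off the exp parameter controls.
    Illustrative only; not the plugin's actual approximation."""
    total, term = 1.0, 1.0
    for n in range(1, terms):
        term *= x / n
        total += term
    return total

x = 0.5
for terms in (3, 5, 9):
    # The absolute error shrinks as more terms are kept.
    print(terms, abs(exp_taylor(x, terms) - math.exp(x)))
```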
show_mask: If True, the pixels that would be processed with the predictor neural network are instead set to white.
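Putting the parameters together, a typical VapourSynth script might look like the sketch below. Everything here is an assumption to adapt to your setup: the source is loaded with the ffms2 plugin, and the filter is assumed to be registered under the znedi3 namespace (with other ports it may be core.nnedi3.nnedi3 instead).

```python
import vapoursynth as vs

core = vs.core

# Assumes the ffms2 source plugin is installed; any source filter works.
clip = core.ffms2.Source('interlaced.mkv')

# Double-rate deinterlace, starting with the top field (field=3).
# The znedi3 namespace is an assumption; adjust to your installed plugin.
deint = core.znedi3.nnedi3(clip, field=3, nns=3, qual=2)

# Or double the height of a progressive image (dh=True), copying the
# input to the even lines (field=1) and interpolating the odd lines.
doubled = core.znedi3.nnedi3(clip, field=1, dh=True)

deint.set_output()
```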
Clone the repository (using the --recursive argument to also download the required vsxx library as a submodule):
git clone --recursive https://github.com/sekrit-twc/znedi3
Compile the library:
cd znedi3
make X86=1
To install, copy vsznedi3.so and nnedi3_weights.bin to the VapourSynth plugin folder (usually /usr/lib/x86_64-linux-gnu/vapoursynth/ on Debian-based systems):
sudo cp nnedi3_weights.bin vsznedi3.so /usr/lib/x86_64-linux-gnu/vapoursynth/
There is also a test application which can be built to check the efficiency of the plugin kernels optimized for different SIMD instructions:
make X86=1 testapp/testapp