vs-mlrt

This project provides VapourSynth ML filter runtimes for a variety of platforms:

- vsov: OpenVINO-based pure CPU & Intel GPU runtime
- vsort: ONNX Runtime-based CPU/GPU runtime
- vstrt: TensorRT-based GPU runtime
- vsncnn: NCNN-based GPU (Vulkan) runtime

To simplify usage, we also provide a Python wrapper, vsmlrt.py, that covers all bundled models and offers a unified interface for selecting different backends.

Please refer to the wiki for supported models & usage information.
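For example, a script using the wrapper might look like the following minimal sketch (the model and backend names used here, Waifu2x, Waifu2xModel and Backend.OV_CPU, are assumed from a typical vsmlrt.py release; check the wiki for the names shipped with your version):

```python
import vapoursynth as vs
from vsmlrt import Backend, Waifu2x, Waifu2xModel

core = vs.core

# Any RGBS (32-bit float RGB) clip works as input; a blank clip is used here
# only to keep the example self-contained.
clip = core.std.BlankClip(format=vs.RGBS, width=1280, height=720, length=100)

# Run a bundled model; switching runtimes only requires passing a different
# Backend instance (see the runtime-specific sections below).
flt = Waifu2x(clip, noise=-1, scale=2,
              model=Waifu2xModel.upconv_7_anime_style_art_rgb,
              backend=Backend.OV_CPU())

flt.set_output()
```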

vsov: OpenVINO-based Pure CPU & Intel GPU Runtime

OpenVINO is an AI inference runtime developed by Intel, mainly targeting x86 CPUs and Intel GPUs.

The vs-openvino plugin provides an optimized pure CPU & Intel GPU runtime for some popular AI filters. Supported Intel GPUs include Gen 8+ integrated graphics (Broadwell and newer) and the Arc series of discrete GPUs.
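Continuing the hedged sketch above, choosing between the pure CPU path and the Intel GPU path should only require a different backend argument (Backend.OV_CPU and Backend.OV_GPU are the names assumed from a typical vsmlrt.py release):

```python
# Names assumed from the vsmlrt.py sketch above.
flt_cpu = Waifu2x(clip, backend=Backend.OV_CPU())  # pure CPU inference
flt_gpu = Waifu2x(clip, backend=Backend.OV_GPU())  # Intel GPU (Gen 8+ / Arc)
```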

To install, download the latest release and extract it into your VS plugins directory.

Please visit the vsov directory for details.

vsort: ONNX Runtime-based CPU/GPU Runtime

ONNX Runtime is an AI inference runtime with many backends.

The vs-onnxruntime plugin provides optimized CPU and CUDA GPU runtime for some popular AI filters.
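With the vsmlrt.py wrapper, the hedged sketch from the introduction selects this plugin through the ONNX Runtime backends (Backend.ORT_CPU and Backend.ORT_CUDA are the names assumed from a typical release):

```python
# Names assumed from the vsmlrt.py sketch above.
flt_cpu  = Waifu2x(clip, backend=Backend.ORT_CPU())   # CPU inference
flt_cuda = Waifu2x(clip, backend=Backend.ORT_CUDA())  # NVidia GPU via CUDA
```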

To install, download the latest release and extract it into your VS plugins directory.

Please visit the vsort directory for details.

vstrt: TensorRT-based GPU Runtime

TensorRT is a highly optimized AI inference runtime for NVidia GPUs. It uses benchmarking to find the optimal kernels for your specific GPU, so there is an extra step: an engine must be built from the ONNX network on the machine where the vstrt filter will run. This extra step makes deploying models a little harder than with the other runtimes, but the resulting performance is also typically much better than that of vsort's CUDA backend.
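When vstrt is driven through the vsmlrt.py wrapper, this engine-building step is typically handled for you: the first run builds and caches an engine for the current GPU, and later runs reuse it. A hedged sketch, assuming the Backend.TRT name (and its fp16 parameter) from a typical vsmlrt.py release:

```python
# Names assumed from the vsmlrt.py sketch above. The first run is slow because
# a TensorRT engine is built and benchmarked for this GPU; the engine is then
# cached and reused on subsequent runs.
flt = Waifu2x(clip,
              model=Waifu2xModel.upconv_7_anime_style_art_rgb,
              backend=Backend.TRT(fp16=True))
```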

To install, download the latest release and extract it into your VS plugins directory.

Please visit the vstrt directory for details.

vsncnn: NCNN-based GPU (Vulkan) Runtime

ncnn is a popular AI inference runtime. vsncnn provides a Vulkan-based runtime for some AI filters. It supports on-the-fly conversion from ONNX to ncnn's native format, so it offers the same unified interface as the other runtimes in this project. Because it uses the device-independent Vulkan API for GPU-accelerated inference, the plugin supports any GPU that provides a Vulkan interface (NVidia, AMD, and Intel integrated & discrete GPUs all do). Another benefit is a significantly smaller footprint than the other GPU runtimes (both the vsort and vstrt CUDA backends require >1 GB of CUDA libraries). The main drawback is that it is slower.
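Through the vsmlrt.py wrapper, the same hedged sketch from the introduction runs on this plugin by selecting the Vulkan backend (Backend.NCNN_VK is the name assumed from a typical release); the ONNX model is converted to ncnn's native format on the fly:

```python
# Names assumed from the vsmlrt.py sketch above; inference runs through Vulkan
# on any GPU with a Vulkan driver.
flt = Waifu2x(clip,
              model=Waifu2xModel.upconv_7_anime_style_art_rgb,
              backend=Backend.NCNN_VK())
```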

To install, download the latest release and extract it into your VS plugins directory.

Please visit the vsncnn directory for details.