WolframRhodium edited this page May 1, 2024 · 37 revisions

Welcome to the vs-mlrt wiki!

The goal of this project is to provide a highly optimized AI inference runtime for VapourSynth.

Runtimes

  • vs-ov: pure CPU AI inference runtime based on OpenVINO
  • vs-ort: CPU/CUDA AI inference runtime based on ONNX Runtime
  • vs-trt: CUDA AI inference runtime based on TensorRT
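All three runtimes are typically driven through the project's Python wrapper (vsmlrt.py), which selects a backend per call. The sketch below illustrates backend selection; the exact constructor parameters shown (`device_id`, `fp16`) are assumptions based on the wrapper's conventions and may differ between versions:

```python
# Hedged sketch of backend selection via the vsmlrt.py wrapper.
# Assumes VapourSynth and the vs-mlrt plugins are installed; not runnable without them.
from vsmlrt import Backend

# Pure CPU inference (vs-ov)
cpu_backend = Backend.OV_CPU()

# CPU/CUDA inference via ONNX Runtime (vs-ort); device_id picks the GPU
cuda_backend = Backend.ORT_CUDA(device_id=0)

# TensorRT inference (vs-trt); fp16 trades precision for speed
trt_backend = Backend.TRT(fp16=True, device_id=0)
```

The same model call can then be switched between CPU and GPU execution just by passing a different backend object.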

Models

The following models are available:

  • waifu2x: anime super-resolution / upscaling / denoising
  • DPIR: denoising / deblocking
  • RealESRGANv2: anime super-resolution / upscaling
  • Real-CUGAN: anime super-resolution / upscaling / denoising
  • RIFE: video frame interpolation
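As a usage illustration, the sketch below chains two of the models above (DPIR denoising, then waifu2x upscaling) on the CPU backend. The source filter (`bs.VideoSource`) and the parameter values are illustrative assumptions, not prescribed defaults:

```python
# Hedged sketch: denoise with DPIR, then upscale with waifu2x.
# Assumes VapourSynth, the vs-mlrt plugins, and the released model files are installed.
import vapoursynth as vs
from vsmlrt import DPIR, Waifu2x, Backend

core = vs.core
clip = core.bs.VideoSource("input.mkv")  # any source filter works here

# The model wrappers generally expect 32-bit float RGB input
clip = core.resize.Bicubic(clip, format=vs.RGBS, matrix_in_s="709")

clip = DPIR(clip, strength=5, backend=Backend.OV_CPU())          # denoise
clip = Waifu2x(clip, noise=-1, scale=2, backend=Backend.OV_CPU())  # 2x upscale
clip.set_output()
```

Swapping `Backend.OV_CPU()` for a CUDA-capable backend moves the same pipeline onto the GPU without other changes.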

Device-specific benchmarks