Home
WolframRhodium edited this page May 1, 2024 · 37 revisions
Welcome to the vs-mlrt wiki!
The goal of this project is to provide highly optimized AI inference runtimes for VapourSynth.
- vs-ov: OpenVINO-based pure-CPU AI Inference Runtime
- vs-ort: ONNX Runtime-based CPU/CUDA AI Inference Runtime
- vs-trt: TensorRT-based CUDA AI Inference Runtime
The following models are available:
- waifu2x: anime super-resolution / upscaling / denoising
- DPIR: denoising / deblocking
- RealESRGANv2: anime super-resolution / upscaling
- Real-CUGAN: anime super-resolution / upscaling / denoising
- RIFE: video frame interpolation
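As a hedged illustration of how these runtimes and models come together, the sketch below shows what a VapourSynth script using one of the wrappers might look like. The names `vsmlrt`, `Waifu2x`, `Waifu2xModel`, and `Backend` are assumptions based on the project's bundled Python wrapper and are not confirmed by this page; consult the Runtimes and Models pages for the authoritative API.

```python
# Hypothetical sketch of a VapourSynth script using the vsmlrt wrapper.
# All vsmlrt names here (Backend, Waifu2x, Waifu2xModel) are assumptions;
# see the Runtimes and Models wiki pages for the actual interface.
import vapoursynth as vs
from vsmlrt import Backend, Waifu2x, Waifu2xModel

core = vs.core

clip = core.lsmas.LWLibavSource("input.mkv")      # load the source video
clip = core.resize.Bicubic(clip, format=vs.RGBS)  # models expect 32-bit float RGB

# Run waifu2x denoising + 2x upscaling. The same model call can be pointed
# at a different runtime by swapping the backend: vs-ov (pure CPU),
# vs-ort (CPU/CUDA), or vs-trt (CUDA).
clip = Waifu2x(
    clip,
    noise=1,
    scale=2,
    model=Waifu2xModel.upconv_7_anime_style_art_rgb,
    backend=Backend.TRT(fp16=True),  # or Backend.OV_CPU(), Backend.ORT_CUDA()
)

clip.set_output()
```

The point of the shared wrapper is that the model selection and the runtime selection are independent: the same filter call runs on any of the three backends listed above.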
See also:
- Runtimes
- Models
- Device-specific benchmarks