Demo video: demo.mp4
Interactive comparisons: please visit our Project Page to explore more results.
- Dec 04, 2025: This repository was created.
Real-world video restoration is plagued by complex degradations from motion coupled with dynamically varying exposure, a key challenge largely overlooked by prior works. We present FMA-Net++, a framework for joint video super-resolution and deblurring (VSRDB) that explicitly models this coupled effect.
FMA-Net++ adopts a sequence-level architecture built from Hierarchical Refinement with Bidirectional Propagation (HRBP) blocks for parallel, long-range temporal modeling. It incorporates an Exposure Time-aware Modulation (ETM) layer and an exposure-aware Flow-Guided Dynamic Filtering (FGDF) module to infer physically grounded degradation kernels. Extensive experiments on our proposed REDS-ME and REDS-RE benchmarks demonstrate that FMA-Net++ achieves state-of-the-art performance.
FMA-Net++ utilizes HRBP blocks for efficient temporal modeling and ETM layers to explicitly handle dynamic exposure changes.
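To give a rough idea of how per-frame exposure conditioning could enter such a network, below is a minimal conceptual sketch of a FiLM-style exposure time-aware modulation layer in PyTorch. This is not the FMA-Net++ implementation (the official code has not been released yet); the class name, MLP design, and tensor shapes are illustrative assumptions.

```python
# Conceptual sketch only: a FiLM-style layer that modulates frame features
# with a scalar exposure time. Names, shapes, and the MLP design are
# illustrative assumptions, not the released FMA-Net++ code.
import torch
import torch.nn as nn


class ExposureTimeModulation(nn.Module):
    """Scale-and-shift feature modulation conditioned on exposure time."""

    def __init__(self, channels: int, hidden: int = 64):
        super().__init__()
        # Small MLP maps a scalar exposure time to per-channel gamma/beta.
        self.mlp = nn.Sequential(
            nn.Linear(1, hidden),
            nn.ReLU(inplace=True),
            nn.Linear(hidden, 2 * channels),
        )

    def forward(self, feat: torch.Tensor, exposure: torch.Tensor) -> torch.Tensor:
        # feat: (B, C, H, W) frame features; exposure: (B, 1) exposure times.
        gamma, beta = self.mlp(exposure).chunk(2, dim=1)
        gamma = gamma.unsqueeze(-1).unsqueeze(-1)  # (B, C, 1, 1)
        beta = beta.unsqueeze(-1).unsqueeze(-1)
        return feat * (1.0 + gamma) + beta


if __name__ == "__main__":
    etm = ExposureTimeModulation(channels=32)
    feats = torch.randn(2, 32, 64, 64)
    exposure = torch.tensor([[1.0 / 60.0], [1.0 / 125.0]])
    print(etm(feats, exposure).shape)  # torch.Size([2, 32, 64, 64])
```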
The full code and pretrained models will be released soon.
- Inference code
- Pretrained models
- Training scripts
- Dataset generation scripts
For any questions, please contact us by email at rmsgurkjg@kaist.ac.kr.

