FMA-Net (CVPR 2024 Oral)

Geunhyuk Youk¹, Jihyong Oh²†, Munchurl Kim¹†
†Co-corresponding authors
¹Korea Advanced Institute of Science and Technology, South Korea
²Chung-Ang University, South Korea

This repository is the official PyTorch implementation of "FMA-Net: Flow-Guided Dynamic Filtering and Iterative Feature Refinement with Multi-Attention for Joint Video Super-Resolution and Deblurring".

📧 News

  • Apr 19, 2024: Code for FMA-Net (including the training and testing code and the pretrained model) is released 🔥
  • Apr 05, 2024: FMA-Net is selected for an ORAL presentation at CVPR 2024 (0.78% of 11,532 valid submissions)
  • Feb 27, 2024: FMA-Net is accepted to CVPR 2024 🎉
  • Jan 14, 2024: This repository is created

📝 TODO

  • Release FMA-Net code
  • Release pretrained FMA-Net model
  • Add data preprocessing scripts

Reference

If you find FMA-Net useful, please consider citing:

@inproceedings{youk2024fmanet,
  author    = {Geunhyuk Youk and Jihyong Oh and Munchurl Kim},
  title     = {FMA-Net: Flow-Guided Dynamic Filtering and Iterative Feature Refinement with Multi-Attention for Joint Video Super-Resolution and Deblurring},
  booktitle = {CVPR},
  year      = {2024},
}

Contents

  • Requirements
  • Data Preprocessing
  • Pretrained Model
  • Training
  • Testing
  • Results
  • License
  • Acknowledgement

Requirements

  • Python 3.9, PyTorch >= 1.9.1
  • Platforms: Ubuntu 22.04, CUDA 11.8

Data Preprocessing

  • Download REDS dataset
  • Generate REDS4 (the standard four-clip test split of REDS): run ./preprocessing/generate_reds4.py
  • Generate RAFT pseudo-GT optical flow: run ./preprocessing/generate_flow.py (or download the optical flow from here); a minimal flow-estimation sketch follows below
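
The repository generates pseudo-ground-truth optical flow with RAFT. As a rough illustration of that step, the following sketch estimates flow between two consecutive frames using torchvision's RAFT implementation; this is an assumption for illustration only, the frame paths are placeholders, and the actual generate_flow.py may use the original RAFT codebase with different pre- and post-processing.

# Hedged sketch: RAFT flow between two consecutive frames via torchvision.
# torchvision >= 0.13 is assumed; the frame paths below are placeholders.
import torch
from torchvision.io import read_image
from torchvision.models.optical_flow import raft_large, Raft_Large_Weights

device = "cuda" if torch.cuda.is_available() else "cpu"
weights = Raft_Large_Weights.DEFAULT
model = raft_large(weights=weights).eval().to(device)
transforms = weights.transforms()  # dtype conversion and normalization expected by RAFT

frame1 = read_image("frames/00000000.png").unsqueeze(0)  # (1, 3, H, W), uint8
frame2 = read_image("frames/00000001.png").unsqueeze(0)
frame1, frame2 = transforms(frame1, frame2)

with torch.no_grad():
    # RAFT returns a list of iteratively refined flow fields; keep the final one.
    flow = model(frame1.to(device), frame2.to(device))[-1]  # (1, 2, H, W)

torch.save(flow.cpu(), "flow_00000000.pt")  # store as pseudo-GT flow for training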

Pretrained Model

The pre-trained model can be downloaded from here.

  • FMA-Net_REDS.zip: trained on REDS dataset.
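
For quick experimentation outside main.py, the released checkpoint can be restored roughly as follows. This is a minimal sketch: the import path, class name, constructor arguments, and the "state_dict" key are all assumptions, so consult main.py and experiment.cfg for the exact usage.

# Hedged sketch: restoring the released weights. The module path, class name,
# and checkpoint layout ("state_dict" key) are assumptions, not the repo's API.
import torch
from model import FMANet  # hypothetical import path

ckpt = torch.load("FMA-Net_REDS.pth", map_location="cpu")
state = ckpt.get("state_dict", ckpt)  # tolerate either checkpoint layout
model = FMANet()  # constructor arguments omitted; see experiment.cfg
model.load_state_dict(state)
model.eval()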

Training

# download code
git clone https://github.com/KAIST-VICLab/FMA-Net
cd FMA-Net

# train FMA-Net on REDS dataset
python main.py --train --config_path experiment.cfg
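
main.py is driven by experiment.cfg via the --config_path flag. As a rough sketch of how such a config-driven entry point can be wired, the snippet below parses the flags above with argparse and reads an INI-style config with configparser; every section and key name here is hypothetical, so refer to the repository's experiment.cfg for the real options.

# Hedged sketch of a config-driven entry point; all cfg keys are hypothetical.
import argparse
import configparser

parser = argparse.ArgumentParser()
parser.add_argument("--train", action="store_true")
parser.add_argument("--test", action="store_true")
parser.add_argument("--config_path", type=str, default="experiment.cfg")
args = parser.parse_args()

cfg = configparser.ConfigParser()
cfg.read(args.config_path)

# Hypothetical keys for illustration only.
lr = cfg.getfloat("train", "learning_rate", fallback=1e-4)
batch_size = cfg.getint("train", "batch_size", fallback=8)

if args.train:
    print(f"training with lr={lr}, batch_size={batch_size}")
elif args.test:
    print("testing")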

Testing

# test FMA-Net on REDS dataset
python main.py --test --config_path experiment.cfg

# test on your own datasets
python main.py --test_custom --config_path experiment.cfg
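
After testing, a common sanity check is to compare the restored frames against the ground truth with PSNR. The sketch below computes RGB PSNR between two saved frames; the paths are placeholders, and the repository's own evaluation may differ (for example, it may compute metrics on the Y channel only).

# Hedged sketch: PSNR between a restored frame and its ground truth.
# File paths are placeholders for illustration.
import numpy as np
from PIL import Image

def psnr(pred: np.ndarray, gt: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between two uint8 images."""
    mse = np.mean((pred.astype(np.float64) - gt.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

pred = np.array(Image.open("results/00000000.png"))
gt = np.array(Image.open("gt/00000000.png"))
print(f"PSNR: {psnr(pred, gt):.2f} dB")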

Results

Please visit our project page and demo video for diverse visual results.

License

The source code, including the checkpoint, may be used freely for research and education only. Any commercial use requires formal permission from the principal investigator (Prof. Munchurl Kim, mkimee@kaist.ac.kr).

Acknowledgement

This work was supported by the Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT): No. 2021-0-00087, Development of high-quality conversion technology for SD/HD low-quality media, and No. RS2022-00144444, Deep Learning Based Visual Representational Learning and Rendering of Static and Dynamic Scenes.
