
This is the official PyTorch implementation of "[CVPR 2026] VideoFusion: A Spatio-Temporal Collaborative Network for Multi-modal Video Fusion"

Official implementation of "VideoFusion: A Spatio-Temporal Collaborative Network for Multi-modal Video Fusion"

Paper · arXiv · Project · Dataset

Linfeng Tang, Yeda Wang, Meiqi Gong, Zizhuo Li, Yuxin Deng, Xunpeng Yi, Chunyu Li, Hao Zhang, Han Xu, Jiayi Ma


πŸ”₯ News

  • [2026] VideoFusion has been accepted to CVPR 2026.
  • [2025] We release M3SVD, a large-scale aligned infrared-visible multi-modal video dataset for fusion & restoration.

πŸ”Ž Motivation

Most multi-modal fusion methods are designed for static images. Applying them frame-by-frame to videos often leads to:

  • Temporal flickering (inconsistent fusion across frames)
  • Under-utilization of motion/temporal cues

🧠 Architecture

The overall framework of our spatio-temporal collaborative video fusion network.


πŸ“¦ M3SVD Dataset

  • 220 temporally synchronized & spatially registered IR-VI videos
  • 153,797 frames total
  • Registered resolution 640×480, 30 FPS
  • Diverse conditions: daytime / nighttime / challenging scenarios (e.g., occlusion, disguise, low illumination, overexposure)

Data Processing Workflow

Dataset Comparison (vs. prior works)

πŸ“Œ Place dataset files following the dataloader requirement (see Dataset Preparation section).
πŸ”— Download links will be updated: (TBD)


βš™οΈ Installation

1) Clone

git clone git@github.com:Linfeng-Tang/VideoFusion.git
cd VideoFusion

2) Create Environment

conda create -n videofusion python=3.9 -y
conda activate videofusion
pip install -r requirements.txt

πŸš€ Quick Start (Testing)

Prepare

  1. Download pretrained weights: (TBD)
  2. Put weights into:
./pretrained_weights/

Run

python test.py -opt=./options/test/test_VideoFusion.yml

πŸš‚ Training

1) Dataset Preparation

Download M3SVD and place it as:

<your_m3svd_root>/
  ├── train/
  │   ├── ir/seqxxx/*.png
  │   └── vi/seqxxx/*.png
  ├── val/
  │   ├── ir/...
  │   └── vi/...
  └── test/
      ├── ir/...
      └── vi/...

Then update options/train/train_VideoFusion.yml with the correct dataset root paths.
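Before training, it can save time to confirm that every IR sequence has a matching VI sequence with the same number of frames. The helper below is a minimal sketch for the layout shown above (it is not part of the repository; the function name and `root` argument are hypothetical):

```python
from pathlib import Path

def check_m3svd_split(root, split="train"):
    """Sanity-check one split of an M3SVD-style layout (hypothetical
    helper, not part of this repository).

    Returns a list of problem descriptions; an empty list means every
    ir/seqxxx directory has a vi/seqxxx twin with the same *.png count.
    """
    ir_root = Path(root) / split / "ir"
    vi_root = Path(root) / split / "vi"
    problems = []
    for ir_seq in sorted(p for p in ir_root.iterdir() if p.is_dir()):
        vi_seq = vi_root / ir_seq.name
        if not vi_seq.is_dir():
            problems.append(f"{ir_seq.name}: missing VI sequence")
            continue
        n_ir = len(list(ir_seq.glob("*.png")))
        n_vi = len(list(vi_seq.glob("*.png")))
        if n_ir != n_vi:
            problems.append(f"{ir_seq.name}: {n_ir} IR vs {n_vi} VI frames")
    return problems
```

Running it on your dataset root (e.g. `check_m3svd_split("/data/M3SVD", "train")`) should return an empty list before you launch training.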

2) DDP Training

CUDA_VISIBLE_DEVICES=0,1,2,3 torchrun --nproc_per_node=4 --master_port=7542 \
  train.py -opt ./options/train/train_VideoFusion.yml --launcher pytorch
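For context on the command above: `torchrun` spawns one process per GPU (`--nproc_per_node=4`) and exports `RANK`, `LOCAL_RANK`, `WORLD_SIZE`, `MASTER_ADDR`, and `MASTER_PORT` into each worker's environment; the `--launcher pytorch` path then initializes `torch.distributed` from those variables. The stdlib-only sketch below just illustrates what each worker sees (it is not repository code):

```python
import os

def launcher_env():
    """Illustrative sketch: collect the variables torchrun exports to
    each worker process. Defaults mimic a single-process run."""
    return {
        "rank": int(os.environ.get("RANK", 0)),            # global process index
        "local_rank": int(os.environ.get("LOCAL_RANK", 0)), # GPU index on this node
        "world_size": int(os.environ.get("WORLD_SIZE", 1)), # total process count
        "master_addr": os.environ.get("MASTER_ADDR", "127.0.0.1"),
        "master_port": os.environ.get("MASTER_PORT", "7542"),
    }
```

With the 4-GPU command above, each of the four workers would see `world_size == 4` and a distinct `local_rank` in `0..3`.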

πŸ–ΌοΈ Qualitative Results

Fusion Quality (examples)

Qualitative comparison results on M3SVD and HDO datasets under degraded scenarios.

Quantitative comparison on the M3SVD and HDO datasets under degraded scenarios. Each video in M3SVD and HDO contains 200 and 150 frames, respectively. The best and second-best results are highlighted in Red and Purple, respectively.

Restoration / Robustness under Degradations

⏱️ Temporal Consistency

VideoFusion emphasizes temporal coherence. We provide temporal visualization examples:

Temporal variation of metrics on sequences.

Visual comparison of temporal consistency in source and fusion videos. Following DSTNet, we visualize pixels along selected columns (dotted line) and measure average brightness variation across frames.

πŸ“ˆ Ablation & Analysis

Ablation Study


🎯 Downstream / Tracking Demo


πŸ“ Citation

If you find this work useful, please cite:

@inproceedings{Tang2026VideoFusion,
  title     = {VideoFusion: A Spatio-Temporal Collaborative Network for Multi-modal Video Fusion and Restoration},
  author    = {Tang, Linfeng and Wang, Yeda and Gong, Meiqi and Li, Zizhuo and Deng, Yuxin and Yi, Xunpeng and Li, Chunyu and Zhang, Hao and Xu, Han and Ma, Jiayi},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year      = {2026}
}

❀️ Acknowledgments

This repository is built upon the excellent open-source framework BasicSR. We sincerely thank the authors for their great work and for making their code publicly available.

🀝 Contact

If you have any questions, please do not hesitate to contact linfeng0419@gmail.com.

