Official implementation of "VideoFusion: A Spatio-Temporal Collaborative Network for Multi-modal Video Fusion"
Linfeng Tang, Yeda Wang, Meiqi Gong, Zizhuo Li, Yuxin Deng, Xunpeng Yi, Chunyu Li, Hao Zhang, Han Xu, Jiayi Ma
- [2026] VideoFusion has been accepted to CVPR 2026.
- [2025] We release M3SVD, a large-scale aligned infrared-visible multi-modal video dataset for fusion & restoration.
Most multi-modal fusion methods are designed for static images. Applying them frame-by-frame to videos often leads to:
- Temporal flickering (inconsistent fusion across frames)
- Under-utilization of motion/temporal cues
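The flickering problem above can be made concrete with a toy metric: the average frame-to-frame change in mean brightness of a fused video. This is an illustrative sketch with an assumed metric, not the consistency measure used in the paper:

```python
import numpy as np

def temporal_flicker(frames):
    """Mean absolute change in average brightness between consecutive frames.

    frames: sequence of H x W grayscale arrays. A temporally stable video
    scores 0; frame-by-frame fusion of a video typically scores higher
    because each frame is fused independently. (Illustrative metric only.)
    """
    means = np.array([f.mean() for f in frames], dtype=np.float64)
    return float(np.abs(np.diff(means)).mean())

# Synthetic example: a static video vs. one with per-frame brightness jitter.
rng = np.random.default_rng(0)
static = [np.full((8, 8), 0.5) for _ in range(10)]
jitter = [np.full((8, 8), 0.5 + 0.1 * rng.standard_normal()) for _ in range(10)]
assert temporal_flicker(static) == 0.0
assert temporal_flicker(jitter) > temporal_flicker(static)
```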
The overall framework of our spatio-temporal collaborative video fusion network.
- 220 temporally synchronized & spatially registered IR-VI videos
- 153,797 frames total
- Registered resolution 640×480, 30 FPS
- Diverse conditions: daytime / nighttime / challenging scenarios (e.g., occlusion, disguise, low illumination, overexposure)
Place dataset files following the dataloader requirement (see Dataset Preparation section).
Download links will be updated: (TBD)
```shell
git clone git@github.com:Linfeng-Tang/VideoFusion.git
cd VideoFusion
conda create -n videofusion python=3.9 -y
conda activate videofusion
pip install -r requirements.txt
```

- Download pretrained weights: (TBD)
- Put weights into: `./pretrained_weights/`
```shell
python test.py -opt=./options/test/test_VideoFusion.yml
```

Download M3SVD and place it as:
```
<your_m3svd_root>/
├── train/
│   ├── ir/seqxxx/*.png
│   └── vi/seqxxx/*.png
├── val/
│   ├── ir/...
│   └── vi/...
└── test/
    ├── ir/...
    └── vi/...
```
Then update options/train/train_VideoFusion.yml with the correct dataset root paths.
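To sanity-check that a local copy follows this layout, the IR-VI frame pairs can be enumerated with a small script. This is a hypothetical helper assuming matching sequence and frame names in both modalities, not the released dataloader:

```python
import tempfile
from pathlib import Path

def list_ir_vi_pairs(root, split="train"):
    """Collect aligned (ir, vi) frame-path pairs under the layout above.

    Assumes <root>/<split>/{ir,vi}/<sequence>/<frame>.png with identical
    sequence and frame names in both modalities (hypothetical helper).
    """
    ir_root = Path(root) / split / "ir"
    pairs = []
    for ir_frame in sorted(ir_root.glob("*/*.png")):
        vi_frame = Path(root) / split / "vi" / ir_frame.parent.name / ir_frame.name
        if not vi_frame.exists():
            raise FileNotFoundError(f"missing visible counterpart: {vi_frame}")
        pairs.append((ir_frame, vi_frame))
    return pairs

# Quick self-check on a throwaway directory mimicking the layout.
root = Path(tempfile.mkdtemp())
for modality in ("ir", "vi"):
    seq = root / "train" / modality / "seq001"
    seq.mkdir(parents=True)
    (seq / "000001.png").touch()
pairs = list_ir_vi_pairs(root)
assert len(pairs) == 1 and pairs[0][0].name == pairs[0][1].name
```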
```shell
CUDA_VISIBLE_DEVICES=0,1,2,3 torchrun --nproc_per_node=4 --master_port=7542 \
    train.py -opt ./options/train/train_VideoFusion.yml --launcher pytorch
```

Qualitative comparison results on M3SVD and HDO datasets under degraded scenarios.
Quantitative comparison on the M3SVD and HDO datasets under degraded scenarios. Each video in M3SVD and HDO contains 200 and 150 frames, respectively. The best and second-best results are highlighted in Red and Purple, respectively.
VideoFusion emphasizes temporal coherence. We provide temporal visualization examples:
Temporal variation of metrics on sequences.
Visual comparison of temporal consistency in source and fusion videos. Following DSTNet, we visualize pixels along selected columns (dotted line) and measure average brightness variation across frames.
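The column-based visualization can be sketched in a few lines: fix one pixel column, stack it across all frames into a (T, H) image, and measure how its mean brightness drifts over time. This is a minimal sketch of the idea under assumed array shapes, not the exact DSTNet procedure:

```python
import numpy as np

def column_slice(frames, col):
    """Stack one pixel column from every frame into a (T, H) image.

    In this style of visualization, smooth horizontal streaks indicate
    temporal consistency, while jagged streaks reveal flicker.
    frames: (T, H, W) grayscale array; col: the column index marked by
    the dotted line in the figure.
    """
    return frames[:, :, col]

def mean_brightness_variation(slice_img):
    """Average absolute frame-to-frame change of the column's mean brightness."""
    means = slice_img.mean(axis=1)
    return float(np.abs(np.diff(means)).mean())

# Synthetic check: a temporally static video has zero brightness variation.
stable = np.tile(np.linspace(0, 1, 16)[None, :, None], (8, 1, 4))  # (T=8, H=16, W=4)
s = column_slice(stable, col=2)
assert s.shape == (8, 16)
assert mean_brightness_variation(s) == 0.0
```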
Ablation & Analysis

If you find this work useful, please cite:
```bibtex
@inproceedings{Tang2026VideoFusion,
  title     = {VideoFusion: A Spatio-Temporal Collaborative Network for Multi-modal Video Fusion and Restoration},
  author    = {Tang, Linfeng and Wang, Yeda and Gong, Meiqi and Li, Zizhuo and Deng, Yuxin and Yi, Xunpeng and Li, Chunyu and Zhang, Hao and Xu, Han and Ma, Jiayi},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year      = {2026}
}
```

This repository is built upon the excellent open-source framework BasicSR. We sincerely thank the authors for their great work and for making their code publicly available.
If you have any questions, please do not hesitate to contact linfeng0419@gmail.com.