FAPE-IR: Frequency-Aware Planning and Execution Framework for All-in-One Image Restoration [CVPR 2026 Accept]
[Paper (arXiv)] Jingren Liu*, Shuning Xu*, Qirui Yang*, Yun Wang, Xiangyu Chen✉, Zhong Ji✉
*Equal contribution ✉Corresponding author
This repo is an early exploration of unified image restoration (unified understanding & generation). We are actively investigating more lightweight designs and alternatives beyond the MLLM + Diffusion paradigm, and will continue to maintain and update this repo as the work progresses.
🚧 Coming Soon (Open-sourcing in progress).
We are preparing:
- clean and reproducible code (training / inference / evaluation)
- pretrained checkpoints
- documentation and scripts
Please star this repo to get updates.
- ✅ Nov 25, 2025. Released the arXiv paper.
- 🚧 TBD. Release inference code.
- 🚧 TBD. Release training code & configs.
- 🚧 TBD. Release pretrained checkpoints & model zoo.
- 🚧 TBD. Release evaluation scripts and example results.
- 🚧 TBD. Release documentation & scripts.
| Item | Link |
|---|---|
| Pretrained checkpoints | TBD |
| Testset (GT/LQ) | TBD |
| Visual results | TBD |
| Compared methods | TBD |
```bash
conda create -n fapeir python=3.11 -y
conda activate fapeir
pip install -r requirements.txt
```

Single image:
```bash
python inference.py --input ./examples/0001.png --output ./results
```

Folder:
```bash
python inference.py --input ./examples --output ./results
```

Evaluation:
```bash
python eval.py --inp_imgs ./results --gt_imgs ./dataset/GT --save_dir ./logs
```

Training:
```bash
bash train.sh
```

- Quantitative Results
- Qualitative Results
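For full-reference evaluation against ground truth, PSNR is the standard quantitative metric. As a minimal sketch of what such a comparison computes (NumPy only; this is an illustration, not the repo's unreleased `eval.py`):

```python
import numpy as np

def psnr(gt: np.ndarray, pred: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio between a ground-truth and a restored image."""
    mse = np.mean((gt.astype(np.float64) - pred.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

# Example: a uniform error of 16 gray levels on an 8-bit image
gt = np.zeros((64, 64), dtype=np.uint8)
pred = np.full((64, 64), 16, dtype=np.uint8)
print(round(psnr(gt, pred), 2))  # ~24.05 dB
```

The released evaluation scripts will likely also report SSIM and possibly LPIPS, which this sketch does not cover.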
If you use this work, please cite:
@article{liu2025fape,
title={FAPE-IR: Frequency-Aware Planning and Execution Framework for All-in-One Image Restoration},
author={Liu, Jingren and Xu, Shuning and Yang, Qirui and Wang, Yun and Chen, Xiangyu and Ji, Zhong},
journal={arXiv preprint arXiv:2511.14099},
year={2025}
}

We thank all collaborators and colleagues for their helpful discussions and support. We especially thank Dr. Xiangyu Chen and Prof. Zhong Ji for their guidance and for their revisions of this work.
If you have any questions, feel free to reach out:
- Email: jrl0219@tju.edu.cn


