
Diffusion-based Blind Text Image Super-Resolution (CVPR2024)   

Yuzhe Zhang¹ | Jiawei Zhang² | Hao Li² | Zhouxia Wang³ | Luwei Hou² | Dongqing Zou² | Liheng Bian¹

¹Beijing Institute of Technology, ²SenseTime Research, ³The University of Hong Kong

📢 News

  • 2024.05 🚀 Inference code has been released, enjoy.
  • 2024.04 🚀 Official repository of DiffTSR is created.
  • 2024.03 🌟 The implementation code will be released shortly.
  • 2024.03 ❤️ Accepted by CVPR2024.

🔥 TODO

  • Attach the detailed implementation and supplementary material.
  • Add inference code and checkpoints for blind text image SR.
  • Add training code and scripts.

👁️ Gallery

🛠️ Try

Dependencies and Installation

  • PyTorch >= 1.7.0
  • CUDA >= 11.0
# git clone this repository
git clone https://github.com/YuzheZhang-1999/DiffTSR
cd DiffTSR

# create new anaconda env
conda env create -f environment.yaml
conda activate DiffTSR
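
After activating the environment, a quick sanity check (a generic PyTorch snippet, not part of this repository) can confirm that the installed versions meet the requirements listed above:

# verify the PyTorch / CUDA versions required above
import torch
print(torch.__version__)          # expect >= 1.7.0
print(torch.version.cuda)         # expect >= 11.0 (None for CPU-only builds)
print(torch.cuda.is_available())  # should be True for GPU inference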

Download the checkpoint

Please download the checkpoint file from the URL below to the ./ckpt/ folder.
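
For reference, a minimal shell sketch of where the file is expected to land; the downloaded file name below is only a placeholder:

# place the downloaded checkpoint into ./ckpt/
mkdir -p ./ckpt
mv /path/to/downloaded_checkpoint.ckpt ./ckpt/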

Inference

python inference_DiffTSR.py
# check the code for more detail

🔎 Overview of DiffTSR

(Figure: overall framework of DiffTSR.)

Abstract

Recovering degraded low-resolution text images is challenging, especially for Chinese text images with complex strokes and severe degradation in real-world scenarios. Ensuring both text fidelity and style realness is crucial for high-quality text image super-resolution. Recently, diffusion models have achieved great success in natural image synthesis and restoration due to their powerful data distribution modeling abilities and data generation capabilities. In this work, we propose an Image Diffusion Model (IDM) to restore text images with realistic styles. Diffusion models are not only suitable for modeling realistic image distributions but also appropriate for learning text distributions. Since text priors are important to guarantee the correctness of the restored text structure according to existing arts, we also propose a Text Diffusion Model (TDM) for text recognition, which can guide IDM to generate text images with correct structures. We further propose a Mixture of Multi-modality module (MoM) to make these two diffusion models cooperate with each other at every diffusion step. Extensive experiments on synthetic and real-world datasets demonstrate that our Diffusion-based Blind Text Image Super-Resolution (DiffTSR) can restore text images with more accurate text structures and more realistic appearances simultaneously.
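
For intuition only, the sketch below shows one way an image diffusion model and a text diffusion model could exchange information through a MoM-style mixer at every reverse-diffusion step. It is a toy illustration, not the released DiffTSR implementation: every module, tensor shape, and update rule here is an assumption made for demonstration.

# Conceptual sketch only -- NOT the released DiffTSR code. Toy stand-in modules
# illustrate how an image denoiser (IDM), a text denoiser (TDM), and a
# multi-modality mixer (MoM) could cooperate at every reverse-diffusion step.
import torch
import torch.nn as nn

T = 50                       # number of diffusion steps (assumed)
B, C, H, W = 1, 3, 32, 128   # toy low-resolution text image shape (assumed)
L, V = 24, 64                # toy text length and vocabulary size (assumed)

class ToyIDM(nn.Module):
    # Stand-in image denoiser: predicts noise for x_t given the LR image and a text condition.
    def __init__(self):
        super().__init__()
        self.net = nn.Conv2d(2 * C, C, kernel_size=3, padding=1)
    def forward(self, x_t, lr_img, text_cond):
        # a real IDM would inject text_cond via cross-attention inside a U-Net
        return self.net(torch.cat([x_t, lr_img], dim=1)) + 0.0 * text_cond.mean()

class ToyTDM(nn.Module):
    # Stand-in text denoiser: refines the noisy text logits given an image condition.
    def __init__(self):
        super().__init__()
        self.net = nn.Linear(V, V)
    def forward(self, c_t, img_cond):
        return self.net(c_t) + 0.0 * img_cond.mean()

class ToyMoM(nn.Module):
    # Stand-in mixer: current image estimate -> condition for TDM,
    # current text estimate -> condition for IDM.
    def __init__(self):
        super().__init__()
        self.img_to_text = nn.Linear(C * H * W, L * V)
        self.text_to_img = nn.Linear(L * V, 16)
    def forward(self, x_t, c_t):
        img_cond = self.img_to_text(x_t.flatten(1)).view(-1, L, V)
        text_cond = self.text_to_img(c_t.flatten(1))
        return img_cond, text_cond

idm, tdm, mom = ToyIDM(), ToyTDM(), ToyMoM()
lr_img = torch.rand(B, C, H, W)    # degraded low-resolution input
x_t = torch.randn(B, C, H, W)      # image latent, starts from pure noise
c_t = torch.randn(B, L, V)         # noisy text representation

with torch.no_grad():
    for t in reversed(range(T)):
        img_cond, text_cond = mom(x_t, c_t)           # 1) mix the two modalities
        eps = idm(x_t, lr_img, text_cond)             # 2) image step guided by text
        x_t = x_t - eps / T                           #    toy update; real samplers use DDPM/DDIM rules
        c_t = c_t + (tdm(c_t, img_cond) - c_t) / T    # 3) text step guided by image

print("image estimate:", tuple(x_t.shape), "| text estimate:", tuple(c_t.shape))

In the actual method the text information is a character-sequence prior rather than toy logits, and the updates follow proper diffusion schedules; see the paper and inference_DiffTSR.py for the real pipeline.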

Visual performance comparison overview

Blind text image super-resolution results of different methods on synthetic and real-world text images. Our method restores text images with high text fidelity and style realness under complex strokes, severe degradation, and various text styles.

📷 More Visual Results


🎓Citations

@inproceedings{zhang2024diffusion,
  title={Diffusion-based Blind Text Image Super-Resolution},
  author={Zhang, Yuzhe and Zhang, Jiawei and Li, Hao and Wang, Zhouxia and Hou, Luwei and Zou, Dongqing and Bian, Liheng},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={25827--25836},
  year={2024}
}

🎫 License

This project is released under the Apache 2.0 license.

Acknowledgement

Thanks to these awesome works:

