
Learning Enhancement From Degradation: A Diffusion Model For Fundus Image Enhancement

The official implementation of the paper Learning Enhancement From Degradation: A Diffusion Model For Fundus Image Enhancement

Highlights

  • Continuous and reliable enhancement

  • Flexible and easily integrated into any SOTA method

  • Robust to out-of-distribution (OOD) low-quality images

  • SOTA performance

Start LED with a few lines

from led.pipelines.led_pipeline import LEDPipeline
led = LEDPipeline()
led.cuda()
led_enhancement = led('./doc/example.jpeg')[0]
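Once you have an enhanced result, you will likely want to write it to disk. A minimal sketch, assuming the pipeline returns images as HWC uint8 NumPy arrays (this return type is an assumption, not confirmed by the README; `save_enhancement` is an illustrative helper, not part of the LED API):

```python
import numpy as np
from PIL import Image

def save_enhancement(array: np.ndarray, path: str) -> None:
    """Save an HWC uint8 image array to disk.

    Assumes the enhancement is already a uint8 NumPy array;
    adapt the conversion if the pipeline returns tensors instead.
    """
    Image.fromarray(array).save(path)
```

Usage would then be `save_enhancement(led_enhancement, 'enhanced.png')`.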

Furthermore, you can combine LED with any existing SOTA method as an external backend. Currently supported backends include:

Try the I-SECRET backend with just one line

led = LEDPipeline(backend='I-SECRET', num_cond_steps=200)

For more details, please read example.ipynb. Feel free to open a pull request to add your own fundus enhancement method as a backend.

Catalog

  • Training guidance
  • Support for ArcNet and SCRNet
  • Add related code for data-driven degradation
  • Inference pipeline

Train

To train your own LED, you need to update a few lines in configs/train_led.yaml

    train_good_image_dir: # update to training hq images directory
    train_bad_image_dir: # update to training lq images directory
    train_degraded_image_dir: # update to training degraded images directory
    val_good_image_dir:  # update to validation hq images directory
    val_bad_image_dir: # update to validation lq images directory

Please note that train_degraded_image_dir should contain high-quality images degraded by a data-driven method. We will include the related code in a future release. In the meantime, you can use existing repositories such as CUT or CycleGAN to generate the degraded images.
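Before launching a run, it can save time to verify that every directory referenced in the config actually exists and contains files. A small sketch using only the standard library; the key names mirror the config above, but `check_image_dirs` itself is an illustrative helper, not part of the LED codebase:

```python
from pathlib import Path

def check_image_dirs(dirs: dict[str, str]) -> list[str]:
    """Return the config keys whose directory is missing or empty."""
    problems = []
    for key, directory in dirs.items():
        path = Path(directory)
        # A usable image directory must exist and hold at least one entry.
        if not path.is_dir() or not any(path.iterdir()):
            problems.append(key)
    return problems
```

Running this on the five `*_image_dir` values from train_led.yaml before `accelerate launch` catches path typos early instead of mid-run.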

To train LED, simply run

accelerate launch  --mixed_precision fp16 --gpu_ids 0 --num_processes 1 script/train.py 

Using more GPUs (e.g., `--gpu_ids 0,1 --num_processes 2`) will significantly speed up training.

Acknowledgement

Thanks to PCENet, ArcNet, and SCRNet for sharing their powerful pre-trained weights! Thanks to diffusers for sharing their code.

Citation

If this work is helpful for your research, please consider citing the following BibTeX entry.

@article{cheng2023learning,
  title={Learning Enhancement From Degradation: A Diffusion Model For Fundus Image Enhancement},
  author={Cheng, Pujin and Lin, Li and Huang, Yijin and He, Huaqing and Luo, Wenhan and Tang, Xiaoying},
  journal={arXiv preprint arXiv:2303.04603},
  year={2023}
} 

License

This repository is released under the Apache 2.0 license as found in the LICENSE file.
