ADD

Generating Progressive Images from Pathological Transitions via Diffusion Model
Zeyu Liu, Tianyi Zhang, Yufang He, Yu Zhao, Yunlu Feng, Guanglei Zhang
Arxiv, GitHub

🗃️ Overview

Deep learning is widely applied in computer-aided pathological diagnosis, alleviating pathologists' workload and providing timely clinical analysis. However, most models require large-scale annotated data for training, which is challenging given the scarcity of sampling and annotation in pathological images. Recent studies suggest that rapidly developing generative models can produce additional training samples, yet with limited training data they also struggle with generation diversity and fail to produce effective samples. Inspired by the pathological transitions between different stages, we propose an adaptive depth-controlled diffusion (ADD) network to generate pathological progressive images for effective data augmentation. This novel approach is rooted in domain migration, where a hybrid attention strategy guides the bidirectional diffusion, blending local and global attention priorities. With feature measuring, the adaptive depth-controlled strategy ensures the migration and maintains locational similarity in simulating the pathological feature transition. Based on a tiny training set (≤ 500 samples), ADD yields cross-domain progressive images with corresponding soft labels. Experiments on two datasets show significant improvements in generation diversity, and the effectiveness of the generated progressive samples is highlighted in downstream classifications.
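To illustrate how progressive samples and their soft labels could enter a downstream classifier, here is a minimal sketch. It is an assumption for clarity, not the authors' training code: a sample generated at relative migration depth `lam` is assigned a label blended between the source class and the target class.

```python
# Hypothetical sketch: soft-label supervision for a generated intermediate image.
# `lam` (0 = pure source domain, 1 = pure target domain) is assumed to come from
# the diffusion depth at which the sample was generated.

import torch
import torch.nn.functional as F

num_classes = 2                     # e.g., source stage (0) vs. target stage (1)
source_class, target_class = 0, 1

def soft_label(lam: float) -> torch.Tensor:
    """Blend the one-hot labels of the two endpoint domains by the migration depth."""
    y = torch.zeros(num_classes)
    y[source_class] = 1.0 - lam
    y[target_class] = lam
    return y

# Logits from any classifier for one generated intermediate image (random stand-in here)
logits = torch.randn(1, num_classes)
target = soft_label(lam=0.3).unsqueeze(0)      # sample 30% of the way toward the target domain
loss = F.cross_entropy(logits, target)         # F.cross_entropy accepts probabilistic targets
print(loss.item())
```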

🗃️ Usage

Generating a sequence of intermediate images between a source domain and a target domain:

  1. Train a diffusion model on your data based on guided-diffusion.
  2. Assign the path of the trained models, then generate the intermediate images (the total diffusion process includes 1000 steps, from which we obtain 10 intermediate images):

```
python scripts/frequency_generating_m_samples.py --diffusion_steps=1000 --amount=10
```
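For reference, below is a minimal, self-contained sketch of the depth-controlled translation idea (in the spirit of DDIB-style bidirectional diffusion). The noise predictors `source_eps` and `target_eps` are stand-ins; in practice they would be the trained guided-diffusion models loaded by `scripts/frequency_generating_m_samples.py`, and the deterministic DDIM updates here are only an illustration of the concept, not the exact ADD pipeline.

```python
# Conceptual sketch: translate a source-domain image toward the target domain
# at increasing diffusion depths, attaching a soft label to each intermediate.

import torch

T = 1000                                   # total diffusion steps (matches --diffusion_steps=1000)
AMOUNT = 10                                # number of intermediate images (matches --amount=10)
betas = torch.linspace(1e-4, 0.02, T)      # standard linear beta schedule
alpha_bar = torch.cumprod(1.0 - betas, 0)  # cumulative products of (1 - beta)


def dummy_eps(x, t):
    """Placeholder noise predictor; replace with a trained diffusion U-Net."""
    return torch.zeros_like(x)


source_eps, target_eps = dummy_eps, dummy_eps


def ddim_step(x, eps, t_from, t_to):
    """One deterministic DDIM move from step t_from to step t_to (eta = 0)."""
    a_from, a_to = alpha_bar[t_from], alpha_bar[t_to]
    x0 = (x - (1 - a_from).sqrt() * eps) / a_from.sqrt()   # predicted clean image
    return a_to.sqrt() * x0 + (1 - a_to).sqrt() * eps


def translate_at_depth(x_src, depth):
    """Encode x_src with the source model up to `depth`, then decode with the target model."""
    x = x_src.clone()
    for t in range(depth - 1):                              # DDIM inversion (source domain)
        x = ddim_step(x, source_eps(x, t), t, t + 1)
    for t in range(depth - 1, 0, -1):                       # DDIM sampling (target domain)
        x = ddim_step(x, target_eps(x, t), t, t - 1)
    return x


x_src = torch.randn(1, 3, 64, 64)                           # stand-in source-domain image
for k in range(1, AMOUNT + 1):
    depth = k * T // AMOUNT                                 # diffusion depth for this intermediate
    x_k = translate_at_depth(x_src, depth)
    soft_label = depth / T                                  # deeper migration -> closer to target domain
    print(f"intermediate {k}: depth={depth}, soft label={soft_label:.1f}")
```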

🗃️ Acknowledgements

This implementation is based on / inspired by:

- openai/guided-diffusion
- openai/improved-diffusion
- suxuann/ddib

🗃️ Materials

The comparison methods are listed here:

| Model | Base method | Paper | Code |
| --- | --- | --- | --- |
| ProGAN | GAN | Progressive Growing of GANs for Improved Quality, Stability, and Variation | GitHub |
| IDDPM | Diffusion | Improved Denoising Diffusion Probabilistic Models | GitHub |
| LoFGAN | GAN | LoFGAN: Fusing Local Representations for Few-shot Image Generation | GitHub |
| MixDL | GAN | Few-shot Image Generation with Mixup-based Distance Learning | GitHub |

About

Adaptive depth-controlled bidirectional diffusion
