SoloChe/AnoFPDM


PyTorch implementation of AnoFPDM

AnoFPDM: Anomaly Segmentation with Forward Process of Diffusion Models for Brain MRI

Yiming Che¹,², Fazle Rafsani¹,², Jay Shah¹,², Md Mahfuzur Rahman Siddiquee¹,², Teresa Wu¹,²

¹ASU-Mayo Center for Innovative Imaging, ²Arizona State University

Abstract

Weakly-supervised diffusion models (DMs) for anomaly segmentation, which leverage image-level labels, have attracted significant attention for their superior performance compared to unsupervised methods. This setting eliminates the need for pixel-level labels in training, offering a more cost-effective alternative to supervised methods. However, existing methods are not fully weakly-supervised, because they rely heavily on costly pixel-level labels for hyperparameter tuning at inference. To tackle this challenge, we introduce Anomaly Segmentation with Forward Process of Diffusion Models (AnoFPDM), a fully weakly-supervised framework that operates without pixel-level labels. Using the unguided forward process as a reference for the guided forward process, we select hyperparameters such as the noise scale, the segmentation threshold, and the guidance strength. We aggregate anomaly maps from the guided forward process, enhancing the signal strength of anomalous regions. Remarkably, our proposed method outperforms recent state-of-the-art weakly-supervised approaches, even without utilizing pixel-level labels.

Dataset

We use the BraTS2021 brain MRI dataset from Kaggle. The data is preprocessed by preprocess.py and saved in the following structure:

preprocessed_data/
├── npy_train/
│   └── patient_BraTS2021_id/
│       └── xxx.npy
├── npy_val/
│   └── patient_BraTS2021_id/
│       └── xxx.npy
└── npy_test/
    └── patient_BraTS2021_id/
        └── xxx.npy
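As a concrete illustration of the layout, the sketch below builds a tiny dummy copy of the tree and then walks it the way a dataset loader might. The patient ID, file name, and array shape are illustrative assumptions, not the actual output of preprocess.py:

```python
import tempfile
from pathlib import Path

import numpy as np

# Build a tiny dummy copy of the layout above for demonstration only;
# the shape (4, 128, 128) is illustrative, not the repo's real output.
root = Path(tempfile.mkdtemp()) / "preprocessed_data"
for split in ("npy_train", "npy_val", "npy_test"):
    patient_dir = root / split / "patient_BraTS2021_00000"
    patient_dir.mkdir(parents=True)
    np.save(patient_dir / "slice_000.npy",
            np.zeros((4, 128, 128), dtype=np.float32))

# Iterate the structure split by split, patient by patient.
for split in ("npy_train", "npy_val", "npy_test"):
    for npy_file in sorted((root / split).glob("patient_*/*.npy")):
        arr = np.load(npy_file)
        print(split, npy_file.name, arr.shape)
```

The `glob("patient_*/*.npy")` pattern picks up every slice file under every patient directory, so new patients can be added without changing the loading code.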

Usage

Training is optimized for multi-GPU execution using torchrun, while evaluation runs on a single GPU. The specific steps are:

  1. Preprocess the BraTS2021 data with preprocess.py.
  2. Train a model with ./scripts/train.py and the corresponding config file.
  3. Segment anomalies with ./scripts/translation*.py and the corresponding config file.
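A typical end-to-end invocation of the steps above might look like the following. The torchrun flags, the GPU count, and the absence of extra script arguments are assumptions; in practice, use the config files listed in the table in the next section:

```shell
# Step 1: preprocess the BraTS2021 data (assumes no extra flags are needed)
python preprocess.py

# Step 2: multi-GPU training via torchrun (4 GPUs assumed; pass the
# arguments from your chosen config file)
torchrun --nproc_per_node=4 ./scripts/train.py

# Step 3: single-GPU anomaly segmentation; replace the wildcard with
# the concrete translation script matching your config
python ./scripts/translation*.py
```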

Models

We provide training scripts for the unguided DM, the classifier-guided DM, and the classifier-free DM. Run a training script with sbatch under the Slurm job scheduler, or with bash on a local machine (in that case, comment out any Slurm-related setup).

| Method | Training config | Evaluation config |
| --- | --- | --- |
| Unguided DM | ./config/run_train_brats_anoddpm.sh | ./config/run_translation_anoddpm.sh |
| Classifier-guided DM | ./config/run_train_brats_clf_guided.sh, ./config/run_train_brats_clf.sh | ./config/run_translation_clf_guided.sh |
| Classifier-free DM | ./config/run_train_brats_clf_free_guided.sh | ./config/run_translation_fpdm.sh (DDIM forward in paper), ./config/run_translation_ddib.sh (DDIB in paper) |
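For example, classifier-free DM training (config path taken from the table above) can be launched either way:

```shell
# On a Slurm cluster
sbatch ./config/run_train_brats_clf_free_guided.sh

# On a local machine; comment out the Slurm directives inside the
# script first
bash ./config/run_train_brats_clf_free_guided.sh
```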
